Oct  1 07:17:21 np0005464214 kernel: Linux version 5.14.0-617.el9.x86_64 (mockbuild@x86-05.stream.rdu2.redhat.com) (gcc (GCC) 11.5.0 20240719 (Red Hat 11.5.0-11), GNU ld version 2.35.2-67.el9) #1 SMP PREEMPT_DYNAMIC Mon Sep 15 21:46:13 UTC 2025
Oct  1 07:17:21 np0005464214 kernel: The list of certified hardware and cloud instances for Red Hat Enterprise Linux 9 can be viewed at the Red Hat Ecosystem Catalog, https://catalog.redhat.com.
Oct  1 07:17:21 np0005464214 kernel: Command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-617.el9.x86_64 root=UUID=d6a81468-b74c-4055-b485-def635ab40f8 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Oct  1 07:17:21 np0005464214 kernel: BIOS-provided physical RAM map:
Oct  1 07:17:21 np0005464214 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Oct  1 07:17:21 np0005464214 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Oct  1 07:17:21 np0005464214 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Oct  1 07:17:21 np0005464214 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bffdafff] usable
Oct  1 07:17:21 np0005464214 kernel: BIOS-e820: [mem 0x00000000bffdb000-0x00000000bfffffff] reserved
Oct  1 07:17:21 np0005464214 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Oct  1 07:17:21 np0005464214 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Oct  1 07:17:21 np0005464214 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000023fffffff] usable
Oct  1 07:17:21 np0005464214 kernel: NX (Execute Disable) protection: active
Oct  1 07:17:21 np0005464214 kernel: APIC: Static calls initialized
Oct  1 07:17:21 np0005464214 kernel: SMBIOS 2.8 present.
Oct  1 07:17:21 np0005464214 kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.15.0-1 04/01/2014
Oct  1 07:17:21 np0005464214 kernel: Hypervisor detected: KVM
Oct  1 07:17:21 np0005464214 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Oct  1 07:17:21 np0005464214 kernel: kvm-clock: using sched offset of 4113748075 cycles
Oct  1 07:17:21 np0005464214 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Oct  1 07:17:21 np0005464214 kernel: tsc: Detected 2800.000 MHz processor
Oct  1 07:17:21 np0005464214 kernel: last_pfn = 0x240000 max_arch_pfn = 0x400000000
Oct  1 07:17:21 np0005464214 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Oct  1 07:17:21 np0005464214 kernel: x86/PAT: Configuration [0-7]: WB  WC  UC- UC  WB  WP  UC- WT  
Oct  1 07:17:21 np0005464214 kernel: last_pfn = 0xbffdb max_arch_pfn = 0x400000000
Oct  1 07:17:21 np0005464214 kernel: found SMP MP-table at [mem 0x000f5ae0-0x000f5aef]
Oct  1 07:17:21 np0005464214 kernel: Using GB pages for direct mapping
Oct  1 07:17:21 np0005464214 kernel: RAMDISK: [mem 0x2d7d0000-0x32bdffff]
Oct  1 07:17:21 np0005464214 kernel: ACPI: Early table checksum verification disabled
Oct  1 07:17:21 np0005464214 kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS )
Oct  1 07:17:21 np0005464214 kernel: ACPI: RSDT 0x00000000BFFE16BD 000030 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Oct  1 07:17:21 np0005464214 kernel: ACPI: FACP 0x00000000BFFE1571 000074 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Oct  1 07:17:21 np0005464214 kernel: ACPI: DSDT 0x00000000BFFDFC80 0018F1 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Oct  1 07:17:21 np0005464214 kernel: ACPI: FACS 0x00000000BFFDFC40 000040
Oct  1 07:17:21 np0005464214 kernel: ACPI: APIC 0x00000000BFFE15E5 0000B0 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Oct  1 07:17:21 np0005464214 kernel: ACPI: WAET 0x00000000BFFE1695 000028 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Oct  1 07:17:21 np0005464214 kernel: ACPI: Reserving FACP table memory at [mem 0xbffe1571-0xbffe15e4]
Oct  1 07:17:21 np0005464214 kernel: ACPI: Reserving DSDT table memory at [mem 0xbffdfc80-0xbffe1570]
Oct  1 07:17:21 np0005464214 kernel: ACPI: Reserving FACS table memory at [mem 0xbffdfc40-0xbffdfc7f]
Oct  1 07:17:21 np0005464214 kernel: ACPI: Reserving APIC table memory at [mem 0xbffe15e5-0xbffe1694]
Oct  1 07:17:21 np0005464214 kernel: ACPI: Reserving WAET table memory at [mem 0xbffe1695-0xbffe16bc]
Oct  1 07:17:21 np0005464214 kernel: No NUMA configuration found
Oct  1 07:17:21 np0005464214 kernel: Faking a node at [mem 0x0000000000000000-0x000000023fffffff]
Oct  1 07:17:21 np0005464214 kernel: NODE_DATA(0) allocated [mem 0x23ffd3000-0x23fffdfff]
Oct  1 07:17:21 np0005464214 kernel: crashkernel reserved: 0x00000000af000000 - 0x00000000bf000000 (256 MB)
Oct  1 07:17:21 np0005464214 kernel: Zone ranges:
Oct  1 07:17:21 np0005464214 kernel:  DMA      [mem 0x0000000000001000-0x0000000000ffffff]
Oct  1 07:17:21 np0005464214 kernel:  DMA32    [mem 0x0000000001000000-0x00000000ffffffff]
Oct  1 07:17:21 np0005464214 kernel:  Normal   [mem 0x0000000100000000-0x000000023fffffff]
Oct  1 07:17:21 np0005464214 kernel:  Device   empty
Oct  1 07:17:21 np0005464214 kernel: Movable zone start for each node
Oct  1 07:17:21 np0005464214 kernel: Early memory node ranges
Oct  1 07:17:21 np0005464214 kernel:  node   0: [mem 0x0000000000001000-0x000000000009efff]
Oct  1 07:17:21 np0005464214 kernel:  node   0: [mem 0x0000000000100000-0x00000000bffdafff]
Oct  1 07:17:21 np0005464214 kernel:  node   0: [mem 0x0000000100000000-0x000000023fffffff]
Oct  1 07:17:21 np0005464214 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000023fffffff]
Oct  1 07:17:21 np0005464214 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Oct  1 07:17:21 np0005464214 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Oct  1 07:17:21 np0005464214 kernel: On node 0, zone Normal: 37 pages in unavailable ranges
Oct  1 07:17:21 np0005464214 kernel: ACPI: PM-Timer IO Port: 0x608
Oct  1 07:17:21 np0005464214 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Oct  1 07:17:21 np0005464214 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Oct  1 07:17:21 np0005464214 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Oct  1 07:17:21 np0005464214 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Oct  1 07:17:21 np0005464214 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Oct  1 07:17:21 np0005464214 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Oct  1 07:17:21 np0005464214 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Oct  1 07:17:21 np0005464214 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Oct  1 07:17:21 np0005464214 kernel: TSC deadline timer available
Oct  1 07:17:21 np0005464214 kernel: CPU topo: Max. logical packages:   8
Oct  1 07:17:21 np0005464214 kernel: CPU topo: Max. logical dies:       8
Oct  1 07:17:21 np0005464214 kernel: CPU topo: Max. dies per package:   1
Oct  1 07:17:21 np0005464214 kernel: CPU topo: Max. threads per core:   1
Oct  1 07:17:21 np0005464214 kernel: CPU topo: Num. cores per package:     1
Oct  1 07:17:21 np0005464214 kernel: CPU topo: Num. threads per package:   1
Oct  1 07:17:21 np0005464214 kernel: CPU topo: Allowing 8 present CPUs plus 0 hotplug CPUs
Oct  1 07:17:21 np0005464214 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Oct  1 07:17:21 np0005464214 kernel: PM: hibernation: Registered nosave memory: [mem 0x00000000-0x00000fff]
Oct  1 07:17:21 np0005464214 kernel: PM: hibernation: Registered nosave memory: [mem 0x0009f000-0x0009ffff]
Oct  1 07:17:21 np0005464214 kernel: PM: hibernation: Registered nosave memory: [mem 0x000a0000-0x000effff]
Oct  1 07:17:21 np0005464214 kernel: PM: hibernation: Registered nosave memory: [mem 0x000f0000-0x000fffff]
Oct  1 07:17:21 np0005464214 kernel: PM: hibernation: Registered nosave memory: [mem 0xbffdb000-0xbfffffff]
Oct  1 07:17:21 np0005464214 kernel: PM: hibernation: Registered nosave memory: [mem 0xc0000000-0xfeffbfff]
Oct  1 07:17:21 np0005464214 kernel: PM: hibernation: Registered nosave memory: [mem 0xfeffc000-0xfeffffff]
Oct  1 07:17:21 np0005464214 kernel: PM: hibernation: Registered nosave memory: [mem 0xff000000-0xfffbffff]
Oct  1 07:17:21 np0005464214 kernel: PM: hibernation: Registered nosave memory: [mem 0xfffc0000-0xffffffff]
Oct  1 07:17:21 np0005464214 kernel: [mem 0xc0000000-0xfeffbfff] available for PCI devices
Oct  1 07:17:21 np0005464214 kernel: Booting paravirtualized kernel on KVM
Oct  1 07:17:21 np0005464214 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Oct  1 07:17:21 np0005464214 kernel: setup_percpu: NR_CPUS:8192 nr_cpumask_bits:8 nr_cpu_ids:8 nr_node_ids:1
Oct  1 07:17:21 np0005464214 kernel: percpu: Embedded 64 pages/cpu s225280 r8192 d28672 u262144
Oct  1 07:17:21 np0005464214 kernel: kvm-guest: PV spinlocks disabled, no host support
Oct  1 07:17:21 np0005464214 kernel: Kernel command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-617.el9.x86_64 root=UUID=d6a81468-b74c-4055-b485-def635ab40f8 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Oct  1 07:17:21 np0005464214 kernel: Unknown kernel command line parameters "BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-617.el9.x86_64", will be passed to user space.
Oct  1 07:17:21 np0005464214 kernel: random: crng init done
Oct  1 07:17:21 np0005464214 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Oct  1 07:17:21 np0005464214 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Oct  1 07:17:21 np0005464214 kernel: Fallback order for Node 0: 0 
Oct  1 07:17:21 np0005464214 kernel: Built 1 zonelists, mobility grouping on.  Total pages: 2064091
Oct  1 07:17:21 np0005464214 kernel: Policy zone: Normal
Oct  1 07:17:21 np0005464214 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Oct  1 07:17:21 np0005464214 kernel: software IO TLB: area num 8.
Oct  1 07:17:21 np0005464214 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=8, Nodes=1
Oct  1 07:17:21 np0005464214 kernel: ftrace: allocating 49329 entries in 193 pages
Oct  1 07:17:21 np0005464214 kernel: ftrace: allocated 193 pages with 3 groups
Oct  1 07:17:21 np0005464214 kernel: Dynamic Preempt: voluntary
Oct  1 07:17:21 np0005464214 kernel: rcu: Preemptible hierarchical RCU implementation.
Oct  1 07:17:21 np0005464214 kernel: rcu: 	RCU event tracing is enabled.
Oct  1 07:17:21 np0005464214 kernel: rcu: 	RCU restricting CPUs from NR_CPUS=8192 to nr_cpu_ids=8.
Oct  1 07:17:21 np0005464214 kernel: 	Trampoline variant of Tasks RCU enabled.
Oct  1 07:17:21 np0005464214 kernel: 	Rude variant of Tasks RCU enabled.
Oct  1 07:17:21 np0005464214 kernel: 	Tracing variant of Tasks RCU enabled.
Oct  1 07:17:21 np0005464214 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Oct  1 07:17:21 np0005464214 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=8
Oct  1 07:17:21 np0005464214 kernel: RCU Tasks: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Oct  1 07:17:21 np0005464214 kernel: RCU Tasks Rude: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Oct  1 07:17:21 np0005464214 kernel: RCU Tasks Trace: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Oct  1 07:17:21 np0005464214 kernel: NR_IRQS: 524544, nr_irqs: 488, preallocated irqs: 16
Oct  1 07:17:21 np0005464214 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Oct  1 07:17:21 np0005464214 kernel: kfence: initialized - using 2097152 bytes for 255 objects at 0x(____ptrval____)-0x(____ptrval____)
Oct  1 07:17:21 np0005464214 kernel: Console: colour VGA+ 80x25
Oct  1 07:17:21 np0005464214 kernel: printk: console [ttyS0] enabled
Oct  1 07:17:21 np0005464214 kernel: ACPI: Core revision 20230331
Oct  1 07:17:21 np0005464214 kernel: APIC: Switch to symmetric I/O mode setup
Oct  1 07:17:21 np0005464214 kernel: x2apic enabled
Oct  1 07:17:21 np0005464214 kernel: APIC: Switched APIC routing to: physical x2apic
Oct  1 07:17:21 np0005464214 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Oct  1 07:17:21 np0005464214 kernel: Calibrating delay loop (skipped) preset value.. 5600.00 BogoMIPS (lpj=2800000)
Oct  1 07:17:21 np0005464214 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Oct  1 07:17:21 np0005464214 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Oct  1 07:17:21 np0005464214 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Oct  1 07:17:21 np0005464214 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Oct  1 07:17:21 np0005464214 kernel: Spectre V2 : Mitigation: Retpolines
Oct  1 07:17:21 np0005464214 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Oct  1 07:17:21 np0005464214 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Oct  1 07:17:21 np0005464214 kernel: RETBleed: Mitigation: untrained return thunk
Oct  1 07:17:21 np0005464214 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Oct  1 07:17:21 np0005464214 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Oct  1 07:17:21 np0005464214 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Oct  1 07:17:21 np0005464214 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Oct  1 07:17:21 np0005464214 kernel: x86/bugs: return thunk changed
Oct  1 07:17:21 np0005464214 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Oct  1 07:17:21 np0005464214 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Oct  1 07:17:21 np0005464214 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Oct  1 07:17:21 np0005464214 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Oct  1 07:17:21 np0005464214 kernel: x86/fpu: xstate_offset[2]:  576, xstate_sizes[2]:  256
Oct  1 07:17:21 np0005464214 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Oct  1 07:17:21 np0005464214 kernel: Freeing SMP alternatives memory: 40K
Oct  1 07:17:21 np0005464214 kernel: pid_max: default: 32768 minimum: 301
Oct  1 07:17:21 np0005464214 kernel: LSM: initializing lsm=lockdown,capability,landlock,yama,integrity,selinux,bpf
Oct  1 07:17:21 np0005464214 kernel: landlock: Up and running.
Oct  1 07:17:21 np0005464214 kernel: Yama: becoming mindful.
Oct  1 07:17:21 np0005464214 kernel: SELinux:  Initializing.
Oct  1 07:17:21 np0005464214 kernel: LSM support for eBPF active
Oct  1 07:17:21 np0005464214 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Oct  1 07:17:21 np0005464214 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Oct  1 07:17:21 np0005464214 kernel: smpboot: CPU0: AMD EPYC-Rome Processor (family: 0x17, model: 0x31, stepping: 0x0)
Oct  1 07:17:21 np0005464214 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Oct  1 07:17:21 np0005464214 kernel: ... version:                0
Oct  1 07:17:21 np0005464214 kernel: ... bit width:              48
Oct  1 07:17:21 np0005464214 kernel: ... generic registers:      6
Oct  1 07:17:21 np0005464214 kernel: ... value mask:             0000ffffffffffff
Oct  1 07:17:21 np0005464214 kernel: ... max period:             00007fffffffffff
Oct  1 07:17:21 np0005464214 kernel: ... fixed-purpose events:   0
Oct  1 07:17:21 np0005464214 kernel: ... event mask:             000000000000003f
Oct  1 07:17:21 np0005464214 kernel: signal: max sigframe size: 1776
Oct  1 07:17:21 np0005464214 kernel: rcu: Hierarchical SRCU implementation.
Oct  1 07:17:21 np0005464214 kernel: rcu: 	Max phase no-delay instances is 400.
Oct  1 07:17:21 np0005464214 kernel: smp: Bringing up secondary CPUs ...
Oct  1 07:17:21 np0005464214 kernel: smpboot: x86: Booting SMP configuration:
Oct  1 07:17:21 np0005464214 kernel: .... node  #0, CPUs:      #1 #2 #3 #4 #5 #6 #7
Oct  1 07:17:21 np0005464214 kernel: smp: Brought up 1 node, 8 CPUs
Oct  1 07:17:21 np0005464214 kernel: smpboot: Total of 8 processors activated (44800.00 BogoMIPS)
Oct  1 07:17:21 np0005464214 kernel: node 0 deferred pages initialised in 23ms
Oct  1 07:17:21 np0005464214 kernel: Memory: 7765416K/8388068K available (16384K kernel code, 5784K rwdata, 13988K rodata, 4072K init, 7304K bss, 616492K reserved, 0K cma-reserved)
Oct  1 07:17:21 np0005464214 kernel: devtmpfs: initialized
Oct  1 07:17:21 np0005464214 kernel: x86/mm: Memory block size: 128MB
Oct  1 07:17:21 np0005464214 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Oct  1 07:17:21 np0005464214 kernel: futex hash table entries: 2048 (order: 5, 131072 bytes, linear)
Oct  1 07:17:21 np0005464214 kernel: pinctrl core: initialized pinctrl subsystem
Oct  1 07:17:21 np0005464214 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Oct  1 07:17:21 np0005464214 kernel: DMA: preallocated 1024 KiB GFP_KERNEL pool for atomic allocations
Oct  1 07:17:21 np0005464214 kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Oct  1 07:17:21 np0005464214 kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Oct  1 07:17:21 np0005464214 kernel: audit: initializing netlink subsys (disabled)
Oct  1 07:17:21 np0005464214 kernel: audit: type=2000 audit(1759317439.555:1): state=initialized audit_enabled=0 res=1
Oct  1 07:17:21 np0005464214 kernel: thermal_sys: Registered thermal governor 'fair_share'
Oct  1 07:17:21 np0005464214 kernel: thermal_sys: Registered thermal governor 'step_wise'
Oct  1 07:17:21 np0005464214 kernel: thermal_sys: Registered thermal governor 'user_space'
Oct  1 07:17:21 np0005464214 kernel: cpuidle: using governor menu
Oct  1 07:17:21 np0005464214 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Oct  1 07:17:21 np0005464214 kernel: PCI: Using configuration type 1 for base access
Oct  1 07:17:21 np0005464214 kernel: PCI: Using configuration type 1 for extended access
Oct  1 07:17:21 np0005464214 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Oct  1 07:17:21 np0005464214 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Oct  1 07:17:21 np0005464214 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Oct  1 07:17:21 np0005464214 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Oct  1 07:17:21 np0005464214 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Oct  1 07:17:21 np0005464214 kernel: Demotion targets for Node 0: null
Oct  1 07:17:21 np0005464214 kernel: cryptd: max_cpu_qlen set to 1000
Oct  1 07:17:21 np0005464214 kernel: ACPI: Added _OSI(Module Device)
Oct  1 07:17:21 np0005464214 kernel: ACPI: Added _OSI(Processor Device)
Oct  1 07:17:21 np0005464214 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Oct  1 07:17:21 np0005464214 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Oct  1 07:17:21 np0005464214 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Oct  1 07:17:21 np0005464214 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Oct  1 07:17:21 np0005464214 kernel: ACPI: Interpreter enabled
Oct  1 07:17:21 np0005464214 kernel: ACPI: PM: (supports S0 S3 S4 S5)
Oct  1 07:17:21 np0005464214 kernel: ACPI: Using IOAPIC for interrupt routing
Oct  1 07:17:21 np0005464214 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Oct  1 07:17:21 np0005464214 kernel: PCI: Using E820 reservations for host bridge windows
Oct  1 07:17:21 np0005464214 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Oct  1 07:17:21 np0005464214 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Oct  1 07:17:21 np0005464214 kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI EDR HPX-Type3]
Oct  1 07:17:21 np0005464214 kernel: acpiphp: Slot [3] registered
Oct  1 07:17:21 np0005464214 kernel: acpiphp: Slot [4] registered
Oct  1 07:17:21 np0005464214 kernel: acpiphp: Slot [5] registered
Oct  1 07:17:21 np0005464214 kernel: acpiphp: Slot [6] registered
Oct  1 07:17:21 np0005464214 kernel: acpiphp: Slot [7] registered
Oct  1 07:17:21 np0005464214 kernel: acpiphp: Slot [8] registered
Oct  1 07:17:21 np0005464214 kernel: acpiphp: Slot [9] registered
Oct  1 07:17:21 np0005464214 kernel: acpiphp: Slot [10] registered
Oct  1 07:17:21 np0005464214 kernel: acpiphp: Slot [11] registered
Oct  1 07:17:21 np0005464214 kernel: acpiphp: Slot [12] registered
Oct  1 07:17:21 np0005464214 kernel: acpiphp: Slot [13] registered
Oct  1 07:17:21 np0005464214 kernel: acpiphp: Slot [14] registered
Oct  1 07:17:21 np0005464214 kernel: acpiphp: Slot [15] registered
Oct  1 07:17:21 np0005464214 kernel: acpiphp: Slot [16] registered
Oct  1 07:17:21 np0005464214 kernel: acpiphp: Slot [17] registered
Oct  1 07:17:21 np0005464214 kernel: acpiphp: Slot [18] registered
Oct  1 07:17:21 np0005464214 kernel: acpiphp: Slot [19] registered
Oct  1 07:17:21 np0005464214 kernel: acpiphp: Slot [20] registered
Oct  1 07:17:21 np0005464214 kernel: acpiphp: Slot [21] registered
Oct  1 07:17:21 np0005464214 kernel: acpiphp: Slot [22] registered
Oct  1 07:17:21 np0005464214 kernel: acpiphp: Slot [23] registered
Oct  1 07:17:21 np0005464214 kernel: acpiphp: Slot [24] registered
Oct  1 07:17:21 np0005464214 kernel: acpiphp: Slot [25] registered
Oct  1 07:17:21 np0005464214 kernel: acpiphp: Slot [26] registered
Oct  1 07:17:21 np0005464214 kernel: acpiphp: Slot [27] registered
Oct  1 07:17:21 np0005464214 kernel: acpiphp: Slot [28] registered
Oct  1 07:17:21 np0005464214 kernel: acpiphp: Slot [29] registered
Oct  1 07:17:21 np0005464214 kernel: acpiphp: Slot [30] registered
Oct  1 07:17:21 np0005464214 kernel: acpiphp: Slot [31] registered
Oct  1 07:17:21 np0005464214 kernel: PCI host bridge to bus 0000:00
Oct  1 07:17:21 np0005464214 kernel: pci_bus 0000:00: root bus resource [io  0x0000-0x0cf7 window]
Oct  1 07:17:21 np0005464214 kernel: pci_bus 0000:00: root bus resource [io  0x0d00-0xffff window]
Oct  1 07:17:21 np0005464214 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Oct  1 07:17:21 np0005464214 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Oct  1 07:17:21 np0005464214 kernel: pci_bus 0000:00: root bus resource [mem 0x240000000-0x2bfffffff window]
Oct  1 07:17:21 np0005464214 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Oct  1 07:17:21 np0005464214 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint
Oct  1 07:17:21 np0005464214 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 conventional PCI endpoint
Oct  1 07:17:21 np0005464214 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 conventional PCI endpoint
Oct  1 07:17:21 np0005464214 kernel: pci 0000:00:01.1: BAR 4 [io  0xc140-0xc14f]
Oct  1 07:17:21 np0005464214 kernel: pci 0000:00:01.1: BAR 0 [io  0x01f0-0x01f7]: legacy IDE quirk
Oct  1 07:17:21 np0005464214 kernel: pci 0000:00:01.1: BAR 1 [io  0x03f6]: legacy IDE quirk
Oct  1 07:17:21 np0005464214 kernel: pci 0000:00:01.1: BAR 2 [io  0x0170-0x0177]: legacy IDE quirk
Oct  1 07:17:21 np0005464214 kernel: pci 0000:00:01.1: BAR 3 [io  0x0376]: legacy IDE quirk
Oct  1 07:17:21 np0005464214 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300 conventional PCI endpoint
Oct  1 07:17:21 np0005464214 kernel: pci 0000:00:01.2: BAR 4 [io  0xc100-0xc11f]
Oct  1 07:17:21 np0005464214 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint
Oct  1 07:17:21 np0005464214 kernel: pci 0000:00:01.3: quirk: [io  0x0600-0x063f] claimed by PIIX4 ACPI
Oct  1 07:17:21 np0005464214 kernel: pci 0000:00:01.3: quirk: [io  0x0700-0x070f] claimed by PIIX4 SMB
Oct  1 07:17:21 np0005464214 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 conventional PCI endpoint
Oct  1 07:17:21 np0005464214 kernel: pci 0000:00:02.0: BAR 0 [mem 0xfe000000-0xfe7fffff pref]
Oct  1 07:17:21 np0005464214 kernel: pci 0000:00:02.0: BAR 2 [mem 0xfe800000-0xfe803fff 64bit pref]
Oct  1 07:17:21 np0005464214 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfeb90000-0xfeb90fff]
Oct  1 07:17:21 np0005464214 kernel: pci 0000:00:02.0: ROM [mem 0xfeb80000-0xfeb8ffff pref]
Oct  1 07:17:21 np0005464214 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Oct  1 07:17:21 np0005464214 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Oct  1 07:17:21 np0005464214 kernel: pci 0000:00:03.0: BAR 0 [io  0xc080-0xc0bf]
Oct  1 07:17:21 np0005464214 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfeb91000-0xfeb91fff]
Oct  1 07:17:21 np0005464214 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe804000-0xfe807fff 64bit pref]
Oct  1 07:17:21 np0005464214 kernel: pci 0000:00:03.0: ROM [mem 0xfeb00000-0xfeb7ffff pref]
Oct  1 07:17:21 np0005464214 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Oct  1 07:17:21 np0005464214 kernel: pci 0000:00:04.0: BAR 0 [io  0xc000-0xc07f]
Oct  1 07:17:21 np0005464214 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfeb92000-0xfeb92fff]
Oct  1 07:17:21 np0005464214 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe808000-0xfe80bfff 64bit pref]
Oct  1 07:17:21 np0005464214 kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00 conventional PCI endpoint
Oct  1 07:17:21 np0005464214 kernel: pci 0000:00:05.0: BAR 0 [io  0xc0c0-0xc0ff]
Oct  1 07:17:21 np0005464214 kernel: pci 0000:00:05.0: BAR 4 [mem 0xfe80c000-0xfe80ffff 64bit pref]
Oct  1 07:17:21 np0005464214 kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Oct  1 07:17:21 np0005464214 kernel: pci 0000:00:06.0: BAR 0 [io  0xc120-0xc13f]
Oct  1 07:17:21 np0005464214 kernel: pci 0000:00:06.0: BAR 4 [mem 0xfe810000-0xfe813fff 64bit pref]
Oct  1 07:17:21 np0005464214 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Oct  1 07:17:21 np0005464214 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Oct  1 07:17:21 np0005464214 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Oct  1 07:17:21 np0005464214 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Oct  1 07:17:21 np0005464214 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Oct  1 07:17:21 np0005464214 kernel: iommu: Default domain type: Translated
Oct  1 07:17:21 np0005464214 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Oct  1 07:17:21 np0005464214 kernel: SCSI subsystem initialized
Oct  1 07:17:21 np0005464214 kernel: ACPI: bus type USB registered
Oct  1 07:17:21 np0005464214 kernel: usbcore: registered new interface driver usbfs
Oct  1 07:17:21 np0005464214 kernel: usbcore: registered new interface driver hub
Oct  1 07:17:21 np0005464214 kernel: usbcore: registered new device driver usb
Oct  1 07:17:21 np0005464214 kernel: pps_core: LinuxPPS API ver. 1 registered
Oct  1 07:17:21 np0005464214 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
Oct  1 07:17:21 np0005464214 kernel: PTP clock support registered
Oct  1 07:17:21 np0005464214 kernel: EDAC MC: Ver: 3.0.0
Oct  1 07:17:21 np0005464214 kernel: NetLabel: Initializing
Oct  1 07:17:21 np0005464214 kernel: NetLabel:  domain hash size = 128
Oct  1 07:17:21 np0005464214 kernel: NetLabel:  protocols = UNLABELED CIPSOv4 CALIPSO
Oct  1 07:17:21 np0005464214 kernel: NetLabel:  unlabeled traffic allowed by default
Oct  1 07:17:21 np0005464214 kernel: PCI: Using ACPI for IRQ routing
Oct  1 07:17:21 np0005464214 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Oct  1 07:17:21 np0005464214 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Oct  1 07:17:21 np0005464214 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Oct  1 07:17:21 np0005464214 kernel: vgaarb: loaded
Oct  1 07:17:21 np0005464214 kernel: clocksource: Switched to clocksource kvm-clock
Oct  1 07:17:21 np0005464214 kernel: VFS: Disk quotas dquot_6.6.0
Oct  1 07:17:21 np0005464214 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Oct  1 07:17:21 np0005464214 kernel: pnp: PnP ACPI init
Oct  1 07:17:21 np0005464214 kernel: pnp: PnP ACPI: found 5 devices
Oct  1 07:17:21 np0005464214 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Oct  1 07:17:21 np0005464214 kernel: NET: Registered PF_INET protocol family
Oct  1 07:17:21 np0005464214 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Oct  1 07:17:21 np0005464214 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Oct  1 07:17:21 np0005464214 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Oct  1 07:17:21 np0005464214 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Oct  1 07:17:21 np0005464214 kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear)
Oct  1 07:17:21 np0005464214 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Oct  1 07:17:21 np0005464214 kernel: MPTCP token hash table entries: 8192 (order: 5, 196608 bytes, linear)
Oct  1 07:17:21 np0005464214 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Oct  1 07:17:21 np0005464214 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Oct  1 07:17:21 np0005464214 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Oct  1 07:17:21 np0005464214 kernel: NET: Registered PF_XDP protocol family
Oct  1 07:17:21 np0005464214 kernel: pci_bus 0000:00: resource 4 [io  0x0000-0x0cf7 window]
Oct  1 07:17:21 np0005464214 kernel: pci_bus 0000:00: resource 5 [io  0x0d00-0xffff window]
Oct  1 07:17:21 np0005464214 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Oct  1 07:17:21 np0005464214 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfffff window]
Oct  1 07:17:21 np0005464214 kernel: pci_bus 0000:00: resource 8 [mem 0x240000000-0x2bfffffff window]
Oct  1 07:17:21 np0005464214 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Oct  1 07:17:21 np0005464214 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Oct  1 07:17:21 np0005464214 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Oct  1 07:17:21 np0005464214 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x140 took 99463 usecs
Oct  1 07:17:21 np0005464214 kernel: PCI: CLS 0 bytes, default 64
Oct  1 07:17:21 np0005464214 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Oct  1 07:17:21 np0005464214 kernel: software IO TLB: mapped [mem 0x00000000ab000000-0x00000000af000000] (64MB)
Oct  1 07:17:21 np0005464214 kernel: ACPI: bus type thunderbolt registered
Oct  1 07:17:21 np0005464214 kernel: Trying to unpack rootfs image as initramfs...
Oct  1 07:17:21 np0005464214 kernel: Initialise system trusted keyrings
Oct  1 07:17:21 np0005464214 kernel: Key type blacklist registered
Oct  1 07:17:21 np0005464214 kernel: workingset: timestamp_bits=36 max_order=21 bucket_order=0
Oct  1 07:17:21 np0005464214 kernel: zbud: loaded
Oct  1 07:17:21 np0005464214 kernel: integrity: Platform Keyring initialized
Oct  1 07:17:21 np0005464214 kernel: integrity: Machine keyring initialized
Oct  1 07:17:21 np0005464214 kernel: Freeing initrd memory: 86080K
Oct  1 07:17:21 np0005464214 kernel: NET: Registered PF_ALG protocol family
Oct  1 07:17:21 np0005464214 kernel: xor: automatically using best checksumming function   avx
Oct  1 07:17:21 np0005464214 kernel: Key type asymmetric registered
Oct  1 07:17:21 np0005464214 kernel: Asymmetric key parser 'x509' registered
Oct  1 07:17:21 np0005464214 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 246)
Oct  1 07:17:21 np0005464214 kernel: io scheduler mq-deadline registered
Oct  1 07:17:21 np0005464214 kernel: io scheduler kyber registered
Oct  1 07:17:21 np0005464214 kernel: io scheduler bfq registered
Oct  1 07:17:21 np0005464214 kernel: atomic64_test: passed for x86-64 platform with CX8 and with SSE
Oct  1 07:17:21 np0005464214 kernel: shpchp: Standard Hot Plug PCI Controller Driver version: 0.4
Oct  1 07:17:21 np0005464214 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input0
Oct  1 07:17:21 np0005464214 kernel: ACPI: button: Power Button [PWRF]
Oct  1 07:17:21 np0005464214 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Oct  1 07:17:21 np0005464214 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Oct  1 07:17:21 np0005464214 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Oct  1 07:17:21 np0005464214 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Oct  1 07:17:21 np0005464214 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Oct  1 07:17:21 np0005464214 kernel: Non-volatile memory driver v1.3
Oct  1 07:17:21 np0005464214 kernel: rdac: device handler registered
Oct  1 07:17:21 np0005464214 kernel: hp_sw: device handler registered
Oct  1 07:17:21 np0005464214 kernel: emc: device handler registered
Oct  1 07:17:21 np0005464214 kernel: alua: device handler registered
Oct  1 07:17:21 np0005464214 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
Oct  1 07:17:21 np0005464214 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
Oct  1 07:17:21 np0005464214 kernel: uhci_hcd 0000:00:01.2: detected 2 ports
Oct  1 07:17:21 np0005464214 kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c100
Oct  1 07:17:21 np0005464214 kernel: usb usb1: New USB device found, idVendor=1d6b, idProduct=0001, bcdDevice= 5.14
Oct  1 07:17:21 np0005464214 kernel: usb usb1: New USB device strings: Mfr=3, Product=2, SerialNumber=1
Oct  1 07:17:21 np0005464214 kernel: usb usb1: Product: UHCI Host Controller
Oct  1 07:17:21 np0005464214 kernel: usb usb1: Manufacturer: Linux 5.14.0-617.el9.x86_64 uhci_hcd
Oct  1 07:17:21 np0005464214 kernel: usb usb1: SerialNumber: 0000:00:01.2
Oct  1 07:17:21 np0005464214 kernel: hub 1-0:1.0: USB hub found
Oct  1 07:17:21 np0005464214 kernel: hub 1-0:1.0: 2 ports detected
Oct  1 07:17:21 np0005464214 kernel: usbcore: registered new interface driver usbserial_generic
Oct  1 07:17:21 np0005464214 kernel: usbserial: USB Serial support registered for generic
Oct  1 07:17:21 np0005464214 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Oct  1 07:17:21 np0005464214 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Oct  1 07:17:21 np0005464214 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Oct  1 07:17:21 np0005464214 kernel: mousedev: PS/2 mouse device common for all mice
Oct  1 07:17:21 np0005464214 kernel: rtc_cmos 00:04: RTC can wake from S4
Oct  1 07:17:21 np0005464214 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
Oct  1 07:17:21 np0005464214 kernel: rtc_cmos 00:04: registered as rtc0
Oct  1 07:17:21 np0005464214 kernel: rtc_cmos 00:04: setting system clock to 2025-10-01T11:17:20 UTC (1759317440)
Oct  1 07:17:21 np0005464214 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Oct  1 07:17:21 np0005464214 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Oct  1 07:17:21 np0005464214 kernel: hid: raw HID events driver (C) Jiri Kosina
Oct  1 07:17:21 np0005464214 kernel: usbcore: registered new interface driver usbhid
Oct  1 07:17:21 np0005464214 kernel: usbhid: USB HID core driver
Oct  1 07:17:21 np0005464214 kernel: drop_monitor: Initializing network drop monitor service
Oct  1 07:17:21 np0005464214 kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input4
Oct  1 07:17:21 np0005464214 kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input3
Oct  1 07:17:21 np0005464214 kernel: Initializing XFRM netlink socket
Oct  1 07:17:21 np0005464214 kernel: NET: Registered PF_INET6 protocol family
Oct  1 07:17:21 np0005464214 kernel: Segment Routing with IPv6
Oct  1 07:17:21 np0005464214 kernel: NET: Registered PF_PACKET protocol family
Oct  1 07:17:21 np0005464214 kernel: mpls_gso: MPLS GSO support
Oct  1 07:17:21 np0005464214 kernel: IPI shorthand broadcast: enabled
Oct  1 07:17:21 np0005464214 kernel: AVX2 version of gcm_enc/dec engaged.
Oct  1 07:17:21 np0005464214 kernel: AES CTR mode by8 optimization enabled
Oct  1 07:17:21 np0005464214 kernel: sched_clock: Marking stable (1269003682, 138896220)->(1526996574, -119096672)
Oct  1 07:17:21 np0005464214 kernel: registered taskstats version 1
Oct  1 07:17:21 np0005464214 kernel: Loading compiled-in X.509 certificates
Oct  1 07:17:21 np0005464214 kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: bb2966091bafcba340f8183756023c985dcc8fe9'
Oct  1 07:17:21 np0005464214 kernel: Loaded X.509 cert 'Red Hat Enterprise Linux Driver Update Program (key 3): bf57f3e87362bc7229d9f465321773dfd1f77a80'
Oct  1 07:17:21 np0005464214 kernel: Loaded X.509 cert 'Red Hat Enterprise Linux kpatch signing key: 4d38fd864ebe18c5f0b72e3852e2014c3a676fc8'
Oct  1 07:17:21 np0005464214 kernel: Loaded X.509 cert 'RH-IMA-CA: Red Hat IMA CA: fb31825dd0e073685b264e3038963673f753959a'
Oct  1 07:17:21 np0005464214 kernel: Loaded X.509 cert 'Nvidia GPU OOT signing 001: 55e1cef88193e60419f0b0ec379c49f77545acf0'
Oct  1 07:17:21 np0005464214 kernel: Demotion targets for Node 0: null
Oct  1 07:17:21 np0005464214 kernel: page_owner is disabled
Oct  1 07:17:21 np0005464214 kernel: Key type .fscrypt registered
Oct  1 07:17:21 np0005464214 kernel: Key type fscrypt-provisioning registered
Oct  1 07:17:21 np0005464214 kernel: Key type big_key registered
Oct  1 07:17:21 np0005464214 kernel: Key type encrypted registered
Oct  1 07:17:21 np0005464214 kernel: ima: No TPM chip found, activating TPM-bypass!
Oct  1 07:17:21 np0005464214 kernel: Loading compiled-in module X.509 certificates
Oct  1 07:17:21 np0005464214 kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: bb2966091bafcba340f8183756023c985dcc8fe9'
Oct  1 07:17:21 np0005464214 kernel: ima: Allocated hash algorithm: sha256
Oct  1 07:17:21 np0005464214 kernel: ima: No architecture policies found
Oct  1 07:17:21 np0005464214 kernel: evm: Initialising EVM extended attributes:
Oct  1 07:17:21 np0005464214 kernel: evm: security.selinux
Oct  1 07:17:21 np0005464214 kernel: evm: security.SMACK64 (disabled)
Oct  1 07:17:21 np0005464214 kernel: evm: security.SMACK64EXEC (disabled)
Oct  1 07:17:21 np0005464214 kernel: evm: security.SMACK64TRANSMUTE (disabled)
Oct  1 07:17:21 np0005464214 kernel: evm: security.SMACK64MMAP (disabled)
Oct  1 07:17:21 np0005464214 kernel: evm: security.apparmor (disabled)
Oct  1 07:17:21 np0005464214 kernel: evm: security.ima
Oct  1 07:17:21 np0005464214 kernel: evm: security.capability
Oct  1 07:17:21 np0005464214 kernel: evm: HMAC attrs: 0x1
Oct  1 07:17:21 np0005464214 kernel: usb 1-1: new full-speed USB device number 2 using uhci_hcd
Oct  1 07:17:21 np0005464214 kernel: Running certificate verification RSA selftest
Oct  1 07:17:21 np0005464214 kernel: Loaded X.509 cert 'Certificate verification self-testing key: f58703bb33ce1b73ee02eccdee5b8817518fe3db'
Oct  1 07:17:21 np0005464214 kernel: Running certificate verification ECDSA selftest
Oct  1 07:17:21 np0005464214 kernel: Loaded X.509 cert 'Certificate verification ECDSA self-testing key: 2900bcea1deb7bc8479a84a23d758efdfdd2b2d3'
Oct  1 07:17:21 np0005464214 kernel: clk: Disabling unused clocks
Oct  1 07:17:21 np0005464214 kernel: Freeing unused decrypted memory: 2028K
Oct  1 07:17:21 np0005464214 kernel: Freeing unused kernel image (initmem) memory: 4072K
Oct  1 07:17:21 np0005464214 kernel: Write protecting the kernel read-only data: 30720k
Oct  1 07:17:21 np0005464214 kernel: Freeing unused kernel image (rodata/data gap) memory: 348K
Oct  1 07:17:21 np0005464214 kernel: usb 1-1: New USB device found, idVendor=0627, idProduct=0001, bcdDevice= 0.00
Oct  1 07:17:21 np0005464214 kernel: usb 1-1: New USB device strings: Mfr=1, Product=3, SerialNumber=10
Oct  1 07:17:21 np0005464214 kernel: usb 1-1: Product: QEMU USB Tablet
Oct  1 07:17:21 np0005464214 kernel: usb 1-1: Manufacturer: QEMU
Oct  1 07:17:21 np0005464214 kernel: usb 1-1: SerialNumber: 28754-0000:00:01.2-1
Oct  1 07:17:21 np0005464214 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:01.2/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input5
Oct  1 07:17:21 np0005464214 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:00:01.2-1/input0
Oct  1 07:17:21 np0005464214 kernel: x86/mm: Checked W+X mappings: passed, no W+X pages found.
Oct  1 07:17:21 np0005464214 kernel: Run /init as init process
Oct  1 07:17:21 np0005464214 systemd: systemd 252-55.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Oct  1 07:17:21 np0005464214 systemd: Detected virtualization kvm.
Oct  1 07:17:21 np0005464214 systemd: Detected architecture x86-64.
Oct  1 07:17:21 np0005464214 systemd: Running in initrd.
Oct  1 07:17:21 np0005464214 systemd: No hostname configured, using default hostname.
Oct  1 07:17:21 np0005464214 systemd: Hostname set to <localhost>.
Oct  1 07:17:21 np0005464214 systemd: Initializing machine ID from VM UUID.
Oct  1 07:17:21 np0005464214 systemd: Queued start job for default target Initrd Default Target.
Oct  1 07:17:21 np0005464214 systemd: Started Dispatch Password Requests to Console Directory Watch.
Oct  1 07:17:21 np0005464214 systemd: Reached target Local Encrypted Volumes.
Oct  1 07:17:21 np0005464214 systemd: Reached target Initrd /usr File System.
Oct  1 07:17:21 np0005464214 systemd: Reached target Local File Systems.
Oct  1 07:17:21 np0005464214 systemd: Reached target Path Units.
Oct  1 07:17:21 np0005464214 systemd: Reached target Slice Units.
Oct  1 07:17:21 np0005464214 systemd: Reached target Swaps.
Oct  1 07:17:21 np0005464214 systemd: Reached target Timer Units.
Oct  1 07:17:21 np0005464214 systemd: Listening on D-Bus System Message Bus Socket.
Oct  1 07:17:21 np0005464214 systemd: Listening on Journal Socket (/dev/log).
Oct  1 07:17:21 np0005464214 systemd: Listening on Journal Socket.
Oct  1 07:17:21 np0005464214 systemd: Listening on udev Control Socket.
Oct  1 07:17:21 np0005464214 systemd: Listening on udev Kernel Socket.
Oct  1 07:17:21 np0005464214 systemd: Reached target Socket Units.
Oct  1 07:17:21 np0005464214 systemd: Starting Create List of Static Device Nodes...
Oct  1 07:17:21 np0005464214 systemd: Starting Journal Service...
Oct  1 07:17:21 np0005464214 systemd: Load Kernel Modules was skipped because no trigger condition checks were met.
Oct  1 07:17:21 np0005464214 systemd: Starting Apply Kernel Variables...
Oct  1 07:17:21 np0005464214 systemd: Starting Create System Users...
Oct  1 07:17:21 np0005464214 systemd: Starting Setup Virtual Console...
Oct  1 07:17:21 np0005464214 systemd: Finished Create List of Static Device Nodes.
Oct  1 07:17:21 np0005464214 systemd: Finished Apply Kernel Variables.
Oct  1 07:17:21 np0005464214 systemd-journald[307]: Journal started
Oct  1 07:17:21 np0005464214 systemd-journald[307]: Runtime Journal (/run/log/journal/adf090e1fe934ff6a8f54224f2f21059) is 8.0M, max 153.5M, 145.5M free.
Oct  1 07:17:21 np0005464214 systemd-sysusers[311]: Creating group 'users' with GID 100.
Oct  1 07:17:21 np0005464214 systemd-sysusers[311]: Creating group 'dbus' with GID 81.
Oct  1 07:17:21 np0005464214 systemd-sysusers[311]: Creating user 'dbus' (System Message Bus) with UID 81 and GID 81.
Oct  1 07:17:21 np0005464214 systemd: Started Journal Service.
Oct  1 07:17:21 np0005464214 systemd[1]: Finished Create System Users.
Oct  1 07:17:21 np0005464214 systemd[1]: Starting Create Static Device Nodes in /dev...
Oct  1 07:17:21 np0005464214 systemd[1]: Starting Create Volatile Files and Directories...
Oct  1 07:17:21 np0005464214 systemd[1]: Finished Create Static Device Nodes in /dev.
Oct  1 07:17:21 np0005464214 systemd[1]: Finished Create Volatile Files and Directories.
Oct  1 07:17:21 np0005464214 systemd[1]: Finished Setup Virtual Console.
Oct  1 07:17:21 np0005464214 systemd[1]: dracut ask for additional cmdline parameters was skipped because no trigger condition checks were met.
Oct  1 07:17:21 np0005464214 systemd[1]: Starting dracut cmdline hook...
Oct  1 07:17:21 np0005464214 dracut-cmdline[328]: dracut-9 dracut-057-102.git20250818.el9
Oct  1 07:17:21 np0005464214 dracut-cmdline[328]: Using kernel command line parameters:    BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-617.el9.x86_64 root=UUID=d6a81468-b74c-4055-b485-def635ab40f8 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Oct  1 07:17:21 np0005464214 systemd[1]: Finished dracut cmdline hook.
Oct  1 07:17:21 np0005464214 systemd[1]: Starting dracut pre-udev hook...
Oct  1 07:17:21 np0005464214 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Oct  1 07:17:21 np0005464214 kernel: device-mapper: uevent: version 1.0.3
Oct  1 07:17:21 np0005464214 kernel: device-mapper: ioctl: 4.50.0-ioctl (2025-04-28) initialised: dm-devel@lists.linux.dev
Oct  1 07:17:21 np0005464214 kernel: RPC: Registered named UNIX socket transport module.
Oct  1 07:17:21 np0005464214 kernel: RPC: Registered udp transport module.
Oct  1 07:17:21 np0005464214 kernel: RPC: Registered tcp transport module.
Oct  1 07:17:21 np0005464214 kernel: RPC: Registered tcp-with-tls transport module.
Oct  1 07:17:21 np0005464214 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Oct  1 07:17:21 np0005464214 rpc.statd[443]: Version 2.5.4 starting
Oct  1 07:17:21 np0005464214 rpc.statd[443]: Initializing NSM state
Oct  1 07:17:21 np0005464214 rpc.idmapd[448]: Setting log level to 0
Oct  1 07:17:21 np0005464214 systemd[1]: Finished dracut pre-udev hook.
Oct  1 07:17:21 np0005464214 systemd[1]: Starting Rule-based Manager for Device Events and Files...
Oct  1 07:17:22 np0005464214 systemd-udevd[461]: Using default interface naming scheme 'rhel-9.0'.
Oct  1 07:17:22 np0005464214 systemd[1]: Started Rule-based Manager for Device Events and Files.
Oct  1 07:17:22 np0005464214 systemd[1]: Starting dracut pre-trigger hook...
Oct  1 07:17:22 np0005464214 systemd[1]: Finished dracut pre-trigger hook.
Oct  1 07:17:22 np0005464214 systemd[1]: Starting Coldplug All udev Devices...
Oct  1 07:17:22 np0005464214 systemd[1]: Created slice Slice /system/modprobe.
Oct  1 07:17:22 np0005464214 systemd[1]: Starting Load Kernel Module configfs...
Oct  1 07:17:22 np0005464214 systemd[1]: Finished Coldplug All udev Devices.
Oct  1 07:17:22 np0005464214 systemd[1]: nm-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Oct  1 07:17:22 np0005464214 systemd[1]: Reached target Network.
Oct  1 07:17:22 np0005464214 systemd[1]: nm-wait-online-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Oct  1 07:17:22 np0005464214 systemd[1]: Starting dracut initqueue hook...
Oct  1 07:17:22 np0005464214 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Oct  1 07:17:22 np0005464214 systemd[1]: Finished Load Kernel Module configfs.
Oct  1 07:17:22 np0005464214 systemd[1]: Mounting Kernel Configuration File System...
Oct  1 07:17:22 np0005464214 systemd[1]: Mounted Kernel Configuration File System.
Oct  1 07:17:22 np0005464214 systemd[1]: Reached target System Initialization.
Oct  1 07:17:22 np0005464214 systemd[1]: Reached target Basic System.
Oct  1 07:17:22 np0005464214 kernel: virtio_blk virtio2: 8/0/0 default/read/poll queues
Oct  1 07:17:22 np0005464214 kernel: virtio_blk virtio2: [vda] 167772160 512-byte logical blocks (85.9 GB/80.0 GiB)
Oct  1 07:17:22 np0005464214 kernel: vda: vda1
Oct  1 07:17:22 np0005464214 kernel: scsi host0: ata_piix
Oct  1 07:17:22 np0005464214 kernel: scsi host1: ata_piix
Oct  1 07:17:22 np0005464214 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc140 irq 14 lpm-pol 0
Oct  1 07:17:22 np0005464214 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc148 irq 15 lpm-pol 0
Oct  1 07:17:22 np0005464214 systemd-udevd[463]: Network interface NamePolicy= disabled on kernel command line.
Oct  1 07:17:22 np0005464214 systemd[1]: Found device /dev/disk/by-uuid/d6a81468-b74c-4055-b485-def635ab40f8.
Oct  1 07:17:22 np0005464214 systemd[1]: Reached target Initrd Root Device.
Oct  1 07:17:22 np0005464214 kernel: ata1: found unknown device (class 0)
Oct  1 07:17:22 np0005464214 kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Oct  1 07:17:22 np0005464214 kernel: scsi 0:0:0:0: CD-ROM            QEMU     QEMU DVD-ROM     2.5+ PQ: 0 ANSI: 5
Oct  1 07:17:22 np0005464214 kernel: scsi 0:0:0:0: Attached scsi generic sg0 type 5
Oct  1 07:17:22 np0005464214 kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Oct  1 07:17:22 np0005464214 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Oct  1 07:17:22 np0005464214 systemd[1]: Finished dracut initqueue hook.
Oct  1 07:17:22 np0005464214 systemd[1]: Reached target Preparation for Remote File Systems.
Oct  1 07:17:22 np0005464214 systemd[1]: Reached target Remote Encrypted Volumes.
Oct  1 07:17:22 np0005464214 systemd[1]: Reached target Remote File Systems.
Oct  1 07:17:22 np0005464214 systemd[1]: Starting dracut pre-mount hook...
Oct  1 07:17:22 np0005464214 systemd[1]: Finished dracut pre-mount hook.
Oct  1 07:17:22 np0005464214 systemd[1]: Starting File System Check on /dev/disk/by-uuid/d6a81468-b74c-4055-b485-def635ab40f8...
Oct  1 07:17:22 np0005464214 systemd-fsck[556]: /usr/sbin/fsck.xfs: XFS file system.
Oct  1 07:17:22 np0005464214 systemd[1]: Finished File System Check on /dev/disk/by-uuid/d6a81468-b74c-4055-b485-def635ab40f8.
Oct  1 07:17:22 np0005464214 systemd[1]: Mounting /sysroot...
Oct  1 07:17:23 np0005464214 kernel: SGI XFS with ACLs, security attributes, scrub, quota, no debug enabled
Oct  1 07:17:23 np0005464214 kernel: XFS (vda1): Mounting V5 Filesystem d6a81468-b74c-4055-b485-def635ab40f8
Oct  1 07:17:23 np0005464214 kernel: XFS (vda1): Ending clean mount
Oct  1 07:17:23 np0005464214 systemd[1]: Mounted /sysroot.
Oct  1 07:17:23 np0005464214 systemd[1]: Reached target Initrd Root File System.
Oct  1 07:17:23 np0005464214 systemd[1]: Starting Mountpoints Configured in the Real Root...
Oct  1 07:17:23 np0005464214 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Oct  1 07:17:23 np0005464214 systemd[1]: Finished Mountpoints Configured in the Real Root.
Oct  1 07:17:23 np0005464214 systemd[1]: Reached target Initrd File Systems.
Oct  1 07:17:23 np0005464214 systemd[1]: Reached target Initrd Default Target.
Oct  1 07:17:23 np0005464214 systemd[1]: Starting dracut mount hook...
Oct  1 07:17:23 np0005464214 systemd[1]: Finished dracut mount hook.
Oct  1 07:17:23 np0005464214 systemd[1]: Starting dracut pre-pivot and cleanup hook...
Oct  1 07:17:23 np0005464214 rpc.idmapd[448]: exiting on signal 15
Oct  1 07:17:23 np0005464214 systemd[1]: var-lib-nfs-rpc_pipefs.mount: Deactivated successfully.
Oct  1 07:17:23 np0005464214 systemd[1]: Finished dracut pre-pivot and cleanup hook.
Oct  1 07:17:23 np0005464214 systemd[1]: Starting Cleaning Up and Shutting Down Daemons...
Oct  1 07:17:23 np0005464214 systemd[1]: Stopped target Network.
Oct  1 07:17:23 np0005464214 systemd[1]: Stopped target Remote Encrypted Volumes.
Oct  1 07:17:23 np0005464214 systemd[1]: Stopped target Timer Units.
Oct  1 07:17:23 np0005464214 systemd[1]: dbus.socket: Deactivated successfully.
Oct  1 07:17:23 np0005464214 systemd[1]: Closed D-Bus System Message Bus Socket.
Oct  1 07:17:23 np0005464214 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Oct  1 07:17:23 np0005464214 systemd[1]: Stopped dracut pre-pivot and cleanup hook.
Oct  1 07:17:23 np0005464214 systemd[1]: Stopped target Initrd Default Target.
Oct  1 07:17:23 np0005464214 systemd[1]: Stopped target Basic System.
Oct  1 07:17:23 np0005464214 systemd[1]: Stopped target Initrd Root Device.
Oct  1 07:17:23 np0005464214 systemd[1]: Stopped target Initrd /usr File System.
Oct  1 07:17:23 np0005464214 systemd[1]: Stopped target Path Units.
Oct  1 07:17:23 np0005464214 systemd[1]: Stopped target Remote File Systems.
Oct  1 07:17:23 np0005464214 systemd[1]: Stopped target Preparation for Remote File Systems.
Oct  1 07:17:23 np0005464214 systemd[1]: Stopped target Slice Units.
Oct  1 07:17:23 np0005464214 systemd[1]: Stopped target Socket Units.
Oct  1 07:17:23 np0005464214 systemd[1]: Stopped target System Initialization.
Oct  1 07:17:23 np0005464214 systemd[1]: Stopped target Local File Systems.
Oct  1 07:17:23 np0005464214 systemd[1]: Stopped target Swaps.
Oct  1 07:17:23 np0005464214 systemd[1]: dracut-mount.service: Deactivated successfully.
Oct  1 07:17:23 np0005464214 systemd[1]: Stopped dracut mount hook.
Oct  1 07:17:23 np0005464214 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Oct  1 07:17:23 np0005464214 systemd[1]: Stopped dracut pre-mount hook.
Oct  1 07:17:23 np0005464214 systemd[1]: Stopped target Local Encrypted Volumes.
Oct  1 07:17:23 np0005464214 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Oct  1 07:17:23 np0005464214 systemd[1]: Stopped Dispatch Password Requests to Console Directory Watch.
Oct  1 07:17:23 np0005464214 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Oct  1 07:17:23 np0005464214 systemd[1]: Stopped dracut initqueue hook.
Oct  1 07:17:23 np0005464214 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Oct  1 07:17:23 np0005464214 systemd[1]: Stopped Apply Kernel Variables.
Oct  1 07:17:23 np0005464214 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Oct  1 07:17:23 np0005464214 systemd[1]: Stopped Create Volatile Files and Directories.
Oct  1 07:17:23 np0005464214 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Oct  1 07:17:23 np0005464214 systemd[1]: Stopped Coldplug All udev Devices.
Oct  1 07:17:23 np0005464214 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Oct  1 07:17:23 np0005464214 systemd[1]: Stopped dracut pre-trigger hook.
Oct  1 07:17:23 np0005464214 systemd[1]: Stopping Rule-based Manager for Device Events and Files...
Oct  1 07:17:23 np0005464214 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Oct  1 07:17:23 np0005464214 systemd[1]: Stopped Setup Virtual Console.
Oct  1 07:17:23 np0005464214 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Oct  1 07:17:23 np0005464214 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Oct  1 07:17:23 np0005464214 systemd[1]: systemd-udevd.service: Deactivated successfully.
Oct  1 07:17:23 np0005464214 systemd[1]: Stopped Rule-based Manager for Device Events and Files.
Oct  1 07:17:23 np0005464214 systemd[1]: systemd-udevd.service: Consumed 1.056s CPU time.
Oct  1 07:17:23 np0005464214 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Oct  1 07:17:23 np0005464214 systemd[1]: Finished Cleaning Up and Shutting Down Daemons.
Oct  1 07:17:23 np0005464214 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Oct  1 07:17:23 np0005464214 systemd[1]: Closed udev Control Socket.
Oct  1 07:17:23 np0005464214 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Oct  1 07:17:23 np0005464214 systemd[1]: Closed udev Kernel Socket.
Oct  1 07:17:23 np0005464214 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Oct  1 07:17:23 np0005464214 systemd[1]: Stopped dracut pre-udev hook.
Oct  1 07:17:23 np0005464214 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Oct  1 07:17:23 np0005464214 systemd[1]: Stopped dracut cmdline hook.
Oct  1 07:17:23 np0005464214 systemd[1]: Starting Cleanup udev Database...
Oct  1 07:17:23 np0005464214 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Oct  1 07:17:23 np0005464214 systemd[1]: Stopped Create Static Device Nodes in /dev.
Oct  1 07:17:23 np0005464214 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Oct  1 07:17:23 np0005464214 systemd[1]: Stopped Create List of Static Device Nodes.
Oct  1 07:17:23 np0005464214 systemd[1]: systemd-sysusers.service: Deactivated successfully.
Oct  1 07:17:23 np0005464214 systemd[1]: Stopped Create System Users.
Oct  1 07:17:23 np0005464214 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Oct  1 07:17:23 np0005464214 systemd[1]: run-credentials-systemd\x2dsysusers.service.mount: Deactivated successfully.
Oct  1 07:17:23 np0005464214 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Oct  1 07:17:23 np0005464214 systemd[1]: Finished Cleanup udev Database.
Oct  1 07:17:23 np0005464214 systemd[1]: Reached target Switch Root.
Oct  1 07:17:23 np0005464214 systemd[1]: Starting Switch Root...
Oct  1 07:17:23 np0005464214 systemd[1]: Switching root.
Oct  1 07:17:23 np0005464214 systemd-journald[307]: Journal stopped
Oct  1 07:17:24 np0005464214 systemd-journald: Received SIGTERM from PID 1 (systemd).
Oct  1 07:17:24 np0005464214 kernel: audit: type=1404 audit(1759317443.926:2): enforcing=1 old_enforcing=0 auid=4294967295 ses=4294967295 enabled=1 old-enabled=1 lsm=selinux res=1
Oct  1 07:17:24 np0005464214 kernel: SELinux:  policy capability network_peer_controls=1
Oct  1 07:17:24 np0005464214 kernel: SELinux:  policy capability open_perms=1
Oct  1 07:17:24 np0005464214 kernel: SELinux:  policy capability extended_socket_class=1
Oct  1 07:17:24 np0005464214 kernel: SELinux:  policy capability always_check_network=0
Oct  1 07:17:24 np0005464214 kernel: SELinux:  policy capability cgroup_seclabel=1
Oct  1 07:17:24 np0005464214 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct  1 07:17:24 np0005464214 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct  1 07:17:24 np0005464214 kernel: audit: type=1403 audit(1759317444.091:3): auid=4294967295 ses=4294967295 lsm=selinux res=1
Oct  1 07:17:24 np0005464214 systemd: Successfully loaded SELinux policy in 170.312ms.
Oct  1 07:17:24 np0005464214 systemd: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 26.850ms.
Oct  1 07:17:24 np0005464214 systemd: systemd 252-55.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Oct  1 07:17:24 np0005464214 systemd: Detected virtualization kvm.
Oct  1 07:17:24 np0005464214 systemd: Detected architecture x86-64.
Oct  1 07:17:24 np0005464214 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  1 07:17:24 np0005464214 systemd: initrd-switch-root.service: Deactivated successfully.
Oct  1 07:17:24 np0005464214 systemd: Stopped Switch Root.
Oct  1 07:17:24 np0005464214 systemd: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Oct  1 07:17:24 np0005464214 systemd: Created slice Slice /system/getty.
Oct  1 07:17:24 np0005464214 systemd: Created slice Slice /system/serial-getty.
Oct  1 07:17:24 np0005464214 systemd: Created slice Slice /system/sshd-keygen.
Oct  1 07:17:24 np0005464214 systemd: Created slice User and Session Slice.
Oct  1 07:17:24 np0005464214 systemd: Started Dispatch Password Requests to Console Directory Watch.
Oct  1 07:17:24 np0005464214 systemd: Started Forward Password Requests to Wall Directory Watch.
Oct  1 07:17:24 np0005464214 systemd: Set up automount Arbitrary Executable File Formats File System Automount Point.
Oct  1 07:17:24 np0005464214 systemd: Reached target Local Encrypted Volumes.
Oct  1 07:17:24 np0005464214 systemd: Stopped target Switch Root.
Oct  1 07:17:24 np0005464214 systemd: Stopped target Initrd File Systems.
Oct  1 07:17:24 np0005464214 systemd: Stopped target Initrd Root File System.
Oct  1 07:17:24 np0005464214 systemd: Reached target Local Integrity Protected Volumes.
Oct  1 07:17:24 np0005464214 systemd: Reached target Path Units.
Oct  1 07:17:24 np0005464214 systemd: Reached target rpc_pipefs.target.
Oct  1 07:17:24 np0005464214 systemd: Reached target Slice Units.
Oct  1 07:17:24 np0005464214 systemd: Reached target Swaps.
Oct  1 07:17:24 np0005464214 systemd: Reached target Local Verity Protected Volumes.
Oct  1 07:17:24 np0005464214 systemd: Listening on RPCbind Server Activation Socket.
Oct  1 07:17:24 np0005464214 systemd: Reached target RPC Port Mapper.
Oct  1 07:17:24 np0005464214 systemd: Listening on Process Core Dump Socket.
Oct  1 07:17:24 np0005464214 systemd: Listening on initctl Compatibility Named Pipe.
Oct  1 07:17:24 np0005464214 systemd: Listening on udev Control Socket.
Oct  1 07:17:24 np0005464214 systemd: Listening on udev Kernel Socket.
Oct  1 07:17:24 np0005464214 systemd: Mounting Huge Pages File System...
Oct  1 07:17:24 np0005464214 systemd: Mounting POSIX Message Queue File System...
Oct  1 07:17:24 np0005464214 systemd: Mounting Kernel Debug File System...
Oct  1 07:17:24 np0005464214 systemd: Mounting Kernel Trace File System...
Oct  1 07:17:24 np0005464214 systemd: Kernel Module supporting RPCSEC_GSS was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Oct  1 07:17:24 np0005464214 systemd: Starting Create List of Static Device Nodes...
Oct  1 07:17:24 np0005464214 systemd: Starting Load Kernel Module configfs...
Oct  1 07:17:24 np0005464214 systemd: Starting Load Kernel Module drm...
Oct  1 07:17:24 np0005464214 systemd: Starting Load Kernel Module efi_pstore...
Oct  1 07:17:24 np0005464214 systemd: Starting Load Kernel Module fuse...
Oct  1 07:17:24 np0005464214 systemd: Starting Read and set NIS domainname from /etc/sysconfig/network...
Oct  1 07:17:24 np0005464214 systemd: systemd-fsck-root.service: Deactivated successfully.
Oct  1 07:17:24 np0005464214 systemd: Stopped File System Check on Root Device.
Oct  1 07:17:24 np0005464214 systemd: Stopped Journal Service.
Oct  1 07:17:24 np0005464214 systemd: Starting Journal Service...
Oct  1 07:17:24 np0005464214 systemd: Load Kernel Modules was skipped because no trigger condition checks were met.
Oct  1 07:17:24 np0005464214 systemd: Starting Generate network units from Kernel command line...
Oct  1 07:17:24 np0005464214 systemd: TPM2 PCR Machine ID Measurement was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Oct  1 07:17:24 np0005464214 kernel: fuse: init (API version 7.37)
Oct  1 07:17:24 np0005464214 systemd: Starting Remount Root and Kernel File Systems...
Oct  1 07:17:24 np0005464214 systemd: Repartition Root Disk was skipped because no trigger condition checks were met.
Oct  1 07:17:24 np0005464214 systemd: Starting Apply Kernel Variables...
Oct  1 07:17:24 np0005464214 systemd: Starting Coldplug All udev Devices...
Oct  1 07:17:24 np0005464214 systemd-journald[679]: Journal started
Oct  1 07:17:24 np0005464214 systemd-journald[679]: Runtime Journal (/run/log/journal/21983c68f36a73745cc172a394ebc51d) is 8.0M, max 153.5M, 145.5M free.
Oct  1 07:17:24 np0005464214 systemd[1]: Queued start job for default target Multi-User System.
Oct  1 07:17:24 np0005464214 systemd[1]: systemd-journald.service: Deactivated successfully.
Oct  1 07:17:24 np0005464214 systemd: Started Journal Service.
Oct  1 07:17:24 np0005464214 kernel: xfs filesystem being remounted at / supports timestamps until 2038 (0x7fffffff)
Oct  1 07:17:24 np0005464214 systemd[1]: Mounted Huge Pages File System.
Oct  1 07:17:24 np0005464214 systemd[1]: Mounted POSIX Message Queue File System.
Oct  1 07:17:24 np0005464214 systemd[1]: Mounted Kernel Debug File System.
Oct  1 07:17:24 np0005464214 systemd[1]: Mounted Kernel Trace File System.
Oct  1 07:17:24 np0005464214 systemd[1]: Finished Create List of Static Device Nodes.
Oct  1 07:17:24 np0005464214 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Oct  1 07:17:24 np0005464214 systemd[1]: Finished Load Kernel Module configfs.
Oct  1 07:17:24 np0005464214 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Oct  1 07:17:24 np0005464214 systemd[1]: Finished Load Kernel Module efi_pstore.
Oct  1 07:17:24 np0005464214 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Oct  1 07:17:24 np0005464214 systemd[1]: Finished Load Kernel Module fuse.
Oct  1 07:17:24 np0005464214 systemd[1]: Finished Read and set NIS domainname from /etc/sysconfig/network.
Oct  1 07:17:24 np0005464214 systemd[1]: Finished Generate network units from Kernel command line.
Oct  1 07:17:24 np0005464214 systemd[1]: Finished Remount Root and Kernel File Systems.
Oct  1 07:17:24 np0005464214 systemd[1]: Finished Apply Kernel Variables.
Oct  1 07:17:24 np0005464214 systemd[1]: Mounting FUSE Control File System...
Oct  1 07:17:24 np0005464214 systemd[1]: First Boot Wizard was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Oct  1 07:17:24 np0005464214 systemd[1]: Starting Rebuild Hardware Database...
Oct  1 07:17:24 np0005464214 systemd[1]: Starting Flush Journal to Persistent Storage...
Oct  1 07:17:24 np0005464214 systemd[1]: Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Oct  1 07:17:24 np0005464214 systemd[1]: Starting Load/Save OS Random Seed...
Oct  1 07:17:24 np0005464214 systemd[1]: Starting Create System Users...
Oct  1 07:17:24 np0005464214 systemd[1]: Mounted FUSE Control File System.
Oct  1 07:17:24 np0005464214 systemd-journald[679]: Runtime Journal (/run/log/journal/21983c68f36a73745cc172a394ebc51d) is 8.0M, max 153.5M, 145.5M free.
Oct  1 07:17:24 np0005464214 systemd-journald[679]: Received client request to flush runtime journal.
Oct  1 07:17:24 np0005464214 systemd[1]: Finished Flush Journal to Persistent Storage.
Oct  1 07:17:24 np0005464214 systemd[1]: Finished Load/Save OS Random Seed.
Oct  1 07:17:24 np0005464214 systemd[1]: First Boot Complete was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Oct  1 07:17:24 np0005464214 kernel: ACPI: bus type drm_connector registered
Oct  1 07:17:24 np0005464214 systemd[1]: modprobe@drm.service: Deactivated successfully.
Oct  1 07:17:24 np0005464214 systemd[1]: Finished Load Kernel Module drm.
Oct  1 07:17:24 np0005464214 systemd[1]: Finished Coldplug All udev Devices.
Oct  1 07:17:24 np0005464214 systemd[1]: Finished Create System Users.
Oct  1 07:17:24 np0005464214 systemd[1]: Starting Create Static Device Nodes in /dev...
Oct  1 07:17:24 np0005464214 systemd[1]: Finished Create Static Device Nodes in /dev.
Oct  1 07:17:25 np0005464214 systemd[1]: Reached target Preparation for Local File Systems.
Oct  1 07:17:25 np0005464214 systemd[1]: Reached target Local File Systems.
Oct  1 07:17:25 np0005464214 systemd[1]: Starting Rebuild Dynamic Linker Cache...
Oct  1 07:17:25 np0005464214 systemd[1]: Mark the need to relabel after reboot was skipped because of an unmet condition check (ConditionSecurity=!selinux).
Oct  1 07:17:25 np0005464214 systemd[1]: Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct  1 07:17:25 np0005464214 systemd[1]: Update Boot Loader Random Seed was skipped because no trigger condition checks were met.
Oct  1 07:17:25 np0005464214 systemd[1]: Starting Automatic Boot Loader Update...
Oct  1 07:17:25 np0005464214 systemd[1]: Commit a transient machine-id on disk was skipped because of an unmet condition check (ConditionPathIsMountPoint=/etc/machine-id).
Oct  1 07:17:25 np0005464214 systemd[1]: Starting Create Volatile Files and Directories...
Oct  1 07:17:25 np0005464214 bootctl[699]: Couldn't find EFI system partition, skipping.
Oct  1 07:17:25 np0005464214 systemd[1]: Finished Automatic Boot Loader Update.
Oct  1 07:17:25 np0005464214 systemd[1]: Finished Create Volatile Files and Directories.
Oct  1 07:17:25 np0005464214 systemd[1]: Starting Security Auditing Service...
Oct  1 07:17:25 np0005464214 systemd[1]: Starting RPC Bind...
Oct  1 07:17:25 np0005464214 systemd[1]: Starting Rebuild Journal Catalog...
Oct  1 07:17:25 np0005464214 auditd[705]: audit dispatcher initialized with q_depth=2000 and 1 active plugins
Oct  1 07:17:25 np0005464214 auditd[705]: Init complete, auditd 3.1.5 listening for events (startup state enable)
Oct  1 07:17:25 np0005464214 systemd[1]: Finished Rebuild Journal Catalog.
Oct  1 07:17:25 np0005464214 systemd[1]: Started RPC Bind.
Oct  1 07:17:25 np0005464214 augenrules[710]: /sbin/augenrules: No change
Oct  1 07:17:25 np0005464214 augenrules[725]: No rules
Oct  1 07:17:25 np0005464214 augenrules[725]: enabled 1
Oct  1 07:17:25 np0005464214 augenrules[725]: failure 1
Oct  1 07:17:25 np0005464214 augenrules[725]: pid 705
Oct  1 07:17:25 np0005464214 augenrules[725]: rate_limit 0
Oct  1 07:17:25 np0005464214 augenrules[725]: backlog_limit 8192
Oct  1 07:17:25 np0005464214 augenrules[725]: lost 0
Oct  1 07:17:25 np0005464214 augenrules[725]: backlog 3
Oct  1 07:17:25 np0005464214 augenrules[725]: backlog_wait_time 60000
Oct  1 07:17:25 np0005464214 augenrules[725]: backlog_wait_time_actual 0
Oct  1 07:17:25 np0005464214 augenrules[725]: enabled 1
Oct  1 07:17:25 np0005464214 augenrules[725]: failure 1
Oct  1 07:17:25 np0005464214 augenrules[725]: pid 705
Oct  1 07:17:25 np0005464214 augenrules[725]: rate_limit 0
Oct  1 07:17:25 np0005464214 augenrules[725]: backlog_limit 8192
Oct  1 07:17:25 np0005464214 augenrules[725]: lost 0
Oct  1 07:17:25 np0005464214 augenrules[725]: backlog 0
Oct  1 07:17:25 np0005464214 augenrules[725]: backlog_wait_time 60000
Oct  1 07:17:25 np0005464214 augenrules[725]: backlog_wait_time_actual 0
Oct  1 07:17:25 np0005464214 augenrules[725]: enabled 1
Oct  1 07:17:25 np0005464214 augenrules[725]: failure 1
Oct  1 07:17:25 np0005464214 augenrules[725]: pid 705
Oct  1 07:17:25 np0005464214 augenrules[725]: rate_limit 0
Oct  1 07:17:25 np0005464214 augenrules[725]: backlog_limit 8192
Oct  1 07:17:25 np0005464214 augenrules[725]: lost 0
Oct  1 07:17:25 np0005464214 augenrules[725]: backlog 0
Oct  1 07:17:25 np0005464214 augenrules[725]: backlog_wait_time 60000
Oct  1 07:17:25 np0005464214 augenrules[725]: backlog_wait_time_actual 0
Oct  1 07:17:25 np0005464214 systemd[1]: Started Security Auditing Service.
Oct  1 07:17:25 np0005464214 systemd[1]: Starting Record System Boot/Shutdown in UTMP...
Oct  1 07:17:25 np0005464214 systemd[1]: Finished Record System Boot/Shutdown in UTMP.
Oct  1 07:17:25 np0005464214 systemd[1]: Finished Rebuild Hardware Database.
Oct  1 07:17:25 np0005464214 systemd[1]: Starting Rule-based Manager for Device Events and Files...
Oct  1 07:17:25 np0005464214 systemd-udevd[733]: Using default interface naming scheme 'rhel-9.0'.
Oct  1 07:17:25 np0005464214 systemd[1]: Started Rule-based Manager for Device Events and Files.
Oct  1 07:17:25 np0005464214 systemd[1]: Starting Load Kernel Module configfs...
Oct  1 07:17:25 np0005464214 systemd[1]: Finished Rebuild Dynamic Linker Cache.
Oct  1 07:17:25 np0005464214 systemd[1]: Starting Update is Completed...
Oct  1 07:17:25 np0005464214 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Oct  1 07:17:25 np0005464214 systemd[1]: Finished Load Kernel Module configfs.
Oct  1 07:17:25 np0005464214 systemd[1]: Condition check resulted in /dev/ttyS0 being skipped.
Oct  1 07:17:25 np0005464214 systemd[1]: Finished Update is Completed.
Oct  1 07:17:25 np0005464214 systemd-udevd[748]: Network interface NamePolicy= disabled on kernel command line.
Oct  1 07:17:25 np0005464214 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Oct  1 07:17:25 np0005464214 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Oct  1 07:17:25 np0005464214 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Oct  1 07:17:25 np0005464214 kernel: input: PC Speaker as /devices/platform/pcspkr/input/input6
Oct  1 07:17:25 np0005464214 systemd[1]: Reached target System Initialization.
Oct  1 07:17:25 np0005464214 systemd[1]: Started dnf makecache --timer.
Oct  1 07:17:25 np0005464214 systemd[1]: Started Daily rotation of log files.
Oct  1 07:17:25 np0005464214 systemd[1]: Started Daily Cleanup of Temporary Directories.
Oct  1 07:17:25 np0005464214 systemd[1]: Reached target Timer Units.
Oct  1 07:17:25 np0005464214 systemd[1]: Listening on D-Bus System Message Bus Socket.
Oct  1 07:17:25 np0005464214 systemd[1]: Listening on SSSD Kerberos Cache Manager responder socket.
Oct  1 07:17:25 np0005464214 systemd[1]: Reached target Socket Units.
Oct  1 07:17:25 np0005464214 systemd[1]: Starting D-Bus System Message Bus...
Oct  1 07:17:25 np0005464214 systemd[1]: TPM2 PCR Barrier (Initialization) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Oct  1 07:17:25 np0005464214 kernel: kvm_amd: TSC scaling supported
Oct  1 07:17:25 np0005464214 kernel: kvm_amd: Nested Virtualization enabled
Oct  1 07:17:25 np0005464214 kernel: kvm_amd: Nested Paging enabled
Oct  1 07:17:25 np0005464214 kernel: kvm_amd: LBR virtualization supported
Oct  1 07:17:25 np0005464214 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0
Oct  1 07:17:25 np0005464214 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console
Oct  1 07:17:25 np0005464214 systemd[1]: Started D-Bus System Message Bus.
Oct  1 07:17:25 np0005464214 dbus-broker-lau[784]: Ready
Oct  1 07:17:25 np0005464214 kernel: Console: switching to colour dummy device 80x25
Oct  1 07:17:25 np0005464214 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Oct  1 07:17:25 np0005464214 kernel: [drm] features: -context_init
Oct  1 07:17:25 np0005464214 kernel: [drm] number of scanouts: 1
Oct  1 07:17:25 np0005464214 kernel: [drm] number of cap sets: 0
Oct  1 07:17:25 np0005464214 systemd[1]: Reached target Basic System.
Oct  1 07:17:25 np0005464214 kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:02.0 on minor 0
Oct  1 07:17:25 np0005464214 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Oct  1 07:17:25 np0005464214 kernel: Console: switching to colour frame buffer device 128x48
Oct  1 07:17:25 np0005464214 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Oct  1 07:17:25 np0005464214 systemd[1]: Starting NTP client/server...
Oct  1 07:17:25 np0005464214 systemd[1]: Starting Cloud-init: Local Stage (pre-network)...
Oct  1 07:17:25 np0005464214 systemd[1]: Starting Restore /run/initramfs on shutdown...
Oct  1 07:17:25 np0005464214 systemd[1]: Starting IPv4 firewall with iptables...
Oct  1 07:17:25 np0005464214 systemd[1]: Started irqbalance daemon.
Oct  1 07:17:25 np0005464214 systemd[1]: Load CPU microcode update was skipped because of an unmet condition check (ConditionPathExists=/sys/devices/system/cpu/microcode/reload).
Oct  1 07:17:25 np0005464214 systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Oct  1 07:17:25 np0005464214 systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Oct  1 07:17:25 np0005464214 systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Oct  1 07:17:25 np0005464214 systemd[1]: Reached target sshd-keygen.target.
Oct  1 07:17:25 np0005464214 systemd[1]: System Security Services Daemon was skipped because no trigger condition checks were met.
Oct  1 07:17:25 np0005464214 systemd[1]: Reached target User and Group Name Lookups.
Oct  1 07:17:25 np0005464214 systemd[1]: Starting User Login Management...
Oct  1 07:17:25 np0005464214 systemd[1]: Finished Restore /run/initramfs on shutdown.
Oct  1 07:17:25 np0005464214 chronyd[828]: chronyd version 4.6.1 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +ASYNCDNS +NTS +SECHASH +IPV6 +DEBUG)
Oct  1 07:17:25 np0005464214 chronyd[828]: Loaded 0 symmetric keys
Oct  1 07:17:25 np0005464214 chronyd[828]: Using right/UTC timezone to obtain leap second data
Oct  1 07:17:25 np0005464214 chronyd[828]: Loaded seccomp filter (level 2)
Oct  1 07:17:25 np0005464214 systemd[1]: Started NTP client/server.
Oct  1 07:17:25 np0005464214 systemd-logind[818]: New seat seat0.
Oct  1 07:17:25 np0005464214 systemd-logind[818]: Watching system buttons on /dev/input/event0 (Power Button)
Oct  1 07:17:25 np0005464214 systemd-logind[818]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Oct  1 07:17:25 np0005464214 systemd[1]: Started User Login Management.
Oct  1 07:17:25 np0005464214 kernel: Warning: Deprecated Driver is detected: nft_compat will not be maintained in a future major release and may be disabled
Oct  1 07:17:25 np0005464214 kernel: Warning: Deprecated Driver is detected: nft_compat_module_init will not be maintained in a future major release and may be disabled
Oct  1 07:17:26 np0005464214 iptables.init[812]: iptables: Applying firewall rules: [  OK  ]
Oct  1 07:17:26 np0005464214 systemd[1]: Finished IPv4 firewall with iptables.
Oct  1 07:17:26 np0005464214 cloud-init[842]: Cloud-init v. 24.4-7.el9 running 'init-local' at Wed, 01 Oct 2025 11:17:26 +0000. Up 7.30 seconds.
Oct  1 07:17:26 np0005464214 systemd[1]: run-cloud\x2dinit-tmp-tmpoevjftbr.mount: Deactivated successfully.
Oct  1 07:17:26 np0005464214 systemd[1]: Starting Hostname Service...
Oct  1 07:17:27 np0005464214 systemd[1]: Started Hostname Service.
Oct  1 07:17:27 np0005464214 systemd-hostnamed[856]: Hostname set to <np0005464214.novalocal> (static)
Oct  1 07:17:27 np0005464214 systemd[1]: Finished Cloud-init: Local Stage (pre-network).
Oct  1 07:17:27 np0005464214 systemd[1]: Reached target Preparation for Network.
Oct  1 07:17:27 np0005464214 systemd[1]: Starting Network Manager...
Oct  1 07:17:27 np0005464214 NetworkManager[860]: <info>  [1759317447.2574] NetworkManager (version 1.54.1-1.el9) is starting... (boot:59648e32-2da2-4a47-989c-dbddfc6922f6)
Oct  1 07:17:27 np0005464214 NetworkManager[860]: <info>  [1759317447.2581] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Oct  1 07:17:27 np0005464214 NetworkManager[860]: <info>  [1759317447.2754] manager[0x55a7c5ae3080]: monitoring kernel firmware directory '/lib/firmware'.
Oct  1 07:17:27 np0005464214 NetworkManager[860]: <info>  [1759317447.2820] hostname: hostname: using hostnamed
Oct  1 07:17:27 np0005464214 NetworkManager[860]: <info>  [1759317447.2820] hostname: static hostname changed from (none) to "np0005464214.novalocal"
Oct  1 07:17:27 np0005464214 NetworkManager[860]: <info>  [1759317447.2826] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Oct  1 07:17:27 np0005464214 NetworkManager[860]: <info>  [1759317447.3007] manager[0x55a7c5ae3080]: rfkill: Wi-Fi hardware radio set enabled
Oct  1 07:17:27 np0005464214 NetworkManager[860]: <info>  [1759317447.3008] manager[0x55a7c5ae3080]: rfkill: WWAN hardware radio set enabled
Oct  1 07:17:27 np0005464214 systemd[1]: Listening on Load/Save RF Kill Switch Status /dev/rfkill Watch.
Oct  1 07:17:27 np0005464214 NetworkManager[860]: <info>  [1759317447.3137] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-team.so)
Oct  1 07:17:27 np0005464214 NetworkManager[860]: <info>  [1759317447.3138] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Oct  1 07:17:27 np0005464214 NetworkManager[860]: <info>  [1759317447.3139] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Oct  1 07:17:27 np0005464214 NetworkManager[860]: <info>  [1759317447.3140] manager: Networking is enabled by state file
Oct  1 07:17:27 np0005464214 NetworkManager[860]: <info>  [1759317447.3143] settings: Loaded settings plugin: keyfile (internal)
Oct  1 07:17:27 np0005464214 NetworkManager[860]: <info>  [1759317447.3188] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Oct  1 07:17:27 np0005464214 NetworkManager[860]: <info>  [1759317447.3231] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Oct  1 07:17:27 np0005464214 NetworkManager[860]: <info>  [1759317447.3260] dhcp: init: Using DHCP client 'internal'
Oct  1 07:17:27 np0005464214 NetworkManager[860]: <info>  [1759317447.3265] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Oct  1 07:17:27 np0005464214 NetworkManager[860]: <info>  [1759317447.3287] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  1 07:17:27 np0005464214 NetworkManager[860]: <info>  [1759317447.3306] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Oct  1 07:17:27 np0005464214 NetworkManager[860]: <info>  [1759317447.3319] device (lo): Activation: starting connection 'lo' (71a0a298-c086-43ce-b223-7fae93260bdf)
Oct  1 07:17:27 np0005464214 NetworkManager[860]: <info>  [1759317447.3334] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Oct  1 07:17:27 np0005464214 NetworkManager[860]: <info>  [1759317447.3340] device (eth0): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct  1 07:17:27 np0005464214 NetworkManager[860]: <info>  [1759317447.3386] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Oct  1 07:17:27 np0005464214 NetworkManager[860]: <info>  [1759317447.3393] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Oct  1 07:17:27 np0005464214 systemd[1]: Starting Network Manager Script Dispatcher Service...
Oct  1 07:17:27 np0005464214 NetworkManager[860]: <info>  [1759317447.3396] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Oct  1 07:17:27 np0005464214 NetworkManager[860]: <info>  [1759317447.3399] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Oct  1 07:17:27 np0005464214 NetworkManager[860]: <info>  [1759317447.3402] device (eth0): carrier: link connected
Oct  1 07:17:27 np0005464214 NetworkManager[860]: <info>  [1759317447.3405] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Oct  1 07:17:27 np0005464214 NetworkManager[860]: <info>  [1759317447.3416] device (eth0): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Oct  1 07:17:27 np0005464214 NetworkManager[860]: <info>  [1759317447.3429] policy: auto-activating connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Oct  1 07:17:27 np0005464214 systemd[1]: Started Network Manager.
Oct  1 07:17:27 np0005464214 NetworkManager[860]: <info>  [1759317447.3435] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Oct  1 07:17:27 np0005464214 NetworkManager[860]: <info>  [1759317447.3436] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct  1 07:17:27 np0005464214 NetworkManager[860]: <info>  [1759317447.3440] manager: NetworkManager state is now CONNECTING
Oct  1 07:17:27 np0005464214 NetworkManager[860]: <info>  [1759317447.3442] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'full')
Oct  1 07:17:27 np0005464214 NetworkManager[860]: <info>  [1759317447.3454] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct  1 07:17:27 np0005464214 systemd[1]: Reached target Network.
Oct  1 07:17:27 np0005464214 NetworkManager[860]: <info>  [1759317447.3458] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Oct  1 07:17:27 np0005464214 systemd[1]: Starting Network Manager Wait Online...
Oct  1 07:17:27 np0005464214 systemd[1]: Starting GSSAPI Proxy Daemon...
Oct  1 07:17:27 np0005464214 systemd[1]: Started Network Manager Script Dispatcher Service.
Oct  1 07:17:27 np0005464214 NetworkManager[860]: <info>  [1759317447.3680] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Oct  1 07:17:27 np0005464214 NetworkManager[860]: <info>  [1759317447.3683] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Oct  1 07:17:27 np0005464214 NetworkManager[860]: <info>  [1759317447.3695] device (lo): Activation: successful, device activated.
Oct  1 07:17:27 np0005464214 systemd[1]: Started GSSAPI Proxy Daemon.
Oct  1 07:17:27 np0005464214 systemd[1]: RPC security service for NFS client and server was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Oct  1 07:17:27 np0005464214 systemd[1]: Reached target NFS client services.
Oct  1 07:17:27 np0005464214 systemd[1]: Reached target Preparation for Remote File Systems.
Oct  1 07:17:27 np0005464214 systemd[1]: Reached target Remote File Systems.
Oct  1 07:17:27 np0005464214 systemd[1]: TPM2 PCR Barrier (User) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Oct  1 07:17:30 np0005464214 NetworkManager[860]: <info>  [1759317450.1266] dhcp4 (eth0): state changed new lease, address=38.102.83.245
Oct  1 07:17:30 np0005464214 NetworkManager[860]: <info>  [1759317450.1282] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Oct  1 07:17:30 np0005464214 NetworkManager[860]: <info>  [1759317450.1306] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct  1 07:17:30 np0005464214 NetworkManager[860]: <info>  [1759317450.1357] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct  1 07:17:30 np0005464214 NetworkManager[860]: <info>  [1759317450.1362] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct  1 07:17:30 np0005464214 NetworkManager[860]: <info>  [1759317450.1372] manager: NetworkManager state is now CONNECTED_SITE
Oct  1 07:17:30 np0005464214 NetworkManager[860]: <info>  [1759317450.1385] device (eth0): Activation: successful, device activated.
Oct  1 07:17:30 np0005464214 NetworkManager[860]: <info>  [1759317450.1397] manager: NetworkManager state is now CONNECTED_GLOBAL
Oct  1 07:17:30 np0005464214 NetworkManager[860]: <info>  [1759317450.1405] manager: startup complete
Oct  1 07:17:30 np0005464214 systemd[1]: Finished Network Manager Wait Online.
Oct  1 07:17:30 np0005464214 systemd[1]: Starting Cloud-init: Network Stage...
Oct  1 07:17:30 np0005464214 cloud-init[924]: Cloud-init v. 24.4-7.el9 running 'init' at Wed, 01 Oct 2025 11:17:30 +0000. Up 11.17 seconds.
Oct  1 07:17:30 np0005464214 cloud-init[924]: ci-info: +++++++++++++++++++++++++++++++++++++++Net device info+++++++++++++++++++++++++++++++++++++++
Oct  1 07:17:30 np0005464214 cloud-init[924]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Oct  1 07:17:30 np0005464214 cloud-init[924]: ci-info: | Device |  Up  |           Address            |      Mask     | Scope  |     Hw-Address    |
Oct  1 07:17:30 np0005464214 cloud-init[924]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Oct  1 07:17:30 np0005464214 cloud-init[924]: ci-info: |  eth0  | True |        38.102.83.245         | 255.255.255.0 | global | fa:16:3e:d5:7e:d5 |
Oct  1 07:17:30 np0005464214 cloud-init[924]: ci-info: |  eth0  | True | fe80::f816:3eff:fed5:7ed5/64 |       .       |  link  | fa:16:3e:d5:7e:d5 |
Oct  1 07:17:30 np0005464214 cloud-init[924]: ci-info: |   lo   | True |          127.0.0.1           |   255.0.0.0   |  host  |         .         |
Oct  1 07:17:30 np0005464214 cloud-init[924]: ci-info: |   lo   | True |           ::1/128            |       .       |  host  |         .         |
Oct  1 07:17:30 np0005464214 cloud-init[924]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Oct  1 07:17:30 np0005464214 cloud-init[924]: ci-info: +++++++++++++++++++++++++++++++++Route IPv4 info+++++++++++++++++++++++++++++++++
Oct  1 07:17:30 np0005464214 cloud-init[924]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Oct  1 07:17:30 np0005464214 cloud-init[924]: ci-info: | Route |   Destination   |    Gateway    |     Genmask     | Interface | Flags |
Oct  1 07:17:30 np0005464214 cloud-init[924]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Oct  1 07:17:30 np0005464214 cloud-init[924]: ci-info: |   0   |     0.0.0.0     |  38.102.83.1  |     0.0.0.0     |    eth0   |   UG  |
Oct  1 07:17:30 np0005464214 cloud-init[924]: ci-info: |   1   |   38.102.83.0   |    0.0.0.0    |  255.255.255.0  |    eth0   |   U   |
Oct  1 07:17:30 np0005464214 cloud-init[924]: ci-info: |   2   | 169.254.169.254 | 38.102.83.126 | 255.255.255.255 |    eth0   |  UGH  |
Oct  1 07:17:30 np0005464214 cloud-init[924]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Oct  1 07:17:30 np0005464214 cloud-init[924]: ci-info: +++++++++++++++++++Route IPv6 info+++++++++++++++++++
Oct  1 07:17:30 np0005464214 cloud-init[924]: ci-info: +-------+-------------+---------+-----------+-------+
Oct  1 07:17:30 np0005464214 cloud-init[924]: ci-info: | Route | Destination | Gateway | Interface | Flags |
Oct  1 07:17:30 np0005464214 cloud-init[924]: ci-info: +-------+-------------+---------+-----------+-------+
Oct  1 07:17:30 np0005464214 cloud-init[924]: ci-info: |   1   |  fe80::/64  |    ::   |    eth0   |   U   |
Oct  1 07:17:30 np0005464214 cloud-init[924]: ci-info: |   3   |    local    |    ::   |    eth0   |   U   |
Oct  1 07:17:30 np0005464214 cloud-init[924]: ci-info: |   4   |  multicast  |    ::   |    eth0   |   U   |
Oct  1 07:17:30 np0005464214 cloud-init[924]: ci-info: +-------+-------------+---------+-----------+-------+
Oct  1 07:17:31 np0005464214 cloud-init[924]: Generating public/private rsa key pair.
Oct  1 07:17:31 np0005464214 cloud-init[924]: Your identification has been saved in /etc/ssh/ssh_host_rsa_key
Oct  1 07:17:31 np0005464214 cloud-init[924]: Your public key has been saved in /etc/ssh/ssh_host_rsa_key.pub
Oct  1 07:17:31 np0005464214 cloud-init[924]: The key fingerprint is:
Oct  1 07:17:31 np0005464214 cloud-init[924]: SHA256:pgrCOGtIpezjdKJA13ruYmzWKAoe0CGFYbphp2b4ifw root@np0005464214.novalocal
Oct  1 07:17:31 np0005464214 cloud-init[924]: The key's randomart image is:
Oct  1 07:17:31 np0005464214 cloud-init[924]: +---[RSA 3072]----+
Oct  1 07:17:31 np0005464214 cloud-init[924]: |.+.              |
Oct  1 07:17:31 np0005464214 cloud-init[924]: |+.               |
Oct  1 07:17:31 np0005464214 cloud-init[924]: |+...             |
Oct  1 07:17:31 np0005464214 cloud-init[924]: |o=oo.            |
Oct  1 07:17:31 np0005464214 cloud-init[924]: |==+. .  S        |
Oct  1 07:17:31 np0005464214 cloud-init[924]: |O*...  o         |
Oct  1 07:17:31 np0005464214 cloud-init[924]: |@*=oo..          |
Oct  1 07:17:31 np0005464214 cloud-init[924]: |BO=Ooo           |
Oct  1 07:17:31 np0005464214 cloud-init[924]: |Bo*E+o           |
Oct  1 07:17:31 np0005464214 cloud-init[924]: +----[SHA256]-----+
Oct  1 07:17:31 np0005464214 cloud-init[924]: Generating public/private ecdsa key pair.
Oct  1 07:17:31 np0005464214 cloud-init[924]: Your identification has been saved in /etc/ssh/ssh_host_ecdsa_key
Oct  1 07:17:31 np0005464214 cloud-init[924]: Your public key has been saved in /etc/ssh/ssh_host_ecdsa_key.pub
Oct  1 07:17:31 np0005464214 cloud-init[924]: The key fingerprint is:
Oct  1 07:17:31 np0005464214 cloud-init[924]: SHA256:j6kliYfLomOaGvhSLYQUgFXJbclTNJRXm38fv4fWQvA root@np0005464214.novalocal
Oct  1 07:17:31 np0005464214 cloud-init[924]: The key's randomart image is:
Oct  1 07:17:31 np0005464214 cloud-init[924]: +---[ECDSA 256]---+
Oct  1 07:17:31 np0005464214 cloud-init[924]: |+ooo.+ ==. ..    |
Oct  1 07:17:31 np0005464214 cloud-init[924]: |..  o * ...  o   |
Oct  1 07:17:31 np0005464214 cloud-init[924]: |..   . . .  o    |
Oct  1 07:17:31 np0005464214 cloud-init[924]: |. .         ..   |
Oct  1 07:17:31 np0005464214 cloud-init[924]: | . .    S    o...|
Oct  1 07:17:31 np0005464214 cloud-init[924]: |. o .o . +    E.+|
Oct  1 07:17:31 np0005464214 cloud-init[924]: |o. .o + + .  . oo|
Oct  1 07:17:31 np0005464214 cloud-init[924]: |o=.. o +      + +|
Oct  1 07:17:31 np0005464214 cloud-init[924]: |O+..o .      . o.|
Oct  1 07:17:31 np0005464214 cloud-init[924]: +----[SHA256]-----+
Oct  1 07:17:31 np0005464214 cloud-init[924]: Generating public/private ed25519 key pair.
Oct  1 07:17:31 np0005464214 cloud-init[924]: Your identification has been saved in /etc/ssh/ssh_host_ed25519_key
Oct  1 07:17:31 np0005464214 cloud-init[924]: Your public key has been saved in /etc/ssh/ssh_host_ed25519_key.pub
Oct  1 07:17:31 np0005464214 cloud-init[924]: The key fingerprint is:
Oct  1 07:17:31 np0005464214 cloud-init[924]: SHA256:ee4z9OjMklqOKhDc7ejH0A06DGzQVKg0IWRWpgYYxWQ root@np0005464214.novalocal
Oct  1 07:17:31 np0005464214 cloud-init[924]: The key's randomart image is:
Oct  1 07:17:31 np0005464214 cloud-init[924]: +--[ED25519 256]--+
Oct  1 07:17:31 np0005464214 cloud-init[924]: |*%E=.            |
Oct  1 07:17:31 np0005464214 cloud-init[924]: |B+*              |
Oct  1 07:17:31 np0005464214 cloud-init[924]: |=+o .            |
Oct  1 07:17:31 np0005464214 cloud-init[924]: |+= . o   .       |
Oct  1 07:17:31 np0005464214 cloud-init[924]: |..o = o S .      |
Oct  1 07:17:31 np0005464214 cloud-init[924]: |.  * o . o.      |
Oct  1 07:17:31 np0005464214 cloud-init[924]: | .. +   .o.o     |
Oct  1 07:17:31 np0005464214 cloud-init[924]: |  .. o +o++ .    |
Oct  1 07:17:31 np0005464214 cloud-init[924]: |   .o.o..o=o     |
Oct  1 07:17:31 np0005464214 cloud-init[924]: +----[SHA256]-----+
Oct  1 07:17:31 np0005464214 sm-notify[1008]: Version 2.5.4 starting
Oct  1 07:17:31 np0005464214 systemd[1]: Finished Cloud-init: Network Stage.
Oct  1 07:17:31 np0005464214 systemd[1]: Reached target Cloud-config availability.
Oct  1 07:17:31 np0005464214 systemd[1]: Reached target Network is Online.
Oct  1 07:17:31 np0005464214 systemd[1]: Starting Cloud-init: Config Stage...
Oct  1 07:17:31 np0005464214 systemd[1]: Starting Notify NFS peers of a restart...
Oct  1 07:17:31 np0005464214 systemd[1]: Starting System Logging Service...
Oct  1 07:17:31 np0005464214 systemd[1]: Starting OpenSSH server daemon...
Oct  1 07:17:31 np0005464214 systemd[1]: Starting Permit User Sessions...
Oct  1 07:17:31 np0005464214 systemd[1]: Started Notify NFS peers of a restart.
Oct  1 07:17:31 np0005464214 systemd[1]: Finished Permit User Sessions.
Oct  1 07:17:31 np0005464214 systemd[1]: Started Command Scheduler.
Oct  1 07:17:31 np0005464214 systemd[1]: Started Getty on tty1.
Oct  1 07:17:31 np0005464214 systemd[1]: Started Serial Getty on ttyS0.
Oct  1 07:17:31 np0005464214 systemd[1]: Reached target Login Prompts.
Oct  1 07:17:31 np0005464214 systemd[1]: Started OpenSSH server daemon.
Oct  1 07:17:31 np0005464214 rsyslogd[1009]: [origin software="rsyslogd" swVersion="8.2506.0-2.el9" x-pid="1009" x-info="https://www.rsyslog.com"] start
Oct  1 07:17:31 np0005464214 systemd[1]: Started System Logging Service.
Oct  1 07:17:31 np0005464214 rsyslogd[1009]: imjournal: No statefile exists, /var/lib/rsyslog/imjournal.state will be created (ignore if this is first run): No such file or directory [v8.2506.0-2.el9 try https://www.rsyslog.com/e/2040 ]
Oct  1 07:17:31 np0005464214 systemd[1]: Reached target Multi-User System.
Oct  1 07:17:31 np0005464214 systemd[1]: Starting Record Runlevel Change in UTMP...
Oct  1 07:17:32 np0005464214 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Oct  1 07:17:32 np0005464214 systemd[1]: Finished Record Runlevel Change in UTMP.
Oct  1 07:17:32 np0005464214 rsyslogd[1009]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct  1 07:17:32 np0005464214 cloud-init[1022]: Cloud-init v. 24.4-7.el9 running 'modules:config' at Wed, 01 Oct 2025 11:17:32 +0000. Up 12.90 seconds.
Oct  1 07:17:32 np0005464214 systemd[1]: Finished Cloud-init: Config Stage.
Oct  1 07:17:32 np0005464214 systemd[1]: Starting Cloud-init: Final Stage...
Oct  1 07:17:32 np0005464214 cloud-init[1026]: Cloud-init v. 24.4-7.el9 running 'modules:final' at Wed, 01 Oct 2025 11:17:32 +0000. Up 13.30 seconds.
Oct  1 07:17:32 np0005464214 cloud-init[1028]: #############################################################
Oct  1 07:17:32 np0005464214 cloud-init[1029]: -----BEGIN SSH HOST KEY FINGERPRINTS-----
Oct  1 07:17:32 np0005464214 cloud-init[1031]: 256 SHA256:j6kliYfLomOaGvhSLYQUgFXJbclTNJRXm38fv4fWQvA root@np0005464214.novalocal (ECDSA)
Oct  1 07:17:32 np0005464214 cloud-init[1033]: 256 SHA256:ee4z9OjMklqOKhDc7ejH0A06DGzQVKg0IWRWpgYYxWQ root@np0005464214.novalocal (ED25519)
Oct  1 07:17:32 np0005464214 cloud-init[1036]: 3072 SHA256:pgrCOGtIpezjdKJA13ruYmzWKAoe0CGFYbphp2b4ifw root@np0005464214.novalocal (RSA)
Oct  1 07:17:32 np0005464214 cloud-init[1037]: -----END SSH HOST KEY FINGERPRINTS-----
Oct  1 07:17:32 np0005464214 cloud-init[1038]: #############################################################
Oct  1 07:17:32 np0005464214 cloud-init[1026]: Cloud-init v. 24.4-7.el9 finished at Wed, 01 Oct 2025 11:17:32 +0000. Datasource DataSourceConfigDrive [net,ver=2][source=/dev/sr0].  Up 13.56 seconds
Oct  1 07:17:32 np0005464214 systemd[1]: Finished Cloud-init: Final Stage.
Oct  1 07:17:32 np0005464214 systemd[1]: Reached target Cloud-init target.
Oct  1 07:17:32 np0005464214 systemd[1]: Startup finished in 1.659s (kernel) + 2.978s (initrd) + 9.009s (userspace) = 13.647s.
Oct  1 07:17:35 np0005464214 chronyd[828]: Selected source 54.39.196.172 (2.centos.pool.ntp.org)
Oct  1 07:17:35 np0005464214 chronyd[828]: System clock TAI offset set to 37 seconds
Oct  1 07:17:36 np0005464214 irqbalance[814]: Cannot change IRQ 25 affinity: Operation not permitted
Oct  1 07:17:36 np0005464214 irqbalance[814]: IRQ 25 affinity is now unmanaged
Oct  1 07:17:36 np0005464214 irqbalance[814]: Cannot change IRQ 31 affinity: Operation not permitted
Oct  1 07:17:36 np0005464214 irqbalance[814]: IRQ 31 affinity is now unmanaged
Oct  1 07:17:36 np0005464214 irqbalance[814]: Cannot change IRQ 28 affinity: Operation not permitted
Oct  1 07:17:36 np0005464214 irqbalance[814]: IRQ 28 affinity is now unmanaged
Oct  1 07:17:36 np0005464214 irqbalance[814]: Cannot change IRQ 32 affinity: Operation not permitted
Oct  1 07:17:36 np0005464214 irqbalance[814]: IRQ 32 affinity is now unmanaged
Oct  1 07:17:36 np0005464214 irqbalance[814]: Cannot change IRQ 30 affinity: Operation not permitted
Oct  1 07:17:36 np0005464214 irqbalance[814]: IRQ 30 affinity is now unmanaged
Oct  1 07:17:36 np0005464214 irqbalance[814]: Cannot change IRQ 29 affinity: Operation not permitted
Oct  1 07:17:36 np0005464214 irqbalance[814]: IRQ 29 affinity is now unmanaged
Oct  1 07:17:40 np0005464214 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Oct  1 07:17:57 np0005464214 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Oct  1 07:29:54 np0005464214 systemd[1]: Starting dnf makecache...
Oct  1 07:29:54 np0005464214 dnf[1069]: Failed determining last makecache time.
Oct  1 07:29:55 np0005464214 dnf[1069]: CentOS Stream 9 - BaseOS                         47 kB/s | 6.7 kB     00:00
Oct  1 07:29:55 np0005464214 dnf[1069]: CentOS Stream 9 - BaseOS                        9.8 MB/s | 8.8 MB     00:00
Oct  1 07:29:57 np0005464214 dnf[1069]: CentOS Stream 9 - AppStream                      28 kB/s | 6.8 kB     00:00
Oct  1 07:29:58 np0005464214 dnf[1069]: CentOS Stream 9 - AppStream                      18 MB/s |  25 MB     00:01
Oct  1 07:30:04 np0005464214 dnf[1069]: CentOS Stream 9 - CRB                            26 kB/s | 6.6 kB     00:00
Oct  1 07:30:05 np0005464214 dnf[1069]: CentOS Stream 9 - CRB                           7.8 MB/s | 7.1 MB     00:00
Oct  1 07:30:07 np0005464214 dnf[1069]: CentOS Stream 9 - Extras packages                33 kB/s | 8.0 kB     00:00
Oct  1 07:30:08 np0005464214 dnf[1069]: Metadata cache created.
Oct  1 07:30:08 np0005464214 systemd[1]: dnf-makecache.service: Deactivated successfully.
Oct  1 07:30:08 np0005464214 systemd[1]: Finished dnf makecache.
Oct  1 07:30:08 np0005464214 systemd[1]: dnf-makecache.service: Consumed 10.379s CPU time.
Oct  1 07:32:54 np0005464214 systemd[1]: Starting Cleanup of Temporary Directories...
Oct  1 07:32:54 np0005464214 systemd[1]: systemd-tmpfiles-clean.service: Deactivated successfully.
Oct  1 07:32:54 np0005464214 systemd[1]: Finished Cleanup of Temporary Directories.
Oct  1 07:32:54 np0005464214 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dclean.service.mount: Deactivated successfully.
Oct  1 08:34:11 np0005464214 systemd[1]: Created slice User Slice of UID 1000.
Oct  1 08:34:11 np0005464214 systemd[1]: Starting User Runtime Directory /run/user/1000...
Oct  1 08:34:11 np0005464214 systemd-logind[818]: New session 1 of user zuul.
Oct  1 08:34:11 np0005464214 systemd[1]: Finished User Runtime Directory /run/user/1000.
Oct  1 08:34:11 np0005464214 systemd[1]: Starting User Manager for UID 1000...
Oct  1 08:34:11 np0005464214 systemd[1423]: Queued start job for default target Main User Target.
Oct  1 08:34:11 np0005464214 systemd[1423]: Created slice User Application Slice.
Oct  1 08:34:11 np0005464214 systemd[1423]: Started Mark boot as successful after the user session has run 2 minutes.
Oct  1 08:34:11 np0005464214 systemd[1423]: Started Daily Cleanup of User's Temporary Directories.
Oct  1 08:34:11 np0005464214 systemd[1423]: Reached target Paths.
Oct  1 08:34:11 np0005464214 systemd[1423]: Reached target Timers.
Oct  1 08:34:11 np0005464214 systemd[1423]: Starting D-Bus User Message Bus Socket...
Oct  1 08:34:11 np0005464214 systemd[1423]: Starting Create User's Volatile Files and Directories...
Oct  1 08:34:11 np0005464214 systemd[1423]: Listening on D-Bus User Message Bus Socket.
Oct  1 08:34:11 np0005464214 systemd[1423]: Finished Create User's Volatile Files and Directories.
Oct  1 08:34:11 np0005464214 systemd[1423]: Reached target Sockets.
Oct  1 08:34:11 np0005464214 systemd[1423]: Reached target Basic System.
Oct  1 08:34:11 np0005464214 systemd[1423]: Reached target Main User Target.
Oct  1 08:34:11 np0005464214 systemd[1423]: Startup finished in 153ms.
Oct  1 08:34:11 np0005464214 systemd[1]: Started User Manager for UID 1000.
Oct  1 08:34:11 np0005464214 systemd[1]: Started Session 1 of User zuul.
Oct  1 08:34:11 np0005464214 python3[1507]: ansible-setup Invoked with gather_subset=['!all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  1 08:34:14 np0005464214 python3[1535]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  1 08:34:20 np0005464214 python3[1593]: ansible-setup Invoked with gather_subset=['network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  1 08:34:21 np0005464214 python3[1635]: ansible-zuul_console Invoked with path=/tmp/console-{log_uuid}.log port=19885 state=present
Oct  1 08:34:23 np0005464214 python3[1661]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDqYqivu2ogZ7lipmmiT9Ls2qUo8D5UWoisJfsIQ69aHSPwYxmf1rseaq0xAckbGKZk62qOZ4U8xHyFKyWuNOMzb//sbuv7hGSYQOzuXcOCy1OQ0NleH2CcjO9Z3DxZ4gOPVl2X951qNqZWS12QFAX6pf1kf9ZdDsap1Ec1wQTxL1cXcyLYTo7WrVDZA5hDsgezm0Mq9/H7HOG2q4IQ7/o7X5OyfGXJYhKOCc5zrID4IF0+y8WzkvbmCJ7JqtZP/nwS33jXuNdpg1Hsm3sRLc/ucxJ0eZzs5eJ00f5Jnbj9CqoDdCp6+9xN2j9nvjZkYjUextY6FF3N9r2V5xl2kXugl9dz4DA4vBoUi8BeWnh6thKtbOwB3KAUYpZnH6c/nFRjf1qmbrEwS7V2LiF51l9pfR4Z1HtnMG4xwQHvBNwSyL2YLCznEG5sfEmoDs0mMfcSuiSXOAiA8P2WeuiMmCT7jUkKO1UpmtqEJP9i4w1vEWqP1w+EGCdQtU7bS/bF0Rk= zuul-build-sshkey manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  1 08:34:23 np0005464214 python3[1685]: ansible-file Invoked with state=directory path=/home/zuul/.ssh mode=448 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 08:34:24 np0005464214 python3[1784]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct  1 08:34:24 np0005464214 python3[1855]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759322064.061439-207-28835398841381/source dest=/home/zuul/.ssh/id_rsa mode=384 force=False _original_basename=c96eb7ceeb9c4787898270928c891f09_id_rsa follow=False checksum=89d74924afce1297a5600cbdc4812d29d3f07317 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 08:34:25 np0005464214 python3[1978]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa.pub follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct  1 08:34:25 np0005464214 python3[2049]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759322065.038209-240-228341817513442/source dest=/home/zuul/.ssh/id_rsa.pub mode=420 force=False _original_basename=c96eb7ceeb9c4787898270928c891f09_id_rsa.pub follow=False checksum=75212e430220dfeb25fafa8dac3c0198acf09cda backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 08:34:27 np0005464214 python3[2098]: ansible-ping Invoked with data=pong
Oct  1 08:34:27 np0005464214 python3[2122]: ansible-setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  1 08:34:29 np0005464214 python3[2180]: ansible-zuul_debug_info Invoked with ipv4_route_required=False ipv6_route_required=False image_manifest_files=['/etc/dib-builddate.txt', '/etc/image-hostname.txt'] image_manifest=None traceroute_host=None
Oct  1 08:34:31 np0005464214 python3[2212]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 08:34:31 np0005464214 python3[2236]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 08:34:31 np0005464214 python3[2260]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 08:34:31 np0005464214 python3[2284]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 08:34:32 np0005464214 python3[2308]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 08:34:32 np0005464214 python3[2332]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 08:34:34 np0005464214 python3[2358]: ansible-file Invoked with path=/etc/ci state=directory owner=root group=root mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 08:34:34 np0005464214 python3[2438]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/mirror_info.sh follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct  1 08:34:35 np0005464214 python3[2511]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/mirror_info.sh owner=root group=root mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1759322074.3750746-21-55360735765095/source follow=False _original_basename=mirror_info.sh.j2 checksum=92d92a03afdddee82732741071f662c729080c35 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 08:34:35 np0005464214 python3[2559]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA4Z/c9osaGGtU6X8fgELwfj/yayRurfcKA0HMFfdpPxev2dbwljysMuzoVp4OZmW1gvGtyYPSNRvnzgsaabPNKNo2ym5NToCP6UM+KSe93aln4BcM/24mXChYAbXJQ5Bqq/pIzsGs/pKetQN+vwvMxLOwTvpcsCJBXaa981RKML6xj9l/UZ7IIq1HSEKMvPLxZMWdu0Ut8DkCd5F4nOw9Wgml2uYpDCj5LLCrQQ9ChdOMz8hz6SighhNlRpPkvPaet3OXxr/ytFMu7j7vv06CaEnuMMiY2aTWN1Imin9eHAylIqFHta/3gFfQSWt9jXM7owkBLKL7ATzhaAn+fjNupw== arxcruz@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  1 08:34:36 np0005464214 python3[2583]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDS4Fn6k4deCnIlOtLWqZJyksbepjQt04j8Ed8CGx9EKkj0fKiAxiI4TadXQYPuNHMixZy4Nevjb6aDhL5Z906TfvNHKUrjrG7G26a0k8vdc61NEQ7FmcGMWRLwwc6ReDO7lFpzYKBMk4YqfWgBuGU/K6WLKiVW2cVvwIuGIaYrE1OiiX0iVUUk7KApXlDJMXn7qjSYynfO4mF629NIp8FJal38+Kv+HA+0QkE5Y2xXnzD4Lar5+keymiCHRntPppXHeLIRzbt0gxC7v3L72hpQ3BTBEzwHpeS8KY+SX1y5lRMN45thCHfJqGmARJREDjBvWG8JXOPmVIKQtZmVcD5b mandreou@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  1 08:34:36 np0005464214 python3[2607]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC9MiLfy30deHA7xPOAlew5qUq3UP2gmRMYJi8PtkjFB20/DKeWwWNnkZPqP9AayruRoo51SIiVg870gbZE2jYl+Ncx/FYDe56JeC3ySZsXoAVkC9bP7gkOGqOmJjirvAgPMI7bogVz8i+66Q4Ar7OKTp3762G4IuWPPEg4ce4Y7lx9qWocZapHYq4cYKMxrOZ7SEbFSATBbe2bPZAPKTw8do/Eny+Hq/LkHFhIeyra6cqTFQYShr+zPln0Cr+ro/pDX3bB+1ubFgTpjpkkkQsLhDfR6cCdCWM2lgnS3BTtYj5Ct9/JRPR5YOphqZz+uB+OEu2IL68hmU9vNTth1KeX rlandy@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  1 08:34:36 np0005464214 python3[2631]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFCbgz8gdERiJlk2IKOtkjQxEXejrio6ZYMJAVJYpOIp raukadah@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  1 08:34:37 np0005464214 python3[2655]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBqb3Q/9uDf4LmihQ7xeJ9gA/STIQUFPSfyyV0m8AoQi bshewale@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  1 08:34:37 np0005464214 python3[2679]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC0I8QqQx0Az2ysJt2JuffucLijhBqnsXKEIx5GyHwxVULROa8VtNFXUDH6ZKZavhiMcmfHB2+TBTda+lDP4FldYj06dGmzCY+IYGa+uDRdxHNGYjvCfLFcmLlzRK6fNbTcui+KlUFUdKe0fb9CRoGKyhlJD5GRkM1Dv+Yb6Bj+RNnmm1fVGYxzmrD2utvffYEb0SZGWxq2R9gefx1q/3wCGjeqvufEV+AskPhVGc5T7t9eyZ4qmslkLh1/nMuaIBFcr9AUACRajsvk6mXrAN1g3HlBf2gQlhi1UEyfbqIQvzzFtsbLDlSum/KmKjy818GzvWjERfQ0VkGzCd9bSLVL dviroel@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  1 08:34:37 np0005464214 python3[2703]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDLOQd4ZLtkZXQGY6UwAr/06ppWQK4fDO3HaqxPk98csyOCBXsliSKK39Bso828+5srIXiW7aI6aC9P5mwi4mUZlGPfJlQbfrcGvY+b/SocuvaGK+1RrHLoJCT52LBhwgrzlXio2jeksZeein8iaTrhsPrOAs7KggIL/rB9hEiB3NaOPWhhoCP4vlW6MEMExGcqB/1FVxXFBPnLkEyW0Lk7ycVflZl2ocRxbfjZi0+tI1Wlinp8PvSQSc/WVrAcDgKjc/mB4ODPOyYy3G8FHgfMsrXSDEyjBKgLKMsdCrAUcqJQWjkqXleXSYOV4q3pzL+9umK+q/e3P/bIoSFQzmJKTU1eDfuvPXmow9F5H54fii/Da7ezlMJ+wPGHJrRAkmzvMbALy7xwswLhZMkOGNtRcPqaKYRmIBKpw3o6bCTtcNUHOtOQnzwY8JzrM2eBWJBXAANYw+9/ho80JIiwhg29CFNpVBuHbql2YxJQNrnl90guN65rYNpDxdIluweyUf8= anbanerj@kaermorhen manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  1 08:34:37 np0005464214 python3[2727]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC3VwV8Im9kRm49lt3tM36hj4Zv27FxGo4C1Q/0jqhzFmHY7RHbmeRr8ObhwWoHjXSozKWg8FL5ER0z3hTwL0W6lez3sL7hUaCmSuZmG5Hnl3x4vTSxDI9JZ/Y65rtYiiWQo2fC5xJhU/4+0e5e/pseCm8cKRSu+SaxhO+sd6FDojA2x1BzOzKiQRDy/1zWGp/cZkxcEuB1wHI5LMzN03c67vmbu+fhZRAUO4dQkvcnj2LrhQtpa+ytvnSjr8icMDosf1OsbSffwZFyHB/hfWGAfe0eIeSA2XPraxiPknXxiPKx2MJsaUTYbsZcm3EjFdHBBMumw5rBI74zLrMRvCO9GwBEmGT4rFng1nP+yw5DB8sn2zqpOsPg1LYRwCPOUveC13P6pgsZZPh812e8v5EKnETct+5XI3dVpdw6CnNiLwAyVAF15DJvBGT/u1k0Myg/bQn+Gv9k2MSj6LvQmf6WbZu2Wgjm30z3FyCneBqTL7mLF19YXzeC0ufHz5pnO1E= dasm@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  1 08:34:38 np0005464214 python3[2751]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHUnwjB20UKmsSed9X73eGNV5AOEFccQ3NYrRW776pEk cjeanner manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  1 08:34:38 np0005464214 python3[2775]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDercCMGn8rW1C4P67tHgtflPdTeXlpyUJYH+6XDd2lR jgilaber@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  1 08:34:38 np0005464214 python3[2801]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAMI6kkg9Wg0sG7jIJmyZemEBwUn1yzNpQQd3gnulOmZ adrianfuscoarnejo@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  1 08:34:38 np0005464214 python3[2825]: ansible-authorized_key Invoked with user=zuul state=present key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPijwpQu/3jhhhBZInXNOLEH57DrknPc3PLbsRvYyJIFzwYjX+WD4a7+nGnMYS42MuZk6TJcVqgnqofVx4isoD4= ramishra@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  1 08:34:39 np0005464214 python3[2849]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGpU/BepK3qX0NRf5Np+dOBDqzQEefhNrw2DCZaH3uWW rebtoor@monolith manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  1 08:34:39 np0005464214 python3[2873]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDK0iKdi8jQTpQrDdLVH/AAgLVYyTXF7AQ1gjc/5uT3t ykarel@yatinkarel manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  1 08:34:39 np0005464214 python3[2897]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIF/V/cLotA6LZeO32VL45Hd78skuA2lJA425Sm2LlQeZ fmount@horcrux manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  1 08:34:40 np0005464214 python3[2921]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDa7QCjuDMVmRPo1rREbGwzYeBCYVN+Ou/3WKXZEC6Sr manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  1 08:34:40 np0005464214 python3[2945]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQCfNtF7NvKl915TGsGGoseUb06Hj8L/S4toWf0hExeY+F00woL6NvBlJD0nDct+P5a22I4EhvoQCRQ8reaPCm1lybR3uiRIJsj+8zkVvLwby9LXzfZorlNG9ofjd00FEmB09uW/YvTl6Q9XwwwX6tInzIOv3TMqTHHGOL74ibbj8J/FJR0cFEyj0z4WQRvtkh32xAHl83gbuINryMt0sqRI+clj2381NKL55DRLQrVw0gsfqqxiHAnXg21qWmc4J+b9e9kiuAFQjcjwTVkwJCcg3xbPwC/qokYRby/Y5S40UUd7/jEARGXT7RZgpzTuDd1oZiCVrnrqJNPaMNdVv5MLeFdf1B7iIe5aa/fGouX7AO4SdKhZUdnJmCFAGvjC6S3JMZ2wAcUl+OHnssfmdj7XL50cLo27vjuzMtLAgSqi6N99m92WCF2s8J9aVzszX7Xz9OKZCeGsiVJp3/NdABKzSEAyM9xBD/5Vho894Sav+otpySHe3p6RUTgbB5Zu8VyZRZ/UtB3ueXxyo764yrc6qWIDqrehm84Xm9g+/jpIBzGPl07NUNJpdt/6Sgf9RIKXw/7XypO5yZfUcuFNGTxLfqjTNrtgLZNcjfav6sSdVXVcMPL//XNuRdKmVFaO76eV/oGMQGr1fGcCD+N+CpI7+Q+fCNB6VFWG4nZFuI/Iuw== averdagu@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  1 08:34:40 np0005464214 python3[2969]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDq8l27xI+QlQVdS4djp9ogSoyrNE2+Ox6vKPdhSNL1J3PE5w+WCSvMz9A5gnNuH810zwbekEApbxTze/gLQJwBHA52CChfURpXrFaxY7ePXRElwKAL3mJfzBWY/c5jnNL9TCVmFJTGZkFZP3Nh+BMgZvL6xBkt3WKm6Uq18qzd9XeKcZusrA+O+uLv1fVeQnadY9RIqOCyeFYCzLWrUfTyE8x/XG0hAWIM7qpnF2cALQS2h9n4hW5ybiUN790H08wf9hFwEf5nxY9Z9dVkPFQiTSGKNBzmnCXU9skxS/xhpFjJ5duGSZdtAHe9O+nGZm9c67hxgtf8e5PDuqAdXEv2cf6e3VBAt+Bz8EKI3yosTj0oZHfwr42Yzb1l/SKy14Rggsrc9KAQlrGXan6+u2jcQqqx7l+SWmnpFiWTV9u5cWj2IgOhApOitmRBPYqk9rE2usfO0hLn/Pj/R/Nau4803e1/EikdLE7Ps95s9mX5jRDjAoUa2JwFF5RsVFyL910= ashigupt@ashigupt.remote.csb manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  1 08:34:40 np0005464214 python3[2993]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOKLl0NYKwoZ/JY5KeZU8VwRAggeOxqQJeoqp3dsAaY9 manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  1 08:34:41 np0005464214 python3[3017]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIASASQOH2BcOyLKuuDOdWZlPi2orcjcA8q4400T73DLH evallesp@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  1 08:34:41 np0005464214 python3[3041]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILeBWlamUph+jRKV2qrx1PGU7vWuGIt5+z9k96I8WehW amsinha@amsinha-mac manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  1 08:34:41 np0005464214 python3[3065]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIANvVgvJBlK3gb1yz5uef/JqIGq4HLEmY2dYA8e37swb morenod@redhat-laptop manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  1 08:34:41 np0005464214 python3[3089]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQDZdI7t1cxYx65heVI24HTV4F7oQLW1zyfxHreL2TIJKxjyrUUKIFEUmTutcBlJRLNT2Eoix6x1sOw9YrchloCLcn//SGfTElr9mSc5jbjb7QXEU+zJMhtxyEJ1Po3CUGnj7ckiIXw7wcawZtrEOAQ9pH3ExYCJcEMiyNjRQZCxT3tPK+S4B95EWh5Fsrz9CkwpjNRPPH7LigCeQTM3Wc7r97utAslBUUvYceDSLA7rMgkitJE38b7rZBeYzsGQ8YYUBjTCtehqQXxCRjizbHWaaZkBU+N3zkKB6n/iCNGIO690NK7A/qb6msTijiz1PeuM8ThOsi9qXnbX5v0PoTpcFSojV7NHAQ71f0XXuS43FhZctT+Dcx44dT8Fb5vJu2cJGrk+qF8ZgJYNpRS7gPg0EG2EqjK7JMf9ULdjSu0r+KlqIAyLvtzT4eOnQipoKlb/WG5D/0ohKv7OMQ352ggfkBFIQsRXyyTCT98Ft9juqPuahi3CAQmP4H9dyE+7+Kz437PEtsxLmfm6naNmWi7Ee1DqWPwS8rEajsm4sNM4wW9gdBboJQtc0uZw0DfLj1I9r3Mc8Ol0jYtz0yNQDSzVLrGCaJlC311trU70tZ+ZkAVV6Mn8lOhSbj1cK0lvSr6ZK4dgqGl3I1eTZJJhbLNdg7UOVaiRx9543+C/p/As7w== brjackma@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  1 08:34:42 np0005464214 python3[3113]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKwedoZ0TWPJX/z/4TAbO/kKcDZOQVgRH0hAqrL5UCI1 vcastell@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  1 08:34:42 np0005464214 python3[3137]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEmv8sE8GCk6ZTPIqF0FQrttBdL3mq7rCm/IJy0xDFh7 michburk@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  1 08:34:42 np0005464214 python3[3161]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICy6GpGEtwevXEEn4mmLR5lmSLe23dGgAvzkB9DMNbkf rsafrono@rsafrono manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  1 08:34:45 np0005464214 python3[3187]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Oct  1 08:34:45 np0005464214 systemd[1]: Starting Time & Date Service...
Oct  1 08:34:45 np0005464214 systemd[1]: Started Time & Date Service.
Oct  1 08:34:45 np0005464214 systemd-timedated[3189]: Changed time zone to 'UTC' (UTC).
Oct  1 08:34:46 np0005464214 irqbalance[814]: Cannot change IRQ 26 affinity: Operation not permitted
Oct  1 08:34:46 np0005464214 irqbalance[814]: IRQ 26 affinity is now unmanaged
Oct  1 08:34:46 np0005464214 python3[3218]: ansible-file Invoked with path=/etc/nodepool state=directory mode=511 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 08:34:46 np0005464214 python3[3294]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct  1 08:34:47 np0005464214 python3[3365]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes src=/home/zuul/.ansible/tmp/ansible-tmp-1759322086.5854244-153-130122549945456/source _original_basename=tmpm92q9y6g follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 08:34:47 np0005464214 python3[3465]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct  1 08:34:47 np0005464214 python3[3536]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes_private src=/home/zuul/.ansible/tmp/ansible-tmp-1759322087.4081943-183-237794728876795/source _original_basename=tmpq4ydf85d follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 08:34:48 np0005464214 python3[3638]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/node_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct  1 08:34:49 np0005464214 python3[3711]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/node_private src=/home/zuul/.ansible/tmp/ansible-tmp-1759322088.425242-231-243088015263010/source _original_basename=tmp08i8081e follow=False checksum=2bc1eb5288b1fcb7738d7061543c90ea94f5f91e backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 08:34:49 np0005464214 python3[3761]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa /etc/nodepool/id_rsa zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  1 08:34:49 np0005464214 python3[3787]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa.pub /etc/nodepool/id_rsa.pub zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  1 08:34:50 np0005464214 python3[3867]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/zuul-sudo-grep follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct  1 08:34:50 np0005464214 python3[3940]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/zuul-sudo-grep mode=288 src=/home/zuul/.ansible/tmp/ansible-tmp-1759322090.2296793-273-122461588494005/source _original_basename=tmp7x9lrbly follow=False checksum=bdca1a77493d00fb51567671791f4aa30f66c2f0 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 08:34:51 np0005464214 python3[3991]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/visudo -c zuul_log_id=fa163ec2-ffbe-9ea9-e8da-00000000001d-1-compute0 zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  1 08:34:52 np0005464214 python3[4019]: ansible-ansible.legacy.command Invoked with executable=/bin/bash _raw_params=env#012 _uses_shell=True zuul_log_id=fa163ec2-ffbe-9ea9-e8da-00000000001e-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None creates=None removes=None stdin=None
Oct  1 08:34:53 np0005464214 python3[4048]: ansible-file Invoked with path=/home/zuul/workspace state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 08:35:12 np0005464214 python3[4076]: ansible-ansible.builtin.file Invoked with path=/etc/ci/env state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 08:35:16 np0005464214 systemd[1]: systemd-timedated.service: Deactivated successfully.
Oct  1 08:35:43 np0005464214 kernel: pci 0000:00:07.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Oct  1 08:35:43 np0005464214 kernel: pci 0000:00:07.0: BAR 0 [io  0x0000-0x003f]
Oct  1 08:35:43 np0005464214 kernel: pci 0000:00:07.0: BAR 1 [mem 0x00000000-0x00000fff]
Oct  1 08:35:43 np0005464214 kernel: pci 0000:00:07.0: BAR 4 [mem 0x00000000-0x00003fff 64bit pref]
Oct  1 08:35:43 np0005464214 kernel: pci 0000:00:07.0: ROM [mem 0x00000000-0x0007ffff pref]
Oct  1 08:35:43 np0005464214 kernel: pci 0000:00:07.0: ROM [mem 0xc0000000-0xc007ffff pref]: assigned
Oct  1 08:35:43 np0005464214 kernel: pci 0000:00:07.0: BAR 4 [mem 0x240000000-0x240003fff 64bit pref]: assigned
Oct  1 08:35:43 np0005464214 kernel: pci 0000:00:07.0: BAR 1 [mem 0xc0080000-0xc0080fff]: assigned
Oct  1 08:35:43 np0005464214 kernel: pci 0000:00:07.0: BAR 0 [io  0x1000-0x103f]: assigned
Oct  1 08:35:43 np0005464214 kernel: virtio-pci 0000:00:07.0: enabling device (0000 -> 0003)
Oct  1 08:35:43 np0005464214 NetworkManager[860]: <info>  [1759322143.7464] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Oct  1 08:35:43 np0005464214 systemd-udevd[4080]: Network interface NamePolicy= disabled on kernel command line.
Oct  1 08:35:43 np0005464214 NetworkManager[860]: <info>  [1759322143.7620] device (eth1): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct  1 08:35:43 np0005464214 NetworkManager[860]: <info>  [1759322143.7649] settings: (eth1): created default wired connection 'Wired connection 1'
Oct  1 08:35:43 np0005464214 NetworkManager[860]: <info>  [1759322143.7652] device (eth1): carrier: link connected
Oct  1 08:35:43 np0005464214 NetworkManager[860]: <info>  [1759322143.7655] device (eth1): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Oct  1 08:35:43 np0005464214 NetworkManager[860]: <info>  [1759322143.7661] policy: auto-activating connection 'Wired connection 1' (5676b0c3-8d77-3352-b8fd-5d58f5ca7d01)
Oct  1 08:35:43 np0005464214 NetworkManager[860]: <info>  [1759322143.7664] device (eth1): Activation: starting connection 'Wired connection 1' (5676b0c3-8d77-3352-b8fd-5d58f5ca7d01)
Oct  1 08:35:43 np0005464214 NetworkManager[860]: <info>  [1759322143.7665] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct  1 08:35:43 np0005464214 NetworkManager[860]: <info>  [1759322143.7668] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Oct  1 08:35:43 np0005464214 NetworkManager[860]: <info>  [1759322143.7673] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct  1 08:35:43 np0005464214 NetworkManager[860]: <info>  [1759322143.7677] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Oct  1 08:35:44 np0005464214 python3[4108]: ansible-ansible.legacy.command Invoked with _raw_params=ip -j link zuul_log_id=fa163ec2-ffbe-0426-e037-0000000000fc-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  1 08:35:54 np0005464214 python3[4188]: ansible-ansible.legacy.stat Invoked with path=/etc/NetworkManager/system-connections/ci-private-network.nmconnection follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct  1 08:35:55 np0005464214 python3[4261]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759322154.4906135-102-68120778137278/source dest=/etc/NetworkManager/system-connections/ci-private-network.nmconnection mode=0600 owner=root group=root follow=False _original_basename=bootstrap-ci-network-nm-connection.nmconnection.j2 checksum=8b82f67ed0e41d8d56e27dffdca8d2cb2902b0bf backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 08:35:56 np0005464214 python3[4315]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct  1 08:35:56 np0005464214 systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Oct  1 08:35:56 np0005464214 systemd[1]: Stopped Network Manager Wait Online.
Oct  1 08:35:56 np0005464214 systemd[1]: Stopping Network Manager Wait Online...
Oct  1 08:35:56 np0005464214 systemd[1]: Stopping Network Manager...
Oct  1 08:35:56 np0005464214 NetworkManager[860]: <info>  [1759322156.0498] caught SIGTERM, shutting down normally.
Oct  1 08:35:56 np0005464214 NetworkManager[860]: <info>  [1759322156.0505] dhcp4 (eth0): canceled DHCP transaction
Oct  1 08:35:56 np0005464214 NetworkManager[860]: <info>  [1759322156.0506] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Oct  1 08:35:56 np0005464214 NetworkManager[860]: <info>  [1759322156.0506] dhcp4 (eth0): state changed no lease
Oct  1 08:35:56 np0005464214 NetworkManager[860]: <info>  [1759322156.0508] manager: NetworkManager state is now CONNECTING
Oct  1 08:35:56 np0005464214 NetworkManager[860]: <info>  [1759322156.0617] dhcp4 (eth1): canceled DHCP transaction
Oct  1 08:35:56 np0005464214 NetworkManager[860]: <info>  [1759322156.0618] dhcp4 (eth1): state changed no lease
Oct  1 08:35:56 np0005464214 systemd[1]: Starting Network Manager Script Dispatcher Service...
Oct  1 08:35:56 np0005464214 NetworkManager[860]: <info>  [1759322156.0657] exiting (success)
Oct  1 08:35:56 np0005464214 systemd[1]: Started Network Manager Script Dispatcher Service.
Oct  1 08:35:56 np0005464214 systemd[1]: NetworkManager.service: Deactivated successfully.
Oct  1 08:35:56 np0005464214 systemd[1]: Stopped Network Manager.
Oct  1 08:35:56 np0005464214 systemd[1]: NetworkManager.service: Consumed 26.865s CPU time, 9.9M memory peak.
Oct  1 08:35:56 np0005464214 systemd[1]: Starting Network Manager...
Oct  1 08:35:56 np0005464214 NetworkManager[4330]: <info>  [1759322156.1216] NetworkManager (version 1.54.1-1.el9) is starting... (after a restart, boot:59648e32-2da2-4a47-989c-dbddfc6922f6)
Oct  1 08:35:56 np0005464214 NetworkManager[4330]: <info>  [1759322156.1219] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Oct  1 08:35:56 np0005464214 NetworkManager[4330]: <info>  [1759322156.1264] manager[0x5602c4630070]: monitoring kernel firmware directory '/lib/firmware'.
Oct  1 08:35:56 np0005464214 systemd[1]: Starting Hostname Service...
Oct  1 08:35:56 np0005464214 systemd[1]: Started Hostname Service.
Oct  1 08:35:56 np0005464214 NetworkManager[4330]: <info>  [1759322156.1922] hostname: hostname: using hostnamed
Oct  1 08:35:56 np0005464214 NetworkManager[4330]: <info>  [1759322156.1923] hostname: static hostname changed from (none) to "np0005464214.novalocal"
Oct  1 08:35:56 np0005464214 NetworkManager[4330]: <info>  [1759322156.1927] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Oct  1 08:35:56 np0005464214 NetworkManager[4330]: <info>  [1759322156.1930] manager[0x5602c4630070]: rfkill: Wi-Fi hardware radio set enabled
Oct  1 08:35:56 np0005464214 NetworkManager[4330]: <info>  [1759322156.1931] manager[0x5602c4630070]: rfkill: WWAN hardware radio set enabled
Oct  1 08:35:56 np0005464214 NetworkManager[4330]: <info>  [1759322156.1952] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-team.so)
Oct  1 08:35:56 np0005464214 NetworkManager[4330]: <info>  [1759322156.1952] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Oct  1 08:35:56 np0005464214 NetworkManager[4330]: <info>  [1759322156.1953] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Oct  1 08:35:56 np0005464214 NetworkManager[4330]: <info>  [1759322156.1953] manager: Networking is enabled by state file
Oct  1 08:35:56 np0005464214 NetworkManager[4330]: <info>  [1759322156.1955] settings: Loaded settings plugin: keyfile (internal)
Oct  1 08:35:56 np0005464214 NetworkManager[4330]: <info>  [1759322156.1958] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Oct  1 08:35:56 np0005464214 NetworkManager[4330]: <info>  [1759322156.1979] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Oct  1 08:35:56 np0005464214 NetworkManager[4330]: <info>  [1759322156.1987] dhcp: init: Using DHCP client 'internal'
Oct  1 08:35:56 np0005464214 NetworkManager[4330]: <info>  [1759322156.1989] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Oct  1 08:35:56 np0005464214 NetworkManager[4330]: <info>  [1759322156.1993] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  1 08:35:56 np0005464214 NetworkManager[4330]: <info>  [1759322156.1998] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Oct  1 08:35:56 np0005464214 NetworkManager[4330]: <info>  [1759322156.2004] device (lo): Activation: starting connection 'lo' (71a0a298-c086-43ce-b223-7fae93260bdf)
Oct  1 08:35:56 np0005464214 NetworkManager[4330]: <info>  [1759322156.2010] device (eth0): carrier: link connected
Oct  1 08:35:56 np0005464214 NetworkManager[4330]: <info>  [1759322156.2013] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Oct  1 08:35:56 np0005464214 NetworkManager[4330]: <info>  [1759322156.2016] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Oct  1 08:35:56 np0005464214 NetworkManager[4330]: <info>  [1759322156.2017] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Oct  1 08:35:56 np0005464214 NetworkManager[4330]: <info>  [1759322156.2023] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Oct  1 08:35:56 np0005464214 NetworkManager[4330]: <info>  [1759322156.2028] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Oct  1 08:35:56 np0005464214 NetworkManager[4330]: <info>  [1759322156.2032] device (eth1): carrier: link connected
Oct  1 08:35:56 np0005464214 NetworkManager[4330]: <info>  [1759322156.2038] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Oct  1 08:35:56 np0005464214 NetworkManager[4330]: <info>  [1759322156.2043] manager: (eth1): assume: will attempt to assume matching connection 'Wired connection 1' (5676b0c3-8d77-3352-b8fd-5d58f5ca7d01) (indicated)
Oct  1 08:35:56 np0005464214 NetworkManager[4330]: <info>  [1759322156.2043] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Oct  1 08:35:56 np0005464214 NetworkManager[4330]: <info>  [1759322156.2048] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Oct  1 08:35:56 np0005464214 NetworkManager[4330]: <info>  [1759322156.2053] device (eth1): Activation: starting connection 'Wired connection 1' (5676b0c3-8d77-3352-b8fd-5d58f5ca7d01)
Oct  1 08:35:56 np0005464214 systemd[1]: Started Network Manager.
Oct  1 08:35:56 np0005464214 NetworkManager[4330]: <info>  [1759322156.2059] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Oct  1 08:35:56 np0005464214 NetworkManager[4330]: <info>  [1759322156.2062] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Oct  1 08:35:56 np0005464214 NetworkManager[4330]: <info>  [1759322156.2064] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Oct  1 08:35:56 np0005464214 NetworkManager[4330]: <info>  [1759322156.2065] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Oct  1 08:35:56 np0005464214 NetworkManager[4330]: <info>  [1759322156.2067] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Oct  1 08:35:56 np0005464214 NetworkManager[4330]: <info>  [1759322156.2069] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Oct  1 08:35:56 np0005464214 NetworkManager[4330]: <info>  [1759322156.2071] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Oct  1 08:35:56 np0005464214 NetworkManager[4330]: <info>  [1759322156.2073] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Oct  1 08:35:56 np0005464214 NetworkManager[4330]: <info>  [1759322156.2076] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Oct  1 08:35:56 np0005464214 NetworkManager[4330]: <info>  [1759322156.2082] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Oct  1 08:35:56 np0005464214 NetworkManager[4330]: <info>  [1759322156.2084] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Oct  1 08:35:56 np0005464214 NetworkManager[4330]: <info>  [1759322156.2090] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Oct  1 08:35:56 np0005464214 NetworkManager[4330]: <info>  [1759322156.2092] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Oct  1 08:35:56 np0005464214 NetworkManager[4330]: <info>  [1759322156.2107] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Oct  1 08:35:56 np0005464214 NetworkManager[4330]: <info>  [1759322156.2108] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Oct  1 08:35:56 np0005464214 NetworkManager[4330]: <info>  [1759322156.2112] device (lo): Activation: successful, device activated.
Oct  1 08:35:56 np0005464214 NetworkManager[4330]: <info>  [1759322156.2128] dhcp4 (eth0): state changed new lease, address=38.102.83.245
Oct  1 08:35:56 np0005464214 NetworkManager[4330]: <info>  [1759322156.2132] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Oct  1 08:35:56 np0005464214 NetworkManager[4330]: <info>  [1759322156.2202] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Oct  1 08:35:56 np0005464214 systemd[1]: Starting Network Manager Wait Online...
Oct  1 08:35:56 np0005464214 NetworkManager[4330]: <info>  [1759322156.2214] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Oct  1 08:35:56 np0005464214 NetworkManager[4330]: <info>  [1759322156.2215] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Oct  1 08:35:56 np0005464214 NetworkManager[4330]: <info>  [1759322156.2219] manager: NetworkManager state is now CONNECTED_SITE
Oct  1 08:35:56 np0005464214 NetworkManager[4330]: <info>  [1759322156.2221] device (eth0): Activation: successful, device activated.
Oct  1 08:35:56 np0005464214 NetworkManager[4330]: <info>  [1759322156.2225] manager: NetworkManager state is now CONNECTED_GLOBAL
Oct  1 08:35:56 np0005464214 python3[4402]: ansible-ansible.legacy.command Invoked with _raw_params=ip route zuul_log_id=fa163ec2-ffbe-0426-e037-0000000000a7-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  1 08:36:06 np0005464214 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Oct  1 08:36:26 np0005464214 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Oct  1 08:36:41 np0005464214 NetworkManager[4330]: <info>  [1759322201.2690] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Oct  1 08:36:41 np0005464214 systemd[1]: Starting Network Manager Script Dispatcher Service...
Oct  1 08:36:41 np0005464214 systemd[1]: Started Network Manager Script Dispatcher Service.
Oct  1 08:36:41 np0005464214 NetworkManager[4330]: <info>  [1759322201.2925] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Oct  1 08:36:41 np0005464214 NetworkManager[4330]: <info>  [1759322201.2927] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Oct  1 08:36:41 np0005464214 NetworkManager[4330]: <info>  [1759322201.2935] device (eth1): Activation: successful, device activated.
Oct  1 08:36:41 np0005464214 NetworkManager[4330]: <info>  [1759322201.2941] manager: startup complete
Oct  1 08:36:41 np0005464214 NetworkManager[4330]: <info>  [1759322201.2943] device (eth1): state change: activated -> failed (reason 'ip-config-unavailable', managed-type: 'full')
Oct  1 08:36:41 np0005464214 NetworkManager[4330]: <warn>  [1759322201.2948] device (eth1): Activation: failed for connection 'Wired connection 1'
Oct  1 08:36:41 np0005464214 NetworkManager[4330]: <info>  [1759322201.2956] device (eth1): state change: failed -> disconnected (reason 'none', managed-type: 'full')
Oct  1 08:36:41 np0005464214 systemd[1]: Finished Network Manager Wait Online.
Oct  1 08:36:41 np0005464214 NetworkManager[4330]: <info>  [1759322201.3057] dhcp4 (eth1): canceled DHCP transaction
Oct  1 08:36:41 np0005464214 NetworkManager[4330]: <info>  [1759322201.3058] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Oct  1 08:36:41 np0005464214 NetworkManager[4330]: <info>  [1759322201.3058] dhcp4 (eth1): state changed no lease
Oct  1 08:36:41 np0005464214 NetworkManager[4330]: <info>  [1759322201.3072] policy: auto-activating connection 'ci-private-network' (55fba695-b9ed-5ba6-ac3c-7f0c4ae7e99c)
Oct  1 08:36:41 np0005464214 NetworkManager[4330]: <info>  [1759322201.3076] device (eth1): Activation: starting connection 'ci-private-network' (55fba695-b9ed-5ba6-ac3c-7f0c4ae7e99c)
Oct  1 08:36:41 np0005464214 NetworkManager[4330]: <info>  [1759322201.3077] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct  1 08:36:41 np0005464214 NetworkManager[4330]: <info>  [1759322201.3079] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Oct  1 08:36:41 np0005464214 NetworkManager[4330]: <info>  [1759322201.3086] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct  1 08:36:41 np0005464214 NetworkManager[4330]: <info>  [1759322201.3093] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct  1 08:36:41 np0005464214 NetworkManager[4330]: <info>  [1759322201.3127] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct  1 08:36:41 np0005464214 NetworkManager[4330]: <info>  [1759322201.3129] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct  1 08:36:41 np0005464214 NetworkManager[4330]: <info>  [1759322201.3134] device (eth1): Activation: successful, device activated.
Oct  1 08:36:51 np0005464214 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Oct  1 08:36:53 np0005464214 python3[4514]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/env/networking-info.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct  1 08:36:54 np0005464214 python3[4587]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/env/networking-info.yml owner=root group=root mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759322213.4361048-267-72837742853929/source _original_basename=tmp3gi1m3_u follow=False checksum=657dff622f384eae175b3b6dde958f4cf16720ee backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 08:36:54 np0005464214 systemd[1423]: Starting Mark boot as successful...
Oct  1 08:36:54 np0005464214 systemd[1423]: Finished Mark boot as successful.
Oct  1 08:36:56 np0005464214 irqbalance[814]: Cannot change IRQ 27 affinity: Operation not permitted
Oct  1 08:36:56 np0005464214 irqbalance[814]: IRQ 27 affinity is now unmanaged
Oct  1 08:37:54 np0005464214 systemd-logind[818]: Session 1 logged out. Waiting for processes to exit.
Oct  1 08:39:54 np0005464214 systemd[1423]: Created slice User Background Tasks Slice.
Oct  1 08:39:54 np0005464214 systemd[1423]: Starting Cleanup of User's Temporary Files and Directories...
Oct  1 08:39:54 np0005464214 systemd[1423]: Finished Cleanup of User's Temporary Files and Directories.
Oct  1 08:40:40 np0005464214 systemd[1]: Starting dnf makecache...
Oct  1 08:40:40 np0005464214 dnf[4642]: Metadata cache refreshed recently.
Oct  1 08:40:40 np0005464214 systemd[1]: dnf-makecache.service: Deactivated successfully.
Oct  1 08:40:40 np0005464214 systemd[1]: Finished dnf makecache.
Oct  1 08:42:11 np0005464214 systemd-logind[818]: New session 3 of user zuul.
Oct  1 08:42:11 np0005464214 systemd[1]: Started Session 3 of User zuul.
Oct  1 08:42:11 np0005464214 python3[4684]: ansible-ansible.legacy.command Invoked with _raw_params=lsblk -nd -o MAJ:MIN /dev/vda#012 _uses_shell=True zuul_log_id=fa163ec2-ffbe-bc20-fbfc-000000001cea-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  1 08:42:11 np0005464214 python3[4713]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/init.scope state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 08:42:11 np0005464214 python3[4739]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/machine.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 08:42:12 np0005464214 python3[4765]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/system.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 08:42:12 np0005464214 python3[4791]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/user.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 08:42:12 np0005464214 python3[4819]: ansible-ansible.builtin.lineinfile Invoked with path=/etc/systemd/system.conf regexp=^#DefaultIOAccounting=no line=DefaultIOAccounting=yes state=present backrefs=False create=False backup=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 08:42:12 np0005464214 python3[4819]: ansible-ansible.builtin.lineinfile [WARNING] Module remote_tmp /root/.ansible/tmp did not exist and was created with a mode of 0700, this may cause issues when running as another user. To avoid this, create the remote_tmp dir with the correct permissions manually
Oct  1 08:42:13 np0005464214 python3[4845]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct  1 08:42:13 np0005464214 systemd[1]: Reloading.
Oct  1 08:42:13 np0005464214 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  1 08:42:15 np0005464214 python3[4901]: ansible-ansible.builtin.wait_for Invoked with path=/sys/fs/cgroup/system.slice/io.max state=present timeout=30 host=127.0.0.1 connect_timeout=5 delay=0 active_connection_states=['ESTABLISHED', 'FIN_WAIT1', 'FIN_WAIT2', 'SYN_RECV', 'SYN_SENT', 'TIME_WAIT'] sleep=1 port=None search_regex=None exclude_hosts=None msg=None
Oct  1 08:42:15 np0005464214 python3[4927]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/init.scope/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  1 08:42:16 np0005464214 python3[4955]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/machine.slice/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  1 08:42:16 np0005464214 python3[4983]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/system.slice/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  1 08:42:16 np0005464214 python3[5011]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/user.slice/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  1 08:42:17 np0005464214 python3[5038]: ansible-ansible.legacy.command Invoked with _raw_params=echo "init";    cat /sys/fs/cgroup/init.scope/io.max; echo "machine"; cat /sys/fs/cgroup/machine.slice/io.max; echo "system";  cat /sys/fs/cgroup/system.slice/io.max; echo "user";    cat /sys/fs/cgroup/user.slice/io.max;#012 _uses_shell=True zuul_log_id=fa163ec2-ffbe-bc20-fbfc-000000001cf0-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  1 08:42:17 np0005464214 python3[5068]: ansible-ansible.builtin.stat Invoked with path=/sys/fs/cgroup/kubepods.slice/io.max follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  1 08:42:19 np0005464214 systemd-logind[818]: Session 3 logged out. Waiting for processes to exit.
Oct  1 08:42:19 np0005464214 systemd[1]: session-3.scope: Deactivated successfully.
Oct  1 08:42:19 np0005464214 systemd[1]: session-3.scope: Consumed 3.494s CPU time.
Oct  1 08:42:19 np0005464214 systemd-logind[818]: Removed session 3.
Oct  1 08:42:21 np0005464214 systemd-logind[818]: New session 4 of user zuul.
Oct  1 08:42:21 np0005464214 systemd[1]: Started Session 4 of User zuul.
Oct  1 08:42:21 np0005464214 python3[5105]: ansible-ansible.legacy.dnf Invoked with name=['podman', 'buildah'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Oct  1 08:42:35 np0005464214 kernel: SELinux:  Converting 366 SID table entries...
Oct  1 08:42:35 np0005464214 kernel: SELinux:  policy capability network_peer_controls=1
Oct  1 08:42:35 np0005464214 kernel: SELinux:  policy capability open_perms=1
Oct  1 08:42:35 np0005464214 kernel: SELinux:  policy capability extended_socket_class=1
Oct  1 08:42:35 np0005464214 kernel: SELinux:  policy capability always_check_network=0
Oct  1 08:42:35 np0005464214 kernel: SELinux:  policy capability cgroup_seclabel=1
Oct  1 08:42:35 np0005464214 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct  1 08:42:35 np0005464214 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct  1 08:42:44 np0005464214 kernel: SELinux:  Converting 366 SID table entries...
Oct  1 08:42:44 np0005464214 kernel: SELinux:  policy capability network_peer_controls=1
Oct  1 08:42:44 np0005464214 kernel: SELinux:  policy capability open_perms=1
Oct  1 08:42:44 np0005464214 kernel: SELinux:  policy capability extended_socket_class=1
Oct  1 08:42:44 np0005464214 kernel: SELinux:  policy capability always_check_network=0
Oct  1 08:42:44 np0005464214 kernel: SELinux:  policy capability cgroup_seclabel=1
Oct  1 08:42:44 np0005464214 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct  1 08:42:44 np0005464214 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct  1 08:42:53 np0005464214 kernel: SELinux:  Converting 366 SID table entries...
Oct  1 08:42:53 np0005464214 kernel: SELinux:  policy capability network_peer_controls=1
Oct  1 08:42:53 np0005464214 kernel: SELinux:  policy capability open_perms=1
Oct  1 08:42:53 np0005464214 kernel: SELinux:  policy capability extended_socket_class=1
Oct  1 08:42:53 np0005464214 kernel: SELinux:  policy capability always_check_network=0
Oct  1 08:42:53 np0005464214 kernel: SELinux:  policy capability cgroup_seclabel=1
Oct  1 08:42:53 np0005464214 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct  1 08:42:53 np0005464214 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct  1 08:42:54 np0005464214 setsebool[5167]: The virt_use_nfs policy boolean was changed to 1 by root
Oct  1 08:42:54 np0005464214 setsebool[5167]: The virt_sandbox_use_all_caps policy boolean was changed to 1 by root
Oct  1 08:43:05 np0005464214 kernel: SELinux:  Converting 369 SID table entries...
Oct  1 08:43:05 np0005464214 kernel: SELinux:  policy capability network_peer_controls=1
Oct  1 08:43:05 np0005464214 kernel: SELinux:  policy capability open_perms=1
Oct  1 08:43:05 np0005464214 kernel: SELinux:  policy capability extended_socket_class=1
Oct  1 08:43:05 np0005464214 kernel: SELinux:  policy capability always_check_network=0
Oct  1 08:43:05 np0005464214 kernel: SELinux:  policy capability cgroup_seclabel=1
Oct  1 08:43:05 np0005464214 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct  1 08:43:05 np0005464214 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct  1 08:43:23 np0005464214 dbus-broker-launch[786]: avc:  op=load_policy lsm=selinux seqno=6 res=1
Oct  1 08:43:24 np0005464214 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct  1 08:43:24 np0005464214 systemd[1]: Starting man-db-cache-update.service...
Oct  1 08:43:24 np0005464214 systemd[1]: Reloading.
Oct  1 08:43:24 np0005464214 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  1 08:43:24 np0005464214 systemd[1]: Queuing reload/restart jobs for marked units…
Oct  1 08:43:25 np0005464214 systemd[1]: Starting PackageKit Daemon...
Oct  1 08:43:25 np0005464214 systemd[1]: Starting Authorization Manager...
Oct  1 08:43:25 np0005464214 polkitd[6665]: Started polkitd version 0.117
Oct  1 08:43:25 np0005464214 systemd[1]: Started Authorization Manager.
Oct  1 08:43:25 np0005464214 systemd[1]: Started PackageKit Daemon.
Oct  1 08:43:26 np0005464214 python3[7429]: ansible-ansible.legacy.command Invoked with _raw_params=echo "openstack-k8s-operators+cirobot"#012 _uses_shell=True zuul_log_id=fa163ec2-ffbe-db01-fad1-00000000000a-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  1 08:43:27 np0005464214 kernel: evm: overlay not supported
Oct  1 08:43:27 np0005464214 systemd[1423]: Starting D-Bus User Message Bus...
Oct  1 08:43:27 np0005464214 dbus-broker-launch[8320]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +31: Eavesdropping is deprecated and ignored
Oct  1 08:43:27 np0005464214 systemd[1423]: Started D-Bus User Message Bus.
Oct  1 08:43:27 np0005464214 dbus-broker-launch[8320]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +33: Eavesdropping is deprecated and ignored
Oct  1 08:43:27 np0005464214 dbus-broker-lau[8320]: Ready
Oct  1 08:43:27 np0005464214 systemd[1423]: selinux: avc:  op=load_policy lsm=selinux seqno=6 res=1
Oct  1 08:43:27 np0005464214 systemd[1423]: Created slice Slice /user.
Oct  1 08:43:27 np0005464214 systemd[1423]: podman-8174.scope: unit configures an IP firewall, but not running as root.
Oct  1 08:43:27 np0005464214 systemd[1423]: (This warning is only shown for the first unit using IP firewalling.)
Oct  1 08:43:27 np0005464214 systemd[1423]: Started podman-8174.scope.
Oct  1 08:43:27 np0005464214 systemd[1423]: Started podman-pause-c7baf0b7.scope.
Oct  1 08:43:28 np0005464214 python3[9040]: ansible-ansible.builtin.blockinfile Invoked with state=present insertafter=EOF dest=/etc/containers/registries.conf content=[[registry]]#012location = "38.102.83.113:5001"#012insecure = true path=/etc/containers/registries.conf block=[[registry]]#012location = "38.102.83.113:5001"#012insecure = true marker=# {mark} ANSIBLE MANAGED BLOCK create=False backup=False marker_begin=BEGIN marker_end=END unsafe_writes=False insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 08:43:28 np0005464214 systemd[1]: session-4.scope: Deactivated successfully.
Oct  1 08:43:28 np0005464214 systemd[1]: session-4.scope: Consumed 58.997s CPU time.
Oct  1 08:43:28 np0005464214 systemd-logind[818]: Session 4 logged out. Waiting for processes to exit.
Oct  1 08:43:28 np0005464214 systemd-logind[818]: Removed session 4.
Oct  1 08:43:51 np0005464214 systemd-logind[818]: New session 5 of user zuul.
Oct  1 08:43:51 np0005464214 systemd[1]: Started Session 5 of User zuul.
Oct  1 08:43:52 np0005464214 python3[19325]: ansible-ansible.posix.authorized_key Invoked with user=zuul key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNR4pTn4diMSkjwSG70fVeti9Lf6A4B/Bmz+ENT8b+tD8PK6ZGURxDMk3ySuFdE0LGwIJtSh3Ou06MeEB6m4ODI= zuul@np0005464222.novalocal#012 manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  1 08:43:52 np0005464214 python3[19489]: ansible-ansible.posix.authorized_key Invoked with user=root key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNR4pTn4diMSkjwSG70fVeti9Lf6A4B/Bmz+ENT8b+tD8PK6ZGURxDMk3ySuFdE0LGwIJtSh3Ou06MeEB6m4ODI= zuul@np0005464222.novalocal#012 manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  1 08:43:53 np0005464214 python3[19807]: ansible-ansible.builtin.user Invoked with name=cloud-admin shell=/bin/bash state=present non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on np0005464214.novalocal update_password=always uid=None group=None groups=None comment=None home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None
Oct  1 08:43:53 np0005464214 python3[20036]: ansible-ansible.posix.authorized_key Invoked with user=cloud-admin key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNR4pTn4diMSkjwSG70fVeti9Lf6A4B/Bmz+ENT8b+tD8PK6ZGURxDMk3ySuFdE0LGwIJtSh3Ou06MeEB6m4ODI= zuul@np0005464222.novalocal#012 manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  1 08:43:53 np0005464214 python3[20303]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/cloud-admin follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct  1 08:43:54 np0005464214 python3[20608]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/cloud-admin mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1759322633.7521372-135-217831172634294/source _original_basename=tmpzf8apmpw follow=False checksum=e7614e5ad3ab06eaae55b8efaa2ed81b63ea5634 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 08:43:55 np0005464214 python3[20939]: ansible-ansible.builtin.hostname Invoked with name=compute-0 use=systemd
Oct  1 08:43:55 np0005464214 systemd[1]: Starting Hostname Service...
Oct  1 08:43:55 np0005464214 systemd[1]: Started Hostname Service.
Oct  1 08:43:56 np0005464214 systemd-hostnamed[21072]: Changed pretty hostname to 'compute-0'
Oct  1 08:43:56 np0005464214 systemd-hostnamed[21072]: Hostname set to <compute-0> (static)
Oct  1 08:43:56 np0005464214 NetworkManager[4330]: <info>  [1759322636.5176] hostname: static hostname changed from "np0005464214.novalocal" to "compute-0"
Oct  1 08:43:56 np0005464214 systemd[1]: Starting Network Manager Script Dispatcher Service...
Oct  1 08:43:56 np0005464214 systemd[1]: Started Network Manager Script Dispatcher Service.
Oct  1 08:43:56 np0005464214 systemd[1]: session-5.scope: Deactivated successfully.
Oct  1 08:43:56 np0005464214 systemd[1]: session-5.scope: Consumed 2.240s CPU time.
Oct  1 08:43:56 np0005464214 systemd-logind[818]: Session 5 logged out. Waiting for processes to exit.
Oct  1 08:43:56 np0005464214 systemd-logind[818]: Removed session 5.
Oct  1 08:44:06 np0005464214 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Oct  1 08:44:10 np0005464214 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Oct  1 08:44:10 np0005464214 systemd[1]: Finished man-db-cache-update.service.
Oct  1 08:44:10 np0005464214 systemd[1]: man-db-cache-update.service: Consumed 55.619s CPU time.
Oct  1 08:44:10 np0005464214 systemd[1]: run-r5a754d2d04604b12a8c29cf2632f439c.service: Deactivated successfully.
Oct  1 08:44:26 np0005464214 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Oct  1 08:47:21 np0005464214 systemd-logind[818]: New session 6 of user zuul.
Oct  1 08:47:21 np0005464214 systemd[1]: Started Session 6 of User zuul.
Oct  1 08:47:22 np0005464214 python3[27059]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  1 08:47:23 np0005464214 python3[27175]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct  1 08:47:24 np0005464214 python3[27248]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1759322843.6316578-30783-20533283724216/source mode=0755 _original_basename=delorean.repo follow=False checksum=bb4c2ff9dad546f135d54d9729ea11b84117755d backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 08:47:24 np0005464214 python3[27274]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean-antelope-testing.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct  1 08:47:25 np0005464214 python3[27347]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1759322843.6316578-30783-20533283724216/source mode=0755 _original_basename=delorean-antelope-testing.repo follow=False checksum=0bdbb813b840548359ae77c28d76ca272ccaf31b backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 08:47:25 np0005464214 python3[27373]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-highavailability.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct  1 08:47:25 np0005464214 python3[27446]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1759322843.6316578-30783-20533283724216/source mode=0755 _original_basename=repo-setup-centos-highavailability.repo follow=False checksum=55d0f695fd0d8f47cbc3044ce0dcf5f88862490f backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 08:47:26 np0005464214 python3[27472]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-powertools.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct  1 08:47:26 np0005464214 python3[27545]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1759322843.6316578-30783-20533283724216/source mode=0755 _original_basename=repo-setup-centos-powertools.repo follow=False checksum=4b0cf99aa89c5c5be0151545863a7a7568f67568 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 08:47:26 np0005464214 python3[27571]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-appstream.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct  1 08:47:27 np0005464214 python3[27644]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1759322843.6316578-30783-20533283724216/source mode=0755 _original_basename=repo-setup-centos-appstream.repo follow=False checksum=e89244d2503b2996429dda1857290c1e91e393a1 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 08:47:27 np0005464214 python3[27670]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-baseos.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct  1 08:47:27 np0005464214 python3[27743]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1759322843.6316578-30783-20533283724216/source mode=0755 _original_basename=repo-setup-centos-baseos.repo follow=False checksum=36d926db23a40dbfa5c84b5e4d43eac6fa2301d6 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 08:47:28 np0005464214 python3[27769]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo.md5 follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct  1 08:47:28 np0005464214 python3[27842]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1759322843.6316578-30783-20533283724216/source mode=0755 _original_basename=delorean.repo.md5 follow=False checksum=d911291791b114a72daf18f370e91cb1ae300933 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 08:47:42 np0005464214 python3[27902]: ansible-ansible.legacy.command Invoked with _raw_params=hostname _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  1 08:48:30 np0005464214 systemd[1]: packagekit.service: Deactivated successfully.
Oct  1 08:52:42 np0005464214 systemd[1]: session-6.scope: Deactivated successfully.
Oct  1 08:52:42 np0005464214 systemd[1]: session-6.scope: Consumed 5.352s CPU time.
Oct  1 08:52:42 np0005464214 systemd-logind[818]: Session 6 logged out. Waiting for processes to exit.
Oct  1 08:52:42 np0005464214 systemd-logind[818]: Removed session 6.
Oct  1 08:58:46 np0005464214 systemd-logind[818]: New session 7 of user zuul.
Oct  1 08:58:46 np0005464214 systemd[1]: Started Session 7 of User zuul.
Oct  1 08:58:48 np0005464214 python3.9[28138]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  1 08:58:49 np0005464214 python3.9[28319]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail#012pushd /var/tmp#012curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz#012pushd repo-setup-main#012python3 -m venv ./venv#012PBR_VERSION=0.0.0 ./venv/bin/pip install ./#012./venv/bin/repo-setup current-podified -b antelope#012popd#012rm -rf repo-setup-main#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  1 08:58:56 np0005464214 systemd[1]: session-7.scope: Deactivated successfully.
Oct  1 08:58:56 np0005464214 systemd[1]: session-7.scope: Consumed 7.517s CPU time.
Oct  1 08:58:56 np0005464214 systemd-logind[818]: Session 7 logged out. Waiting for processes to exit.
Oct  1 08:58:56 np0005464214 systemd-logind[818]: Removed session 7.
Oct  1 08:59:17 np0005464214 systemd-logind[818]: New session 8 of user zuul.
Oct  1 08:59:17 np0005464214 systemd[1]: Started Session 8 of User zuul.
Oct  1 08:59:18 np0005464214 python3.9[28534]: ansible-ansible.legacy.ping Invoked with data=pong
Oct  1 08:59:19 np0005464214 python3.9[28708]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  1 08:59:20 np0005464214 python3.9[28860]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  1 08:59:20 np0005464214 python3.9[29013]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  1 08:59:21 np0005464214 python3.9[29165]: ansible-ansible.builtin.file Invoked with mode=755 path=/etc/ansible/facts.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 08:59:22 np0005464214 python3.9[29317]: ansible-ansible.legacy.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 08:59:23 np0005464214 python3.9[29440]: ansible-ansible.legacy.copy Invoked with dest=/etc/ansible/facts.d/bootc.fact mode=755 src=/home/zuul/.ansible/tmp/ansible-tmp-1759323561.9151914-73-240031728466491/.source.fact _original_basename=bootc.fact follow=False checksum=eb4122ce7fc50a38407beb511c4ff8c178005b12 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 08:59:23 np0005464214 python3.9[29592]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  1 08:59:24 np0005464214 python3.9[29748]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  1 08:59:25 np0005464214 python3.9[29898]: ansible-ansible.builtin.service_facts Invoked
Oct  1 08:59:30 np0005464214 python3.9[30156]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 08:59:31 np0005464214 python3.9[30306]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  1 08:59:32 np0005464214 python3.9[30460]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  1 08:59:33 np0005464214 python3.9[30618]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct  1 08:59:34 np0005464214 python3.9[30702]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct  1 09:00:15 np0005464214 systemd[1]: Reloading.
Oct  1 09:00:15 np0005464214 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  1 09:00:16 np0005464214 systemd[1]: Listening on Device-mapper event daemon FIFOs.
Oct  1 09:00:16 np0005464214 systemd[1]: Reloading.
Oct  1 09:00:16 np0005464214 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  1 09:00:16 np0005464214 systemd[1]: Starting Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling...
Oct  1 09:00:16 np0005464214 systemd[1]: Finished Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling.
Oct  1 09:00:16 np0005464214 systemd[1]: Reloading.
Oct  1 09:00:16 np0005464214 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  1 09:00:16 np0005464214 systemd[1]: Listening on LVM2 poll daemon socket.
Oct  1 09:00:17 np0005464214 dbus-broker-launch[784]: Noticed file-system modification, trigger reload.
Oct  1 09:00:17 np0005464214 dbus-broker-launch[784]: Noticed file-system modification, trigger reload.
Oct  1 09:01:20 np0005464214 kernel: SELinux:  Converting 2714 SID table entries...
Oct  1 09:01:20 np0005464214 kernel: SELinux:  policy capability network_peer_controls=1
Oct  1 09:01:20 np0005464214 kernel: SELinux:  policy capability open_perms=1
Oct  1 09:01:20 np0005464214 kernel: SELinux:  policy capability extended_socket_class=1
Oct  1 09:01:20 np0005464214 kernel: SELinux:  policy capability always_check_network=0
Oct  1 09:01:20 np0005464214 kernel: SELinux:  policy capability cgroup_seclabel=1
Oct  1 09:01:20 np0005464214 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct  1 09:01:20 np0005464214 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct  1 09:01:21 np0005464214 dbus-broker-launch[786]: avc:  op=load_policy lsm=selinux seqno=8 res=1
Oct  1 09:01:21 np0005464214 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct  1 09:01:21 np0005464214 systemd[1]: Starting man-db-cache-update.service...
Oct  1 09:01:21 np0005464214 systemd[1]: Reloading.
Oct  1 09:01:21 np0005464214 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  1 09:01:21 np0005464214 systemd[1]: Queuing reload/restart jobs for marked units…
Oct  1 09:01:21 np0005464214 systemd[1]: Starting PackageKit Daemon...
Oct  1 09:01:21 np0005464214 systemd[1]: Started PackageKit Daemon.
Oct  1 09:01:22 np0005464214 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Oct  1 09:01:22 np0005464214 systemd[1]: Finished man-db-cache-update.service.
Oct  1 09:01:22 np0005464214 systemd[1]: man-db-cache-update.service: Consumed 1.131s CPU time.
Oct  1 09:01:22 np0005464214 systemd[1]: run-r2f79dcdcc1d24daf8a6368bd19999ca0.service: Deactivated successfully.
Oct  1 09:01:22 np0005464214 python3.9[32241]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  1 09:01:24 np0005464214 python3.9[32522]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Oct  1 09:01:25 np0005464214 python3.9[32674]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Oct  1 09:01:28 np0005464214 python3.9[32827]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 09:01:29 np0005464214 python3.9[32979]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
Oct  1 09:01:30 np0005464214 python3.9[33131]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  1 09:01:31 np0005464214 python3.9[33283]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 09:01:31 np0005464214 python3.9[33406]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759323690.51836-227-57260806918456/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=9976f3964d5bacb9b657222aaa8308ffa5d61acc backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 09:01:34 np0005464214 python3.9[33558]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Oct  1 09:01:35 np0005464214 python3.9[33712]: ansible-ansible.builtin.group Invoked with gid=107 name=qemu state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Oct  1 09:01:35 np0005464214 rsyslogd[1009]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct  1 09:01:36 np0005464214 python3.9[33871]: ansible-ansible.builtin.user Invoked with comment=qemu user group=qemu groups=[''] name=qemu shell=/sbin/nologin state=present uid=107 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Oct  1 09:01:37 np0005464214 python3.9[34031]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Oct  1 09:01:38 np0005464214 python3.9[34184]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Oct  1 09:01:39 np0005464214 python3.9[34342]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Oct  1 09:01:40 np0005464214 python3.9[34494]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct  1 09:01:42 np0005464214 python3.9[34647]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  1 09:01:42 np0005464214 python3.9[34799]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 09:01:43 np0005464214 python3.9[34922]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759323702.4389317-322-104338917592993/.source.conf follow=False _original_basename=edpm-modprobe.conf.j2 checksum=8021efe01721d8fa8cab46b95c00ec1be6dbb9d0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct  1 09:01:44 np0005464214 python3.9[35076]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct  1 09:01:44 np0005464214 systemd[1]: Starting Load Kernel Modules...
Oct  1 09:01:44 np0005464214 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Oct  1 09:01:44 np0005464214 kernel: Bridge firewalling registered
Oct  1 09:01:44 np0005464214 systemd-modules-load[35080]: Inserted module 'br_netfilter'
Oct  1 09:01:44 np0005464214 systemd[1]: Finished Load Kernel Modules.
Oct  1 09:01:45 np0005464214 python3.9[35235]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 09:01:45 np0005464214 python3.9[35358]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysctl.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759323704.8960533-345-37116191046091/.source.conf follow=False _original_basename=edpm-sysctl.conf.j2 checksum=2a366439721b855adcfe4d7f152babb68596a007 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct  1 09:01:46 np0005464214 python3.9[35510]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct  1 09:01:50 np0005464214 dbus-broker-launch[784]: Noticed file-system modification, trigger reload.
Oct  1 09:01:50 np0005464214 dbus-broker-launch[784]: Noticed file-system modification, trigger reload.
Oct  1 09:01:50 np0005464214 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct  1 09:01:50 np0005464214 systemd[1]: Starting man-db-cache-update.service...
Oct  1 09:01:50 np0005464214 systemd[1]: Reloading.
Oct  1 09:01:50 np0005464214 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  1 09:01:50 np0005464214 systemd[1]: Queuing reload/restart jobs for marked units…
Oct  1 09:01:52 np0005464214 python3.9[37067]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  1 09:01:53 np0005464214 python3.9[38016]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Oct  1 09:01:53 np0005464214 python3.9[38781]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  1 09:01:54 np0005464214 python3.9[39637]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/tuned-adm profile throughput-performance _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  1 09:01:54 np0005464214 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Oct  1 09:01:54 np0005464214 systemd[1]: Finished man-db-cache-update.service.
Oct  1 09:01:54 np0005464214 systemd[1]: man-db-cache-update.service: Consumed 4.385s CPU time.
Oct  1 09:01:54 np0005464214 systemd[1]: run-r3be6f599e91444c8abb7ea88fc75c5d1.service: Deactivated successfully.
Oct  1 09:01:54 np0005464214 systemd[1]: Starting Dynamic System Tuning Daemon...
Oct  1 09:01:54 np0005464214 systemd[1]: Started Dynamic System Tuning Daemon.
Oct  1 09:01:55 np0005464214 python3.9[40056]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  1 09:01:55 np0005464214 systemd[1]: Stopping Dynamic System Tuning Daemon...
Oct  1 09:01:55 np0005464214 systemd[1]: tuned.service: Deactivated successfully.
Oct  1 09:01:55 np0005464214 systemd[1]: Stopped Dynamic System Tuning Daemon.
Oct  1 09:01:55 np0005464214 systemd[1]: Starting Dynamic System Tuning Daemon...
Oct  1 09:01:55 np0005464214 systemd[1]: Started Dynamic System Tuning Daemon.
Oct  1 09:01:56 np0005464214 python3.9[40220]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Oct  1 09:01:59 np0005464214 python3.9[40372]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  1 09:01:59 np0005464214 systemd[1]: Reloading.
Oct  1 09:01:59 np0005464214 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  1 09:02:00 np0005464214 python3.9[40561]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  1 09:02:00 np0005464214 systemd[1]: Reloading.
Oct  1 09:02:00 np0005464214 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  1 09:02:01 np0005464214 python3.9[40750]: ansible-ansible.legacy.command Invoked with _raw_params=mkswap "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  1 09:02:01 np0005464214 python3.9[40903]: ansible-ansible.legacy.command Invoked with _raw_params=swapon "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  1 09:02:01 np0005464214 kernel: Adding 1048572k swap on /swap.  Priority:-2 extents:1 across:1048572k 
Oct  1 09:02:02 np0005464214 python3.9[41057]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/bin/update-ca-trust _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  1 09:02:04 np0005464214 python3.9[41219]: ansible-ansible.legacy.command Invoked with _raw_params=echo 2 >/sys/kernel/mm/ksm/run _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  1 09:02:05 np0005464214 python3.9[41372]: ansible-ansible.builtin.systemd Invoked with name=systemd-sysctl.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct  1 09:02:05 np0005464214 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Oct  1 09:02:05 np0005464214 systemd[1]: Stopped Apply Kernel Variables.
Oct  1 09:02:05 np0005464214 systemd[1]: Stopping Apply Kernel Variables...
Oct  1 09:02:05 np0005464214 systemd[1]: Starting Apply Kernel Variables...
Oct  1 09:02:05 np0005464214 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Oct  1 09:02:05 np0005464214 systemd[1]: Finished Apply Kernel Variables.
Oct  1 09:02:05 np0005464214 systemd[1]: session-8.scope: Deactivated successfully.
Oct  1 09:02:05 np0005464214 systemd[1]: session-8.scope: Consumed 2min 9.398s CPU time.
Oct  1 09:02:05 np0005464214 systemd-logind[818]: Session 8 logged out. Waiting for processes to exit.
Oct  1 09:02:05 np0005464214 systemd-logind[818]: Removed session 8.
Oct  1 09:02:11 np0005464214 systemd-logind[818]: New session 9 of user zuul.
Oct  1 09:02:11 np0005464214 systemd[1]: Started Session 9 of User zuul.
Oct  1 09:02:12 np0005464214 python3.9[41558]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  1 09:02:13 np0005464214 python3.9[41714]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Oct  1 09:02:14 np0005464214 python3.9[41867]: ansible-ansible.builtin.group Invoked with gid=42476 name=openvswitch state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Oct  1 09:02:15 np0005464214 python3.9[42025]: ansible-ansible.builtin.user Invoked with comment=openvswitch user group=openvswitch groups=['hugetlbfs'] name=openvswitch shell=/sbin/nologin state=present uid=42476 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Oct  1 09:02:16 np0005464214 python3.9[42185]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct  1 09:02:17 np0005464214 python3.9[42269]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Oct  1 09:02:19 np0005464214 python3.9[42432]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct  1 09:02:30 np0005464214 kernel: SELinux:  Converting 2724 SID table entries...
Oct  1 09:02:30 np0005464214 kernel: SELinux:  policy capability network_peer_controls=1
Oct  1 09:02:30 np0005464214 kernel: SELinux:  policy capability open_perms=1
Oct  1 09:02:30 np0005464214 kernel: SELinux:  policy capability extended_socket_class=1
Oct  1 09:02:30 np0005464214 kernel: SELinux:  policy capability always_check_network=0
Oct  1 09:02:30 np0005464214 kernel: SELinux:  policy capability cgroup_seclabel=1
Oct  1 09:02:30 np0005464214 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct  1 09:02:30 np0005464214 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct  1 09:02:31 np0005464214 dbus-broker-launch[786]: avc:  op=load_policy lsm=selinux seqno=9 res=1
Oct  1 09:02:31 np0005464214 systemd[1]: Started daily update of the root trust anchor for DNSSEC.
Oct  1 09:02:32 np0005464214 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct  1 09:02:32 np0005464214 systemd[1]: Starting man-db-cache-update.service...
Oct  1 09:02:32 np0005464214 systemd[1]: Reloading.
Oct  1 09:02:32 np0005464214 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  1 09:02:32 np0005464214 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  1 09:02:32 np0005464214 systemd[1]: Queuing reload/restart jobs for marked units…
Oct  1 09:02:33 np0005464214 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Oct  1 09:02:33 np0005464214 systemd[1]: Finished man-db-cache-update.service.
Oct  1 09:02:33 np0005464214 systemd[1]: run-r55f3e64d89c948958c366809268950d5.service: Deactivated successfully.
Oct  1 09:02:34 np0005464214 python3.9[43536]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct  1 09:02:34 np0005464214 systemd[1]: Reloading.
Oct  1 09:02:34 np0005464214 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  1 09:02:34 np0005464214 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  1 09:02:34 np0005464214 systemd[1]: Starting Open vSwitch Database Unit...
Oct  1 09:02:34 np0005464214 chown[43577]: /usr/bin/chown: cannot access '/run/openvswitch': No such file or directory
Oct  1 09:02:34 np0005464214 ovs-ctl[43582]: /etc/openvswitch/conf.db does not exist ... (warning).
Oct  1 09:02:34 np0005464214 ovs-ctl[43582]: Creating empty database /etc/openvswitch/conf.db [  OK  ]
Oct  1 09:02:35 np0005464214 ovs-ctl[43582]: Starting ovsdb-server [  OK  ]
Oct  1 09:02:35 np0005464214 ovs-vsctl[43631]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait -- init -- set Open_vSwitch . db-version=8.5.1
Oct  1 09:02:35 np0005464214 ovs-vsctl[43650]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait set Open_vSwitch . ovs-version=3.3.5-115.el9s "external-ids:system-id=\"7280030e-2ba6-406c-9fae-f8284a927c47\"" "external-ids:rundir=\"/var/run/openvswitch\"" "system-type=\"centos\"" "system-version=\"9\""
Oct  1 09:02:35 np0005464214 ovs-ctl[43582]: Configuring Open vSwitch system IDs [  OK  ]
Oct  1 09:02:35 np0005464214 ovs-ctl[43582]: Enabling remote OVSDB managers [  OK  ]
Oct  1 09:02:35 np0005464214 systemd[1]: Started Open vSwitch Database Unit.
Oct  1 09:02:35 np0005464214 ovs-vsctl[43656]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-0
Oct  1 09:02:35 np0005464214 systemd[1]: Starting Open vSwitch Delete Transient Ports...
Oct  1 09:02:35 np0005464214 systemd[1]: Finished Open vSwitch Delete Transient Ports.
Oct  1 09:02:35 np0005464214 systemd[1]: Starting Open vSwitch Forwarding Unit...
Oct  1 09:02:35 np0005464214 kernel: openvswitch: Open vSwitch switching datapath
Oct  1 09:02:35 np0005464214 ovs-ctl[43700]: Inserting openvswitch module [  OK  ]
Oct  1 09:02:35 np0005464214 ovs-ctl[43669]: Starting ovs-vswitchd [  OK  ]
Oct  1 09:02:35 np0005464214 ovs-vsctl[43718]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-0
Oct  1 09:02:35 np0005464214 ovs-ctl[43669]: Enabling remote OVSDB managers [  OK  ]
Oct  1 09:02:35 np0005464214 systemd[1]: Started Open vSwitch Forwarding Unit.
Oct  1 09:02:35 np0005464214 systemd[1]: Starting Open vSwitch...
Oct  1 09:02:35 np0005464214 systemd[1]: Finished Open vSwitch.
Oct  1 09:02:36 np0005464214 python3.9[43870]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  1 09:02:37 np0005464214 python3.9[44022]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None
Oct  1 09:02:38 np0005464214 kernel: SELinux:  Converting 2738 SID table entries...
Oct  1 09:02:38 np0005464214 kernel: SELinux:  policy capability network_peer_controls=1
Oct  1 09:02:38 np0005464214 kernel: SELinux:  policy capability open_perms=1
Oct  1 09:02:38 np0005464214 kernel: SELinux:  policy capability extended_socket_class=1
Oct  1 09:02:38 np0005464214 kernel: SELinux:  policy capability always_check_network=0
Oct  1 09:02:38 np0005464214 kernel: SELinux:  policy capability cgroup_seclabel=1
Oct  1 09:02:38 np0005464214 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct  1 09:02:38 np0005464214 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct  1 09:02:39 np0005464214 python3.9[44181]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  1 09:02:40 np0005464214 dbus-broker-launch[786]: avc:  op=load_policy lsm=selinux seqno=10 res=1
Oct  1 09:02:40 np0005464214 python3.9[44339]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct  1 09:02:42 np0005464214 python3.9[44492]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  1 09:02:43 np0005464214 python3.9[44779]: ansible-ansible.builtin.file Invoked with mode=0750 path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Oct  1 09:02:44 np0005464214 python3.9[44929]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  1 09:02:45 np0005464214 python3.9[45083]: ansible-ansible.legacy.dnf Invoked with name=['NetworkManager-ovs'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct  1 09:02:47 np0005464214 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct  1 09:02:47 np0005464214 systemd[1]: Starting man-db-cache-update.service...
Oct  1 09:02:47 np0005464214 systemd[1]: Reloading.
Oct  1 09:02:47 np0005464214 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  1 09:02:47 np0005464214 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  1 09:02:47 np0005464214 systemd[1]: Queuing reload/restart jobs for marked units…
Oct  1 09:02:47 np0005464214 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Oct  1 09:02:47 np0005464214 systemd[1]: Finished man-db-cache-update.service.
Oct  1 09:02:47 np0005464214 systemd[1]: run-r191bc22dc7f04e12a3a0318a0a9d1d33.service: Deactivated successfully.
Oct  1 09:02:48 np0005464214 python3.9[45400]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct  1 09:02:48 np0005464214 systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Oct  1 09:02:48 np0005464214 systemd[1]: Stopped Network Manager Wait Online.
Oct  1 09:02:48 np0005464214 systemd[1]: Stopping Network Manager Wait Online...
Oct  1 09:02:48 np0005464214 systemd[1]: Stopping Network Manager...
Oct  1 09:02:48 np0005464214 NetworkManager[4330]: <info>  [1759323768.5105] caught SIGTERM, shutting down normally.
Oct  1 09:02:48 np0005464214 NetworkManager[4330]: <info>  [1759323768.5133] dhcp4 (eth0): canceled DHCP transaction
Oct  1 09:02:48 np0005464214 NetworkManager[4330]: <info>  [1759323768.5134] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Oct  1 09:02:48 np0005464214 NetworkManager[4330]: <info>  [1759323768.5134] dhcp4 (eth0): state changed no lease
Oct  1 09:02:48 np0005464214 NetworkManager[4330]: <info>  [1759323768.5139] manager: NetworkManager state is now CONNECTED_SITE
Oct  1 09:02:48 np0005464214 NetworkManager[4330]: <info>  [1759323768.5239] exiting (success)
Oct  1 09:02:48 np0005464214 systemd[1]: Starting Network Manager Script Dispatcher Service...
Oct  1 09:02:48 np0005464214 systemd[1]: Started Network Manager Script Dispatcher Service.
Oct  1 09:02:48 np0005464214 systemd[1]: NetworkManager.service: Deactivated successfully.
Oct  1 09:02:48 np0005464214 systemd[1]: Stopped Network Manager.
Oct  1 09:02:48 np0005464214 systemd[1]: NetworkManager.service: Consumed 9.698s CPU time, 4.1M memory peak, read 0B from disk, written 22.5K to disk.
Oct  1 09:02:48 np0005464214 systemd[1]: Starting Network Manager...
Oct  1 09:02:48 np0005464214 NetworkManager[45411]: <info>  [1759323768.5894] NetworkManager (version 1.54.1-1.el9) is starting... (after a restart, boot:59648e32-2da2-4a47-989c-dbddfc6922f6)
Oct  1 09:02:48 np0005464214 NetworkManager[45411]: <info>  [1759323768.5897] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Oct  1 09:02:48 np0005464214 NetworkManager[45411]: <info>  [1759323768.5958] manager[0x55e97e42a090]: monitoring kernel firmware directory '/lib/firmware'.
Oct  1 09:02:48 np0005464214 systemd[1]: Starting Hostname Service...
Oct  1 09:02:48 np0005464214 systemd[1]: Started Hostname Service.
Oct  1 09:02:48 np0005464214 NetworkManager[45411]: <info>  [1759323768.6704] hostname: hostname: using hostnamed
Oct  1 09:02:48 np0005464214 NetworkManager[45411]: <info>  [1759323768.6708] hostname: static hostname changed from (none) to "compute-0"
Oct  1 09:02:48 np0005464214 NetworkManager[45411]: <info>  [1759323768.6715] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Oct  1 09:02:48 np0005464214 NetworkManager[45411]: <info>  [1759323768.6721] manager[0x55e97e42a090]: rfkill: Wi-Fi hardware radio set enabled
Oct  1 09:02:48 np0005464214 NetworkManager[45411]: <info>  [1759323768.6722] manager[0x55e97e42a090]: rfkill: WWAN hardware radio set enabled
Oct  1 09:02:48 np0005464214 NetworkManager[45411]: <info>  [1759323768.6755] Loaded device plugin: NMOvsFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-ovs.so)
Oct  1 09:02:48 np0005464214 NetworkManager[45411]: <info>  [1759323768.6771] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-team.so)
Oct  1 09:02:48 np0005464214 NetworkManager[45411]: <info>  [1759323768.6772] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Oct  1 09:02:48 np0005464214 NetworkManager[45411]: <info>  [1759323768.6773] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Oct  1 09:02:48 np0005464214 NetworkManager[45411]: <info>  [1759323768.6774] manager: Networking is enabled by state file
Oct  1 09:02:48 np0005464214 NetworkManager[45411]: <info>  [1759323768.6777] settings: Loaded settings plugin: keyfile (internal)
Oct  1 09:02:48 np0005464214 NetworkManager[45411]: <info>  [1759323768.6783] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Oct  1 09:02:48 np0005464214 NetworkManager[45411]: <info>  [1759323768.6822] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Oct  1 09:02:48 np0005464214 NetworkManager[45411]: <info>  [1759323768.6836] dhcp: init: Using DHCP client 'internal'
Oct  1 09:02:48 np0005464214 NetworkManager[45411]: <info>  [1759323768.6840] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Oct  1 09:02:48 np0005464214 NetworkManager[45411]: <info>  [1759323768.6851] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  1 09:02:48 np0005464214 NetworkManager[45411]: <info>  [1759323768.6860] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Oct  1 09:02:48 np0005464214 NetworkManager[45411]: <info>  [1759323768.6874] device (lo): Activation: starting connection 'lo' (71a0a298-c086-43ce-b223-7fae93260bdf)
Oct  1 09:02:48 np0005464214 NetworkManager[45411]: <info>  [1759323768.6887] device (eth0): carrier: link connected
Oct  1 09:02:48 np0005464214 NetworkManager[45411]: <info>  [1759323768.6895] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Oct  1 09:02:48 np0005464214 NetworkManager[45411]: <info>  [1759323768.6905] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Oct  1 09:02:48 np0005464214 NetworkManager[45411]: <info>  [1759323768.6905] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Oct  1 09:02:48 np0005464214 NetworkManager[45411]: <info>  [1759323768.6918] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Oct  1 09:02:48 np0005464214 NetworkManager[45411]: <info>  [1759323768.6930] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Oct  1 09:02:48 np0005464214 NetworkManager[45411]: <info>  [1759323768.6941] device (eth1): carrier: link connected
Oct  1 09:02:48 np0005464214 NetworkManager[45411]: <info>  [1759323768.6948] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Oct  1 09:02:48 np0005464214 NetworkManager[45411]: <info>  [1759323768.6955] manager: (eth1): assume: will attempt to assume matching connection 'ci-private-network' (55fba695-b9ed-5ba6-ac3c-7f0c4ae7e99c) (indicated)
Oct  1 09:02:48 np0005464214 NetworkManager[45411]: <info>  [1759323768.6956] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Oct  1 09:02:48 np0005464214 NetworkManager[45411]: <info>  [1759323768.6964] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Oct  1 09:02:48 np0005464214 NetworkManager[45411]: <info>  [1759323768.6974] device (eth1): Activation: starting connection 'ci-private-network' (55fba695-b9ed-5ba6-ac3c-7f0c4ae7e99c)
Oct  1 09:02:48 np0005464214 systemd[1]: Started Network Manager.
Oct  1 09:02:48 np0005464214 NetworkManager[45411]: <info>  [1759323768.6983] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Oct  1 09:02:48 np0005464214 NetworkManager[45411]: <info>  [1759323768.6996] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Oct  1 09:02:48 np0005464214 NetworkManager[45411]: <info>  [1759323768.6999] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Oct  1 09:02:48 np0005464214 NetworkManager[45411]: <info>  [1759323768.7002] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Oct  1 09:02:48 np0005464214 NetworkManager[45411]: <info>  [1759323768.7007] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Oct  1 09:02:48 np0005464214 NetworkManager[45411]: <info>  [1759323768.7012] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Oct  1 09:02:48 np0005464214 NetworkManager[45411]: <info>  [1759323768.7014] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Oct  1 09:02:48 np0005464214 NetworkManager[45411]: <info>  [1759323768.7018] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Oct  1 09:02:48 np0005464214 NetworkManager[45411]: <info>  [1759323768.7024] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Oct  1 09:02:48 np0005464214 NetworkManager[45411]: <info>  [1759323768.7032] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Oct  1 09:02:48 np0005464214 NetworkManager[45411]: <info>  [1759323768.7035] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Oct  1 09:02:48 np0005464214 NetworkManager[45411]: <info>  [1759323768.7058] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Oct  1 09:02:48 np0005464214 NetworkManager[45411]: <info>  [1759323768.7075] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Oct  1 09:02:48 np0005464214 NetworkManager[45411]: <info>  [1759323768.7085] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Oct  1 09:02:48 np0005464214 NetworkManager[45411]: <info>  [1759323768.7089] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Oct  1 09:02:48 np0005464214 NetworkManager[45411]: <info>  [1759323768.7095] device (lo): Activation: successful, device activated.
Oct  1 09:02:48 np0005464214 systemd[1]: Starting Network Manager Wait Online...
Oct  1 09:02:48 np0005464214 NetworkManager[45411]: <info>  [1759323768.7101] dhcp4 (eth0): state changed new lease, address=38.102.83.245
Oct  1 09:02:48 np0005464214 NetworkManager[45411]: <info>  [1759323768.7108] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Oct  1 09:02:48 np0005464214 NetworkManager[45411]: <info>  [1759323768.7171] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Oct  1 09:02:48 np0005464214 NetworkManager[45411]: <info>  [1759323768.7175] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Oct  1 09:02:48 np0005464214 NetworkManager[45411]: <info>  [1759323768.7181] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Oct  1 09:02:48 np0005464214 NetworkManager[45411]: <info>  [1759323768.7184] manager: NetworkManager state is now CONNECTED_LOCAL
Oct  1 09:02:48 np0005464214 NetworkManager[45411]: <info>  [1759323768.7188] device (eth1): Activation: successful, device activated.
Oct  1 09:02:48 np0005464214 NetworkManager[45411]: <info>  [1759323768.7200] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Oct  1 09:02:48 np0005464214 NetworkManager[45411]: <info>  [1759323768.7202] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Oct  1 09:02:48 np0005464214 NetworkManager[45411]: <info>  [1759323768.7205] manager: NetworkManager state is now CONNECTED_SITE
Oct  1 09:02:48 np0005464214 NetworkManager[45411]: <info>  [1759323768.7208] device (eth0): Activation: successful, device activated.
Oct  1 09:02:48 np0005464214 NetworkManager[45411]: <info>  [1759323768.7213] manager: NetworkManager state is now CONNECTED_GLOBAL
Oct  1 09:02:48 np0005464214 NetworkManager[45411]: <info>  [1759323768.7215] manager: startup complete
Oct  1 09:02:48 np0005464214 systemd[1]: Finished Network Manager Wait Online.
Oct  1 09:02:49 np0005464214 python3.9[45626]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct  1 09:02:54 np0005464214 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct  1 09:02:54 np0005464214 systemd[1]: Starting man-db-cache-update.service...
Oct  1 09:02:54 np0005464214 systemd[1]: Reloading.
Oct  1 09:02:54 np0005464214 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  1 09:02:54 np0005464214 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  1 09:02:54 np0005464214 systemd[1]: Queuing reload/restart jobs for marked units…
Oct  1 09:02:55 np0005464214 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Oct  1 09:02:55 np0005464214 systemd[1]: Finished man-db-cache-update.service.
Oct  1 09:02:55 np0005464214 systemd[1]: run-r4393e0c1255343478f8d0f9fd380e944.service: Deactivated successfully.
Oct  1 09:02:56 np0005464214 python3.9[46089]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  1 09:02:57 np0005464214 python3.9[46241]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=no-auto-default path=/etc/NetworkManager/NetworkManager.conf section=main state=present value=* exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 09:02:57 np0005464214 python3.9[46395]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 09:02:58 np0005464214 python3.9[46547]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 09:02:58 np0005464214 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Oct  1 09:02:59 np0005464214 python3.9[46699]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 09:02:59 np0005464214 python3.9[46851]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 09:03:00 np0005464214 python3.9[47003]: ansible-ansible.legacy.stat Invoked with path=/etc/dhcp/dhclient-enter-hooks follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 09:03:01 np0005464214 python3.9[47126]: ansible-ansible.legacy.copy Invoked with dest=/etc/dhcp/dhclient-enter-hooks mode=0755 src=/home/zuul/.ansible/tmp/ansible-tmp-1759323780.0926003-229-33731058889208/.source _original_basename=.kz3ajbh2 follow=False checksum=f6278a40de79a9841f6ed1fc584538225566990c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 09:03:02 np0005464214 python3.9[47278]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/os-net-config state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 09:03:03 np0005464214 python3.9[47430]: ansible-edpm_os_net_config_mappings Invoked with net_config_data_lookup={}
Oct  1 09:03:03 np0005464214 python3.9[47582]: ansible-ansible.builtin.file Invoked with path=/var/lib/edpm-config/scripts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 09:03:06 np0005464214 python3.9[48009]: ansible-ansible.builtin.slurp Invoked with path=/etc/os-net-config/config.yaml src=/etc/os-net-config/config.yaml
Oct  1 09:03:07 np0005464214 ansible-async_wrapper.py[48184]: Invoked with j566808765024 300 /home/zuul/.ansible/tmp/ansible-tmp-1759323786.5309014-295-31303534030536/AnsiballZ_edpm_os_net_config.py _
Oct  1 09:03:07 np0005464214 ansible-async_wrapper.py[48187]: Starting module and watcher
Oct  1 09:03:07 np0005464214 ansible-async_wrapper.py[48187]: Start watching 48188 (300)
Oct  1 09:03:07 np0005464214 ansible-async_wrapper.py[48188]: Start module (48188)
Oct  1 09:03:07 np0005464214 ansible-async_wrapper.py[48184]: Return async_wrapper task started.
Oct  1 09:03:07 np0005464214 python3.9[48189]: ansible-edpm_os_net_config Invoked with cleanup=True config_file=/etc/os-net-config/config.yaml debug=True detailed_exit_codes=True safe_defaults=False use_nmstate=True
Oct  1 09:03:08 np0005464214 kernel: cfg80211: Loading compiled-in X.509 certificates for regulatory database
Oct  1 09:03:08 np0005464214 kernel: Loaded X.509 cert 'sforshee: 00b28ddf47aef9cea7'
Oct  1 09:03:08 np0005464214 kernel: Loaded X.509 cert 'wens: 61c038651aabdcf94bd0ac7ff06c7248db18c600'
Oct  1 09:03:08 np0005464214 kernel: platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
Oct  1 09:03:08 np0005464214 kernel: cfg80211: failed to load regulatory.db
Oct  1 09:03:09 np0005464214 NetworkManager[45411]: <info>  [1759323789.6237] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=48190 uid=0 result="success"
Oct  1 09:03:09 np0005464214 NetworkManager[45411]: <info>  [1759323789.6248] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=48190 uid=0 result="success"
Oct  1 09:03:09 np0005464214 NetworkManager[45411]: <info>  [1759323789.6631] manager: (br-ex): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/4)
Oct  1 09:03:09 np0005464214 NetworkManager[45411]: <info>  [1759323789.6632] audit: op="connection-add" uuid="576c0d87-205d-46e0-8925-225d5c4068f9" name="br-ex-br" pid=48190 uid=0 result="success"
Oct  1 09:03:09 np0005464214 NetworkManager[45411]: <info>  [1759323789.6644] manager: (br-ex): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/5)
Oct  1 09:03:09 np0005464214 NetworkManager[45411]: <info>  [1759323789.6646] audit: op="connection-add" uuid="7340bd2a-abdc-4ff8-9f99-ba0bb26a4521" name="br-ex-port" pid=48190 uid=0 result="success"
Oct  1 09:03:09 np0005464214 NetworkManager[45411]: <info>  [1759323789.6655] manager: (eth1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/6)
Oct  1 09:03:09 np0005464214 NetworkManager[45411]: <info>  [1759323789.6657] audit: op="connection-add" uuid="743cbaea-84dd-47fc-a646-eef99edaafb5" name="eth1-port" pid=48190 uid=0 result="success"
Oct  1 09:03:09 np0005464214 NetworkManager[45411]: <info>  [1759323789.6666] manager: (vlan20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/7)
Oct  1 09:03:09 np0005464214 NetworkManager[45411]: <info>  [1759323789.6668] audit: op="connection-add" uuid="071ca334-5b58-407b-9724-7af69cb2805e" name="vlan20-port" pid=48190 uid=0 result="success"
Oct  1 09:03:09 np0005464214 NetworkManager[45411]: <info>  [1759323789.6676] manager: (vlan21): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/8)
Oct  1 09:03:09 np0005464214 NetworkManager[45411]: <info>  [1759323789.6678] audit: op="connection-add" uuid="9f1b328e-d7e5-43f4-8310-a21d774abf3f" name="vlan21-port" pid=48190 uid=0 result="success"
Oct  1 09:03:09 np0005464214 NetworkManager[45411]: <info>  [1759323789.6686] manager: (vlan22): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/9)
Oct  1 09:03:09 np0005464214 NetworkManager[45411]: <info>  [1759323789.6689] audit: op="connection-add" uuid="c99f998e-eb1d-43fd-8389-67258c6b002f" name="vlan22-port" pid=48190 uid=0 result="success"
Oct  1 09:03:09 np0005464214 NetworkManager[45411]: <info>  [1759323789.6697] manager: (vlan23): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/10)
Oct  1 09:03:09 np0005464214 NetworkManager[45411]: <info>  [1759323789.6698] audit: op="connection-add" uuid="311c32b7-17ce-4024-a271-1b159ae741ec" name="vlan23-port" pid=48190 uid=0 result="success"
Oct  1 09:03:09 np0005464214 NetworkManager[45411]: <info>  [1759323789.6714] audit: op="connection-update" uuid="5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03" name="System eth0" args="802-3-ethernet.mtu,connection.autoconnect-priority,connection.timestamp,ipv6.method,ipv6.dhcp-timeout,ipv6.addr-gen-mode,ipv4.dhcp-client-id,ipv4.dhcp-timeout" pid=48190 uid=0 result="success"
Oct  1 09:03:09 np0005464214 NetworkManager[45411]: <info>  [1759323789.6726] manager: (br-ex): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/11)
Oct  1 09:03:09 np0005464214 NetworkManager[45411]: <info>  [1759323789.6728] audit: op="connection-add" uuid="40b1467f-e7fd-43dc-9b7f-ad129c590d00" name="br-ex-if" pid=48190 uid=0 result="success"
Oct  1 09:03:09 np0005464214 NetworkManager[45411]: <info>  [1759323789.6752] audit: op="connection-update" uuid="55fba695-b9ed-5ba6-ac3c-7f0c4ae7e99c" name="ci-private-network" args="connection.slave-type,connection.master,connection.port-type,connection.controller,connection.timestamp,ipv6.addresses,ipv6.method,ipv6.routing-rules,ipv6.addr-gen-mode,ipv6.dns,ipv6.routes,ovs-external-ids.data,ovs-interface.type,ipv4.addresses,ipv4.method,ipv4.routing-rules,ipv4.never-default,ipv4.dns,ipv4.routes" pid=48190 uid=0 result="success"
Oct  1 09:03:09 np0005464214 NetworkManager[45411]: <info>  [1759323789.6764] manager: (vlan20): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/12)
Oct  1 09:03:09 np0005464214 NetworkManager[45411]: <info>  [1759323789.6765] audit: op="connection-add" uuid="937e280a-d092-44bf-a162-4873dbffa638" name="vlan20-if" pid=48190 uid=0 result="success"
Oct  1 09:03:09 np0005464214 NetworkManager[45411]: <info>  [1759323789.6777] manager: (vlan21): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/13)
Oct  1 09:03:09 np0005464214 NetworkManager[45411]: <info>  [1759323789.6779] audit: op="connection-add" uuid="9b828b33-f4bb-4f80-9a32-10eb798ec1b4" name="vlan21-if" pid=48190 uid=0 result="success"
Oct  1 09:03:09 np0005464214 NetworkManager[45411]: <info>  [1759323789.6792] manager: (vlan22): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/14)
Oct  1 09:03:09 np0005464214 NetworkManager[45411]: <info>  [1759323789.6794] audit: op="connection-add" uuid="a931c5cd-1887-4874-909e-f77dd691887a" name="vlan22-if" pid=48190 uid=0 result="success"
Oct  1 09:03:09 np0005464214 NetworkManager[45411]: <info>  [1759323789.6806] manager: (vlan23): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/15)
Oct  1 09:03:09 np0005464214 NetworkManager[45411]: <info>  [1759323789.6807] audit: op="connection-add" uuid="fceb399c-689d-44a7-814e-0e134949fe2b" name="vlan23-if" pid=48190 uid=0 result="success"
Oct  1 09:03:09 np0005464214 NetworkManager[45411]: <info>  [1759323789.6816] audit: op="connection-delete" uuid="5676b0c3-8d77-3352-b8fd-5d58f5ca7d01" name="Wired connection 1" pid=48190 uid=0 result="success"
Oct  1 09:03:09 np0005464214 NetworkManager[45411]: <info>  [1759323789.6825] device (br-ex)[Open vSwitch Bridge]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct  1 09:03:09 np0005464214 NetworkManager[45411]: <info>  [1759323789.6835] device (br-ex)[Open vSwitch Bridge]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct  1 09:03:09 np0005464214 NetworkManager[45411]: <info>  [1759323789.6841] device (br-ex)[Open vSwitch Bridge]: Activation: starting connection 'br-ex-br' (576c0d87-205d-46e0-8925-225d5c4068f9)
Oct  1 09:03:09 np0005464214 NetworkManager[45411]: <info>  [1759323789.6842] audit: op="connection-activate" uuid="576c0d87-205d-46e0-8925-225d5c4068f9" name="br-ex-br" pid=48190 uid=0 result="success"
Oct  1 09:03:09 np0005464214 NetworkManager[45411]: <info>  [1759323789.6843] device (br-ex)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct  1 09:03:09 np0005464214 NetworkManager[45411]: <info>  [1759323789.6848] device (br-ex)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct  1 09:03:09 np0005464214 NetworkManager[45411]: <info>  [1759323789.6852] device (br-ex)[Open vSwitch Port]: Activation: starting connection 'br-ex-port' (7340bd2a-abdc-4ff8-9f99-ba0bb26a4521)
Oct  1 09:03:09 np0005464214 NetworkManager[45411]: <info>  [1759323789.6853] device (eth1)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct  1 09:03:09 np0005464214 NetworkManager[45411]: <info>  [1759323789.6858] device (eth1)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct  1 09:03:09 np0005464214 NetworkManager[45411]: <info>  [1759323789.6862] device (eth1)[Open vSwitch Port]: Activation: starting connection 'eth1-port' (743cbaea-84dd-47fc-a646-eef99edaafb5)
Oct  1 09:03:09 np0005464214 NetworkManager[45411]: <info>  [1759323789.6864] device (vlan20)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct  1 09:03:09 np0005464214 NetworkManager[45411]: <info>  [1759323789.6870] device (vlan20)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct  1 09:03:09 np0005464214 NetworkManager[45411]: <info>  [1759323789.6874] device (vlan20)[Open vSwitch Port]: Activation: starting connection 'vlan20-port' (071ca334-5b58-407b-9724-7af69cb2805e)
Oct  1 09:03:09 np0005464214 NetworkManager[45411]: <info>  [1759323789.6876] device (vlan21)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct  1 09:03:09 np0005464214 NetworkManager[45411]: <info>  [1759323789.6883] device (vlan21)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct  1 09:03:09 np0005464214 NetworkManager[45411]: <info>  [1759323789.6886] device (vlan21)[Open vSwitch Port]: Activation: starting connection 'vlan21-port' (9f1b328e-d7e5-43f4-8310-a21d774abf3f)
Oct  1 09:03:09 np0005464214 NetworkManager[45411]: <info>  [1759323789.6888] device (vlan22)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct  1 09:03:09 np0005464214 NetworkManager[45411]: <info>  [1759323789.6893] device (vlan22)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct  1 09:03:09 np0005464214 NetworkManager[45411]: <info>  [1759323789.6896] device (vlan22)[Open vSwitch Port]: Activation: starting connection 'vlan22-port' (c99f998e-eb1d-43fd-8389-67258c6b002f)
Oct  1 09:03:09 np0005464214 NetworkManager[45411]: <info>  [1759323789.6898] device (vlan23)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct  1 09:03:09 np0005464214 NetworkManager[45411]: <info>  [1759323789.6903] device (vlan23)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct  1 09:03:09 np0005464214 NetworkManager[45411]: <info>  [1759323789.6906] device (vlan23)[Open vSwitch Port]: Activation: starting connection 'vlan23-port' (311c32b7-17ce-4024-a271-1b159ae741ec)
Oct  1 09:03:09 np0005464214 NetworkManager[45411]: <info>  [1759323789.6907] device (br-ex)[Open vSwitch Bridge]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct  1 09:03:09 np0005464214 NetworkManager[45411]: <info>  [1759323789.6909] device (br-ex)[Open vSwitch Bridge]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct  1 09:03:09 np0005464214 NetworkManager[45411]: <info>  [1759323789.6911] device (br-ex)[Open vSwitch Bridge]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct  1 09:03:09 np0005464214 NetworkManager[45411]: <info>  [1759323789.6916] device (br-ex)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct  1 09:03:09 np0005464214 NetworkManager[45411]: <info>  [1759323789.6920] device (br-ex)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct  1 09:03:09 np0005464214 NetworkManager[45411]: <info>  [1759323789.6923] device (br-ex)[Open vSwitch Interface]: Activation: starting connection 'br-ex-if' (40b1467f-e7fd-43dc-9b7f-ad129c590d00)
Oct  1 09:03:09 np0005464214 NetworkManager[45411]: <info>  [1759323789.6924] device (br-ex)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct  1 09:03:09 np0005464214 NetworkManager[45411]: <info>  [1759323789.6926] device (br-ex)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct  1 09:03:09 np0005464214 NetworkManager[45411]: <info>  [1759323789.6928] device (br-ex)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct  1 09:03:09 np0005464214 NetworkManager[45411]: <info>  [1759323789.6929] device (br-ex)[Open vSwitch Port]: Activation: connection 'br-ex-port' attached as port, continuing activation
Oct  1 09:03:09 np0005464214 NetworkManager[45411]: <info>  [1759323789.6931] device (eth1): state change: activated -> deactivating (reason 'new-activation', managed-type: 'full')
Oct  1 09:03:09 np0005464214 NetworkManager[45411]: <info>  [1759323789.6939] device (eth1): disconnecting for new activation request.
Oct  1 09:03:09 np0005464214 NetworkManager[45411]: <info>  [1759323789.6940] device (eth1)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct  1 09:03:09 np0005464214 NetworkManager[45411]: <info>  [1759323789.6942] device (eth1)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct  1 09:03:09 np0005464214 NetworkManager[45411]: <info>  [1759323789.6944] device (eth1)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct  1 09:03:09 np0005464214 NetworkManager[45411]: <info>  [1759323789.6945] device (eth1)[Open vSwitch Port]: Activation: connection 'eth1-port' attached as port, continuing activation
Oct  1 09:03:09 np0005464214 NetworkManager[45411]: <info>  [1759323789.6947] device (vlan20)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct  1 09:03:09 np0005464214 NetworkManager[45411]: <info>  [1759323789.6951] device (vlan20)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct  1 09:03:09 np0005464214 NetworkManager[45411]: <info>  [1759323789.6955] device (vlan20)[Open vSwitch Interface]: Activation: starting connection 'vlan20-if' (937e280a-d092-44bf-a162-4873dbffa638)
Oct  1 09:03:09 np0005464214 NetworkManager[45411]: <info>  [1759323789.6956] device (vlan20)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct  1 09:03:09 np0005464214 NetworkManager[45411]: <info>  [1759323789.6958] device (vlan20)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct  1 09:03:09 np0005464214 NetworkManager[45411]: <info>  [1759323789.6960] device (vlan20)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct  1 09:03:09 np0005464214 NetworkManager[45411]: <info>  [1759323789.6961] device (vlan20)[Open vSwitch Port]: Activation: connection 'vlan20-port' attached as port, continuing activation
Oct  1 09:03:09 np0005464214 NetworkManager[45411]: <info>  [1759323789.6964] device (vlan21)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct  1 09:03:09 np0005464214 NetworkManager[45411]: <info>  [1759323789.6967] device (vlan21)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct  1 09:03:09 np0005464214 NetworkManager[45411]: <info>  [1759323789.6971] device (vlan21)[Open vSwitch Interface]: Activation: starting connection 'vlan21-if' (9b828b33-f4bb-4f80-9a32-10eb798ec1b4)
Oct  1 09:03:09 np0005464214 NetworkManager[45411]: <info>  [1759323789.6972] device (vlan21)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct  1 09:03:09 np0005464214 NetworkManager[45411]: <info>  [1759323789.6974] device (vlan21)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct  1 09:03:09 np0005464214 NetworkManager[45411]: <info>  [1759323789.6976] device (vlan21)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct  1 09:03:09 np0005464214 NetworkManager[45411]: <info>  [1759323789.6978] device (vlan21)[Open vSwitch Port]: Activation: connection 'vlan21-port' attached as port, continuing activation
Oct  1 09:03:09 np0005464214 NetworkManager[45411]: <info>  [1759323789.6980] device (vlan22)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct  1 09:03:09 np0005464214 NetworkManager[45411]: <info>  [1759323789.6984] device (vlan22)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct  1 09:03:09 np0005464214 NetworkManager[45411]: <info>  [1759323789.6987] device (vlan22)[Open vSwitch Interface]: Activation: starting connection 'vlan22-if' (a931c5cd-1887-4874-909e-f77dd691887a)
Oct  1 09:03:09 np0005464214 NetworkManager[45411]: <info>  [1759323789.6989] device (vlan22)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct  1 09:03:09 np0005464214 NetworkManager[45411]: <info>  [1759323789.6991] device (vlan22)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct  1 09:03:09 np0005464214 NetworkManager[45411]: <info>  [1759323789.6993] device (vlan22)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct  1 09:03:09 np0005464214 NetworkManager[45411]: <info>  [1759323789.6994] device (vlan22)[Open vSwitch Port]: Activation: connection 'vlan22-port' attached as port, continuing activation
Oct  1 09:03:09 np0005464214 NetworkManager[45411]: <info>  [1759323789.6996] device (vlan23)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct  1 09:03:09 np0005464214 NetworkManager[45411]: <info>  [1759323789.7001] device (vlan23)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct  1 09:03:09 np0005464214 NetworkManager[45411]: <info>  [1759323789.7004] device (vlan23)[Open vSwitch Interface]: Activation: starting connection 'vlan23-if' (fceb399c-689d-44a7-814e-0e134949fe2b)
Oct  1 09:03:09 np0005464214 NetworkManager[45411]: <info>  [1759323789.7005] device (vlan23)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct  1 09:03:09 np0005464214 NetworkManager[45411]: <info>  [1759323789.7008] device (vlan23)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct  1 09:03:09 np0005464214 NetworkManager[45411]: <info>  [1759323789.7010] device (vlan23)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct  1 09:03:09 np0005464214 NetworkManager[45411]: <info>  [1759323789.7012] device (vlan23)[Open vSwitch Port]: Activation: connection 'vlan23-port' attached as port, continuing activation
Oct  1 09:03:09 np0005464214 NetworkManager[45411]: <info>  [1759323789.7013] device (br-ex)[Open vSwitch Bridge]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct  1 09:03:09 np0005464214 NetworkManager[45411]: <info>  [1759323789.7022] audit: op="device-reapply" interface="eth0" ifindex=2 args="802-3-ethernet.mtu,connection.autoconnect-priority,ipv6.method,ipv6.addr-gen-mode,ipv4.dhcp-client-id,ipv4.dhcp-timeout" pid=48190 uid=0 result="success"
Oct  1 09:03:09 np0005464214 NetworkManager[45411]: <info>  [1759323789.7024] device (br-ex)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct  1 09:03:09 np0005464214 NetworkManager[45411]: <info>  [1759323789.7027] device (br-ex)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct  1 09:03:09 np0005464214 NetworkManager[45411]: <info>  [1759323789.7029] device (br-ex)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct  1 09:03:09 np0005464214 NetworkManager[45411]: <info>  [1759323789.7035] device (br-ex)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct  1 09:03:09 np0005464214 NetworkManager[45411]: <info>  [1759323789.7038] device (eth1)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct  1 09:03:09 np0005464214 NetworkManager[45411]: <info>  [1759323789.7041] device (vlan20)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct  1 09:03:09 np0005464214 NetworkManager[45411]: <info>  [1759323789.7044] device (vlan20)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct  1 09:03:09 np0005464214 NetworkManager[45411]: <info>  [1759323789.7046] device (vlan20)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct  1 09:03:09 np0005464214 NetworkManager[45411]: <info>  [1759323789.7050] device (vlan20)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct  1 09:03:09 np0005464214 NetworkManager[45411]: <info>  [1759323789.7054] device (vlan21)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct  1 09:03:09 np0005464214 kernel: ovs-system: entered promiscuous mode
Oct  1 09:03:09 np0005464214 NetworkManager[45411]: <info>  [1759323789.7058] device (vlan21)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct  1 09:03:09 np0005464214 NetworkManager[45411]: <info>  [1759323789.7059] device (vlan21)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct  1 09:03:09 np0005464214 NetworkManager[45411]: <info>  [1759323789.7063] device (vlan21)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct  1 09:03:09 np0005464214 NetworkManager[45411]: <info>  [1759323789.7066] device (vlan22)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct  1 09:03:09 np0005464214 systemd-udevd[48196]: Network interface NamePolicy= disabled on kernel command line.
Oct  1 09:03:09 np0005464214 NetworkManager[45411]: <info>  [1759323789.7069] device (vlan22)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct  1 09:03:09 np0005464214 NetworkManager[45411]: <info>  [1759323789.7070] device (vlan22)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct  1 09:03:09 np0005464214 NetworkManager[45411]: <info>  [1759323789.7075] device (vlan22)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct  1 09:03:09 np0005464214 kernel: Timeout policy base is empty
Oct  1 09:03:09 np0005464214 NetworkManager[45411]: <info>  [1759323789.7079] device (vlan23)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct  1 09:03:09 np0005464214 NetworkManager[45411]: <info>  [1759323789.7082] device (vlan23)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct  1 09:03:09 np0005464214 NetworkManager[45411]: <info>  [1759323789.7084] device (vlan23)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct  1 09:03:09 np0005464214 NetworkManager[45411]: <info>  [1759323789.7089] device (vlan23)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct  1 09:03:09 np0005464214 NetworkManager[45411]: <info>  [1759323789.7092] dhcp4 (eth0): canceled DHCP transaction
Oct  1 09:03:09 np0005464214 NetworkManager[45411]: <info>  [1759323789.7092] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Oct  1 09:03:09 np0005464214 NetworkManager[45411]: <info>  [1759323789.7092] dhcp4 (eth0): state changed no lease
Oct  1 09:03:09 np0005464214 systemd[1]: Starting Network Manager Script Dispatcher Service...
Oct  1 09:03:09 np0005464214 NetworkManager[45411]: <info>  [1759323789.7093] dhcp4 (eth0): activation: beginning transaction (no timeout)
Oct  1 09:03:09 np0005464214 NetworkManager[45411]: <info>  [1759323789.7103] device (br-ex)[Open vSwitch Interface]: Activation: connection 'br-ex-if' attached as port, continuing activation
Oct  1 09:03:09 np0005464214 NetworkManager[45411]: <info>  [1759323789.7107] audit: op="device-reapply" interface="eth1" ifindex=3 pid=48190 uid=0 result="fail" reason="Device is not activated"
Oct  1 09:03:09 np0005464214 NetworkManager[45411]: <info>  [1759323789.7142] device (vlan20)[Open vSwitch Interface]: Activation: connection 'vlan20-if' attached as port, continuing activation
Oct  1 09:03:09 np0005464214 NetworkManager[45411]: <info>  [1759323789.7149] device (vlan21)[Open vSwitch Interface]: Activation: connection 'vlan21-if' attached as port, continuing activation
Oct  1 09:03:09 np0005464214 NetworkManager[45411]: <info>  [1759323789.7155] device (eth1): disconnecting for new activation request.
Oct  1 09:03:09 np0005464214 NetworkManager[45411]: <info>  [1759323789.7156] audit: op="connection-activate" uuid="55fba695-b9ed-5ba6-ac3c-7f0c4ae7e99c" name="ci-private-network" pid=48190 uid=0 result="success"
Oct  1 09:03:09 np0005464214 NetworkManager[45411]: <info>  [1759323789.7157] device (vlan22)[Open vSwitch Interface]: Activation: connection 'vlan22-if' attached as port, continuing activation
Oct  1 09:03:09 np0005464214 NetworkManager[45411]: <info>  [1759323789.7161] dhcp4 (eth0): state changed new lease, address=38.102.83.245
Oct  1 09:03:09 np0005464214 NetworkManager[45411]: <info>  [1759323789.7164] device (vlan23)[Open vSwitch Interface]: Activation: connection 'vlan23-if' attached as port, continuing activation
Oct  1 09:03:09 np0005464214 systemd[1]: Started Network Manager Script Dispatcher Service.
Oct  1 09:03:09 np0005464214 NetworkManager[45411]: <info>  [1759323789.7227] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=48190 uid=0 result="success"
Oct  1 09:03:09 np0005464214 NetworkManager[45411]: <info>  [1759323789.7234] device (eth1): state change: deactivating -> disconnected (reason 'new-activation', managed-type: 'full')
Oct  1 09:03:09 np0005464214 NetworkManager[45411]: <info>  [1759323789.7346] device (eth1): Activation: starting connection 'ci-private-network' (55fba695-b9ed-5ba6-ac3c-7f0c4ae7e99c)
Oct  1 09:03:09 np0005464214 NetworkManager[45411]: <info>  [1759323789.7350] device (br-ex)[Open vSwitch Bridge]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct  1 09:03:09 np0005464214 NetworkManager[45411]: <info>  [1759323789.7356] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct  1 09:03:09 np0005464214 NetworkManager[45411]: <info>  [1759323789.7359] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Oct  1 09:03:09 np0005464214 kernel: br-ex: entered promiscuous mode
Oct  1 09:03:09 np0005464214 NetworkManager[45411]: <info>  [1759323789.7367] device (br-ex)[Open vSwitch Bridge]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct  1 09:03:09 np0005464214 NetworkManager[45411]: <info>  [1759323789.7371] device (br-ex)[Open vSwitch Bridge]: Activation: successful, device activated.
Oct  1 09:03:09 np0005464214 NetworkManager[45411]: <info>  [1759323789.7375] device (br-ex)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct  1 09:03:09 np0005464214 NetworkManager[45411]: <info>  [1759323789.7376] device (eth1)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct  1 09:03:09 np0005464214 NetworkManager[45411]: <info>  [1759323789.7377] device (vlan20)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct  1 09:03:09 np0005464214 NetworkManager[45411]: <info>  [1759323789.7379] device (vlan21)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct  1 09:03:09 np0005464214 NetworkManager[45411]: <info>  [1759323789.7380] device (vlan22)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct  1 09:03:09 np0005464214 NetworkManager[45411]: <info>  [1759323789.7381] device (vlan23)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct  1 09:03:09 np0005464214 NetworkManager[45411]: <info>  [1759323789.7393] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct  1 09:03:09 np0005464214 NetworkManager[45411]: <info>  [1759323789.7398] device (br-ex)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct  1 09:03:09 np0005464214 NetworkManager[45411]: <info>  [1759323789.7400] device (br-ex)[Open vSwitch Port]: Activation: successful, device activated.
Oct  1 09:03:09 np0005464214 NetworkManager[45411]: <info>  [1759323789.7403] device (eth1)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct  1 09:03:09 np0005464214 NetworkManager[45411]: <info>  [1759323789.7405] device (eth1)[Open vSwitch Port]: Activation: successful, device activated.
Oct  1 09:03:09 np0005464214 NetworkManager[45411]: <info>  [1759323789.7408] device (vlan20)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct  1 09:03:09 np0005464214 NetworkManager[45411]: <info>  [1759323789.7411] device (vlan20)[Open vSwitch Port]: Activation: successful, device activated.
Oct  1 09:03:09 np0005464214 NetworkManager[45411]: <info>  [1759323789.7413] device (vlan21)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct  1 09:03:09 np0005464214 NetworkManager[45411]: <info>  [1759323789.7415] device (vlan21)[Open vSwitch Port]: Activation: successful, device activated.
Oct  1 09:03:09 np0005464214 NetworkManager[45411]: <info>  [1759323789.7418] device (vlan22)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct  1 09:03:09 np0005464214 NetworkManager[45411]: <info>  [1759323789.7420] device (vlan22)[Open vSwitch Port]: Activation: successful, device activated.
Oct  1 09:03:09 np0005464214 NetworkManager[45411]: <info>  [1759323789.7422] device (vlan23)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct  1 09:03:09 np0005464214 NetworkManager[45411]: <info>  [1759323789.7425] device (vlan23)[Open vSwitch Port]: Activation: successful, device activated.
Oct  1 09:03:09 np0005464214 NetworkManager[45411]: <info>  [1759323789.7429] device (eth1): Activation: connection 'ci-private-network' attached as port, continuing activation
Oct  1 09:03:09 np0005464214 NetworkManager[45411]: <info>  [1759323789.7434] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct  1 09:03:09 np0005464214 kernel: vlan22: entered promiscuous mode
Oct  1 09:03:09 np0005464214 systemd-udevd[48194]: Network interface NamePolicy= disabled on kernel command line.
Oct  1 09:03:09 np0005464214 kernel: vlan21: entered promiscuous mode
Oct  1 09:03:09 np0005464214 systemd-udevd[48195]: Network interface NamePolicy= disabled on kernel command line.
Oct  1 09:03:09 np0005464214 NetworkManager[45411]: <info>  [1759323789.7495] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct  1 09:03:09 np0005464214 NetworkManager[45411]: <info>  [1759323789.7497] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct  1 09:03:09 np0005464214 NetworkManager[45411]: <info>  [1759323789.7514] device (eth1): Activation: successful, device activated.
Oct  1 09:03:09 np0005464214 NetworkManager[45411]: <info>  [1759323789.7523] device (br-ex)[Open vSwitch Interface]: carrier: link connected
Oct  1 09:03:09 np0005464214 NetworkManager[45411]: <info>  [1759323789.7533] device (br-ex)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct  1 09:03:09 np0005464214 kernel: vlan20: entered promiscuous mode
Oct  1 09:03:09 np0005464214 NetworkManager[45411]: <info>  [1759323789.7563] device (br-ex)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct  1 09:03:09 np0005464214 NetworkManager[45411]: <info>  [1759323789.7564] device (br-ex)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct  1 09:03:09 np0005464214 NetworkManager[45411]: <info>  [1759323789.7568] device (br-ex)[Open vSwitch Interface]: Activation: successful, device activated.
Oct  1 09:03:09 np0005464214 kernel: vlan23: entered promiscuous mode
Oct  1 09:03:09 np0005464214 NetworkManager[45411]: <info>  [1759323789.7613] device (vlan21)[Open vSwitch Interface]: carrier: link connected
Oct  1 09:03:09 np0005464214 NetworkManager[45411]: <info>  [1759323789.7618] device (vlan22)[Open vSwitch Interface]: carrier: link connected
Oct  1 09:03:09 np0005464214 NetworkManager[45411]: <info>  [1759323789.7633] device (vlan21)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct  1 09:03:09 np0005464214 NetworkManager[45411]: <info>  [1759323789.7638] device (vlan22)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct  1 09:03:09 np0005464214 kernel: virtio_net virtio5 eth1: entered promiscuous mode
Oct  1 09:03:09 np0005464214 NetworkManager[45411]: <info>  [1759323789.7680] device (vlan21)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct  1 09:03:09 np0005464214 NetworkManager[45411]: <info>  [1759323789.7681] device (vlan22)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct  1 09:03:09 np0005464214 NetworkManager[45411]: <info>  [1759323789.7682] device (vlan21)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct  1 09:03:09 np0005464214 NetworkManager[45411]: <info>  [1759323789.7686] device (vlan21)[Open vSwitch Interface]: Activation: successful, device activated.
Oct  1 09:03:09 np0005464214 NetworkManager[45411]: <info>  [1759323789.7690] device (vlan22)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct  1 09:03:09 np0005464214 NetworkManager[45411]: <info>  [1759323789.7702] device (vlan22)[Open vSwitch Interface]: Activation: successful, device activated.
Oct  1 09:03:09 np0005464214 NetworkManager[45411]: <info>  [1759323789.7716] device (vlan23)[Open vSwitch Interface]: carrier: link connected
Oct  1 09:03:09 np0005464214 NetworkManager[45411]: <info>  [1759323789.7719] device (vlan20)[Open vSwitch Interface]: carrier: link connected
Oct  1 09:03:09 np0005464214 NetworkManager[45411]: <info>  [1759323789.7738] device (vlan23)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct  1 09:03:09 np0005464214 NetworkManager[45411]: <info>  [1759323789.7746] device (vlan20)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct  1 09:03:09 np0005464214 NetworkManager[45411]: <info>  [1759323789.7792] device (vlan23)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct  1 09:03:09 np0005464214 NetworkManager[45411]: <info>  [1759323789.7793] device (vlan20)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct  1 09:03:09 np0005464214 NetworkManager[45411]: <info>  [1759323789.7794] device (vlan23)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct  1 09:03:09 np0005464214 NetworkManager[45411]: <info>  [1759323789.7801] device (vlan23)[Open vSwitch Interface]: Activation: successful, device activated.
Oct  1 09:03:09 np0005464214 NetworkManager[45411]: <info>  [1759323789.7808] device (vlan20)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct  1 09:03:09 np0005464214 NetworkManager[45411]: <info>  [1759323789.7814] device (vlan20)[Open vSwitch Interface]: Activation: successful, device activated.
Oct  1 09:03:10 np0005464214 NetworkManager[45411]: <info>  [1759323790.9132] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=48190 uid=0 result="success"
Oct  1 09:03:11 np0005464214 NetworkManager[45411]: <info>  [1759323791.0998] checkpoint[0x55e97e400950]: destroy /org/freedesktop/NetworkManager/Checkpoint/1
Oct  1 09:03:11 np0005464214 NetworkManager[45411]: <info>  [1759323791.1000] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=48190 uid=0 result="success"
Oct  1 09:03:11 np0005464214 NetworkManager[45411]: <info>  [1759323791.3771] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=48190 uid=0 result="success"
Oct  1 09:03:11 np0005464214 NetworkManager[45411]: <info>  [1759323791.3779] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=48190 uid=0 result="success"
Oct  1 09:03:11 np0005464214 NetworkManager[45411]: <info>  [1759323791.5466] audit: op="networking-control" arg="global-dns-configuration" pid=48190 uid=0 result="success"
Oct  1 09:03:11 np0005464214 NetworkManager[45411]: <info>  [1759323791.5497] config: signal: SET_VALUES,values,values-intern,global-dns-config (/etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf)
Oct  1 09:03:11 np0005464214 NetworkManager[45411]: <info>  [1759323791.5521] audit: op="networking-control" arg="global-dns-configuration" pid=48190 uid=0 result="success"
Oct  1 09:03:11 np0005464214 NetworkManager[45411]: <info>  [1759323791.5685] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=48190 uid=0 result="success"
Oct  1 09:03:11 np0005464214 python3.9[48552]: ansible-ansible.legacy.async_status Invoked with jid=j566808765024.48184 mode=status _async_dir=/root/.ansible_async
Oct  1 09:03:11 np0005464214 NetworkManager[45411]: <info>  [1759323791.7117] checkpoint[0x55e97e400a20]: destroy /org/freedesktop/NetworkManager/Checkpoint/2
Oct  1 09:03:11 np0005464214 NetworkManager[45411]: <info>  [1759323791.7121] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=48190 uid=0 result="success"
Oct  1 09:03:11 np0005464214 ansible-async_wrapper.py[48188]: Module complete (48188)
Oct  1 09:03:12 np0005464214 ansible-async_wrapper.py[48187]: Done in kid B.
Oct  1 09:03:15 np0005464214 python3.9[48659]: ansible-ansible.legacy.async_status Invoked with jid=j566808765024.48184 mode=status _async_dir=/root/.ansible_async
Oct  1 09:03:15 np0005464214 python3.9[48758]: ansible-ansible.legacy.async_status Invoked with jid=j566808765024.48184 mode=cleanup _async_dir=/root/.ansible_async
Oct  1 09:03:16 np0005464214 python3.9[48910]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 09:03:17 np0005464214 python3.9[49033]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/os-net-config.returncode mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759323795.9362414-322-232223844582217/.source.returncode _original_basename=.5k934zip follow=False checksum=b6589fc6ab0dc82cf12099d1c2d40ab994e8410c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 09:03:17 np0005464214 python3.9[49185]: ansible-ansible.legacy.stat Invoked with path=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 09:03:18 np0005464214 python3.9[49308]: ansible-ansible.legacy.copy Invoked with dest=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759323797.230515-338-126594682873439/.source.cfg _original_basename=.w8moerjk follow=False checksum=f3c5952a9cd4c6c31b314b25eb897168971cc86e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 09:03:18 np0005464214 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Oct  1 09:03:19 np0005464214 python3.9[49464]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct  1 09:03:19 np0005464214 systemd[1]: Reloading Network Manager...
Oct  1 09:03:19 np0005464214 NetworkManager[45411]: <info>  [1759323799.2860] audit: op="reload" arg="0" pid=49468 uid=0 result="success"
Oct  1 09:03:19 np0005464214 NetworkManager[45411]: <info>  [1759323799.2866] config: signal: SIGHUP,config-files,values,values-user,no-auto-default (/etc/NetworkManager/NetworkManager.conf, /usr/lib/NetworkManager/conf.d/00-server.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf, /var/lib/NetworkManager/NetworkManager-intern.conf)
Oct  1 09:03:19 np0005464214 systemd[1]: Reloaded Network Manager.
Oct  1 09:03:19 np0005464214 systemd[1]: session-9.scope: Deactivated successfully.
Oct  1 09:03:19 np0005464214 systemd[1]: session-9.scope: Consumed 50.047s CPU time.
Oct  1 09:03:19 np0005464214 systemd-logind[818]: Session 9 logged out. Waiting for processes to exit.
Oct  1 09:03:19 np0005464214 systemd-logind[818]: Removed session 9.
Oct  1 09:03:24 np0005464214 systemd-logind[818]: New session 10 of user zuul.
Oct  1 09:03:24 np0005464214 systemd[1]: Started Session 10 of User zuul.
Oct  1 09:03:25 np0005464214 python3.9[49653]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  1 09:03:26 np0005464214 python3.9[49807]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct  1 09:03:28 np0005464214 python3.9[50001]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  1 09:03:28 np0005464214 systemd-logind[818]: Session 10 logged out. Waiting for processes to exit.
Oct  1 09:03:28 np0005464214 systemd[1]: session-10.scope: Deactivated successfully.
Oct  1 09:03:28 np0005464214 systemd[1]: session-10.scope: Consumed 2.530s CPU time.
Oct  1 09:03:28 np0005464214 systemd-logind[818]: Removed session 10.
Oct  1 09:03:29 np0005464214 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Oct  1 09:03:33 np0005464214 systemd-logind[818]: New session 11 of user zuul.
Oct  1 09:03:33 np0005464214 systemd[1]: Started Session 11 of User zuul.
Oct  1 09:03:34 np0005464214 python3.9[50183]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  1 09:03:35 np0005464214 python3.9[50339]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  1 09:03:36 np0005464214 python3.9[50495]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct  1 09:03:37 np0005464214 python3.9[50582]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct  1 09:03:39 np0005464214 python3.9[50735]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct  1 09:03:40 np0005464214 python3.9[50931]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 09:03:41 np0005464214 python3.9[51083]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  1 09:03:41 np0005464214 systemd[1]: var-lib-containers-storage-overlay-compat1757742756-merged.mount: Deactivated successfully.
Oct  1 09:03:41 np0005464214 systemd[1]: var-lib-containers-storage-overlay-metacopy\x2dcheck1787590415-merged.mount: Deactivated successfully.
Oct  1 09:03:41 np0005464214 podman[51084]: 2025-10-01 13:03:41.645788654 +0000 UTC m=+0.057592824 system refresh
Oct  1 09:03:42 np0005464214 python3.9[51246]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 09:03:42 np0005464214 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct  1 09:03:43 np0005464214 python3.9[51369]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/networks/podman.json group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759323821.8756905-79-96361769066963/.source.json follow=False _original_basename=podman_network_config.j2 checksum=ccae831033b5b85a94db60a554cc1970129a9c74 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 09:03:43 np0005464214 python3.9[51521]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 09:03:44 np0005464214 python3.9[51644]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759323823.4099803-94-54161465617170/.source.conf follow=False _original_basename=registries.conf.j2 checksum=c2a85b7389d30a5066b1ae0058c9a8ae1bc25688 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct  1 09:03:45 np0005464214 python3.9[51796]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Oct  1 09:03:45 np0005464214 python3.9[51948]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Oct  1 09:03:46 np0005464214 python3.9[52100]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Oct  1 09:03:47 np0005464214 python3.9[52252]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Oct  1 09:03:47 np0005464214 python3.9[52404]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct  1 09:03:49 np0005464214 python3.9[52557]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  1 09:03:50 np0005464214 python3.9[52711]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  1 09:03:51 np0005464214 python3.9[52863]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  1 09:03:51 np0005464214 python3.9[53015]: ansible-service_facts Invoked
Oct  1 09:03:52 np0005464214 network[53032]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct  1 09:03:52 np0005464214 network[53033]: 'network-scripts' will be removed from distribution in near future.
Oct  1 09:03:52 np0005464214 network[53034]: It is advised to switch to 'NetworkManager' instead for network management.
Oct  1 09:03:56 np0005464214 python3.9[53488]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct  1 09:03:58 np0005464214 python3.9[53641]: ansible-package_facts Invoked with manager=['auto'] strategy=first
Oct  1 09:04:00 np0005464214 python3.9[53793]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 09:04:00 np0005464214 python3.9[53918]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/chrony.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759323839.5770307-226-159997058782265/.source.conf follow=False _original_basename=chrony.conf.j2 checksum=cfb003e56d02d0d2c65555452eb1a05073fecdad force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 09:04:01 np0005464214 python3.9[54072]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 09:04:02 np0005464214 python3.9[54197]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/sysconfig/chronyd mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759323840.9632306-241-276972557380664/.source follow=False _original_basename=chronyd.sysconfig.j2 checksum=dd196b1ff1f915b23eebc37ec77405b5dd3df76c force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 09:04:03 np0005464214 python3.9[54351]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 09:04:04 np0005464214 python3.9[54505]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct  1 09:04:05 np0005464214 python3.9[54589]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  1 09:04:06 np0005464214 python3.9[54743]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct  1 09:04:07 np0005464214 python3.9[54827]: ansible-ansible.legacy.systemd Invoked with name=chronyd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct  1 09:04:07 np0005464214 chronyd[828]: chronyd exiting
Oct  1 09:04:07 np0005464214 systemd[1]: Stopping NTP client/server...
Oct  1 09:04:07 np0005464214 systemd[1]: chronyd.service: Deactivated successfully.
Oct  1 09:04:07 np0005464214 systemd[1]: Stopped NTP client/server.
Oct  1 09:04:07 np0005464214 systemd[1]: Starting NTP client/server...
Oct  1 09:04:07 np0005464214 chronyd[54836]: chronyd version 4.6.1 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +ASYNCDNS +NTS +SECHASH +IPV6 +DEBUG)
Oct  1 09:04:07 np0005464214 chronyd[54836]: Frequency -32.086 +/- 0.168 ppm read from /var/lib/chrony/drift
Oct  1 09:04:07 np0005464214 chronyd[54836]: Loaded seccomp filter (level 2)
Oct  1 09:04:07 np0005464214 systemd[1]: Started NTP client/server.
Oct  1 09:04:07 np0005464214 systemd[1]: session-11.scope: Deactivated successfully.
Oct  1 09:04:07 np0005464214 systemd[1]: session-11.scope: Consumed 25.140s CPU time.
Oct  1 09:04:07 np0005464214 systemd-logind[818]: Session 11 logged out. Waiting for processes to exit.
Oct  1 09:04:07 np0005464214 systemd-logind[818]: Removed session 11.
Oct  1 09:04:13 np0005464214 systemd-logind[818]: New session 12 of user zuul.
Oct  1 09:04:13 np0005464214 systemd[1]: Started Session 12 of User zuul.
Oct  1 09:04:14 np0005464214 python3.9[55020]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 09:04:15 np0005464214 python3.9[55172]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/ceph-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 09:04:16 np0005464214 python3.9[55295]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/ceph-networks.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759323854.9604046-34-61455332101926/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=729ea8396013e3343245d6e934e0dcef55029ad2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 09:04:16 np0005464214 systemd[1]: session-12.scope: Deactivated successfully.
Oct  1 09:04:16 np0005464214 systemd[1]: session-12.scope: Consumed 1.827s CPU time.
Oct  1 09:04:16 np0005464214 systemd-logind[818]: Session 12 logged out. Waiting for processes to exit.
Oct  1 09:04:16 np0005464214 systemd-logind[818]: Removed session 12.
Oct  1 09:04:22 np0005464214 systemd-logind[818]: New session 13 of user zuul.
Oct  1 09:04:22 np0005464214 systemd[1]: Started Session 13 of User zuul.
Oct  1 09:04:23 np0005464214 python3.9[55475]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  1 09:04:24 np0005464214 python3.9[55631]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 09:04:25 np0005464214 python3.9[55806]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 09:04:26 np0005464214 python3.9[55929]: ansible-ansible.legacy.copy Invoked with dest=/root/.config/containers/auth.json group=zuul mode=0660 owner=zuul src=/home/zuul/.ansible/tmp/ansible-tmp-1759323864.6009007-41-119665862162150/.source.json _original_basename=.7syhzya4 follow=False checksum=bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 09:04:26 np0005464214 python3.9[56081]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 09:04:27 np0005464214 python3.9[56204]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysconfig/podman_drop_in mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759323866.3910127-64-145600960011233/.source _original_basename=.jtwhrk_o follow=False checksum=125299ce8dea7711a76292961206447f0043248b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 09:04:28 np0005464214 python3.9[56356]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  1 09:04:28 np0005464214 python3.9[56508]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 09:04:29 np0005464214 python3.9[56631]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-container-shutdown group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759323868.19163-88-199167718369256/.source _original_basename=edpm-container-shutdown follow=False checksum=632c3792eb3dce4288b33ae7b265b71950d69f13 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct  1 09:04:29 np0005464214 python3.9[56785]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 09:04:30 np0005464214 python3.9[56908]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-start-podman-container group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759323869.4385297-88-245721431231270/.source _original_basename=edpm-start-podman-container follow=False checksum=b963c569d75a655c0ccae95d9bb4a2a9a4df27d1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct  1 09:04:31 np0005464214 python3.9[57060]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 09:04:31 np0005464214 python3.9[57212]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 09:04:32 np0005464214 python3.9[57335]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm-container-shutdown.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759323871.4139986-125-144503824893761/.source.service _original_basename=edpm-container-shutdown-service follow=False checksum=6336835cb0f888670cc99de31e19c8c071444d33 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 09:04:33 np0005464214 python3.9[57487]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 09:04:33 np0005464214 python3.9[57610]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759323872.778861-140-150938901457922/.source.preset _original_basename=91-edpm-container-shutdown-preset follow=False checksum=b275e4375287528cb63464dd32f622c4f142a915 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 09:04:35 np0005464214 python3.9[57762]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  1 09:04:35 np0005464214 systemd[1]: Reloading.
Oct  1 09:04:35 np0005464214 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  1 09:04:35 np0005464214 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  1 09:04:35 np0005464214 systemd[1]: Reloading.
Oct  1 09:04:35 np0005464214 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  1 09:04:35 np0005464214 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  1 09:04:35 np0005464214 systemd[1]: Starting EDPM Container Shutdown...
Oct  1 09:04:35 np0005464214 systemd[1]: Finished EDPM Container Shutdown.
Oct  1 09:04:36 np0005464214 python3.9[57990]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 09:04:36 np0005464214 python3.9[58113]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/netns-placeholder.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759323875.8936465-163-215856820675543/.source.service _original_basename=netns-placeholder-service follow=False checksum=b61b1b5918c20c877b8b226fbf34ff89a082d972 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 09:04:37 np0005464214 python3.9[58265]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 09:04:38 np0005464214 python3.9[58388]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-netns-placeholder.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759323877.1891577-178-124389488367802/.source.preset _original_basename=91-netns-placeholder-preset follow=False checksum=28b7b9aa893525d134a1eeda8a0a48fb25b736b9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 09:04:39 np0005464214 python3.9[58540]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  1 09:04:39 np0005464214 systemd[1]: Reloading.
Oct  1 09:04:39 np0005464214 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  1 09:04:39 np0005464214 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  1 09:04:39 np0005464214 systemd[1]: Reloading.
Oct  1 09:04:39 np0005464214 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  1 09:04:39 np0005464214 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  1 09:04:39 np0005464214 systemd[1]: Starting Create netns directory...
Oct  1 09:04:39 np0005464214 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Oct  1 09:04:39 np0005464214 systemd[1]: netns-placeholder.service: Deactivated successfully.
Oct  1 09:04:39 np0005464214 systemd[1]: Finished Create netns directory.
Oct  1 09:04:40 np0005464214 python3.9[58766]: ansible-ansible.builtin.service_facts Invoked
Oct  1 09:04:40 np0005464214 network[58783]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct  1 09:04:40 np0005464214 network[58784]: 'network-scripts' will be removed from distribution in near future.
Oct  1 09:04:40 np0005464214 network[58785]: It is advised to switch to 'NetworkManager' instead for network management.
Oct  1 09:04:46 np0005464214 python3.9[59049]: ansible-ansible.builtin.systemd Invoked with enabled=False name=iptables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  1 09:04:46 np0005464214 systemd[1]: Reloading.
Oct  1 09:04:46 np0005464214 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  1 09:04:46 np0005464214 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  1 09:04:46 np0005464214 systemd[1]: Stopping IPv4 firewall with iptables...
Oct  1 09:04:46 np0005464214 iptables.init[59091]: iptables: Setting chains to policy ACCEPT: raw mangle filter nat [  OK  ]
Oct  1 09:04:46 np0005464214 iptables.init[59091]: iptables: Flushing firewall rules: [  OK  ]
Oct  1 09:04:46 np0005464214 systemd[1]: iptables.service: Deactivated successfully.
Oct  1 09:04:46 np0005464214 systemd[1]: Stopped IPv4 firewall with iptables.
Oct  1 09:04:47 np0005464214 python3.9[59287]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ip6tables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  1 09:04:48 np0005464214 python3.9[59441]: ansible-ansible.builtin.systemd Invoked with enabled=True name=nftables state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  1 09:04:48 np0005464214 systemd[1]: Reloading.
Oct  1 09:04:48 np0005464214 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  1 09:04:48 np0005464214 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  1 09:04:48 np0005464214 systemd[1]: Starting Netfilter Tables...
Oct  1 09:04:48 np0005464214 systemd[1]: Finished Netfilter Tables.
Oct  1 09:04:49 np0005464214 python3.9[59633]: ansible-ansible.legacy.command Invoked with _raw_params=nft flush ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  1 09:04:50 np0005464214 python3.9[59786]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 09:04:51 np0005464214 python3.9[59911]: ansible-ansible.legacy.copy Invoked with dest=/etc/ssh/sshd_config mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1759323890.338363-247-271821978966535/.source validate=/usr/sbin/sshd -T -f %s follow=False _original_basename=sshd_config_block.j2 checksum=4729b6ffc5b555fa142bf0b6e6dc15609cb89a22 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 09:04:52 np0005464214 python3.9[60062]: ansible-ansible.builtin.systemd Invoked with name=sshd state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct  1 09:05:17 np0005464214 systemd[1]: session-13.scope: Deactivated successfully.
Oct  1 09:05:17 np0005464214 systemd[1]: session-13.scope: Consumed 20.141s CPU time.
Oct  1 09:05:17 np0005464214 systemd-logind[818]: Session 13 logged out. Waiting for processes to exit.
Oct  1 09:05:17 np0005464214 systemd-logind[818]: Removed session 13.
Oct  1 09:05:30 np0005464214 systemd-logind[818]: New session 14 of user zuul.
Oct  1 09:05:30 np0005464214 systemd[1]: Started Session 14 of User zuul.
Oct  1 09:05:31 np0005464214 python3.9[60259]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  1 09:05:32 np0005464214 python3.9[60415]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 09:05:33 np0005464214 python3.9[60590]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 09:05:33 np0005464214 python3.9[60670]: ansible-ansible.legacy.file Invoked with group=zuul mode=0660 owner=zuul dest=/root/.config/containers/auth.json _original_basename=.sjfer7ai recurse=False state=file path=/root/.config/containers/auth.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 09:05:34 np0005464214 python3.9[60822]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 09:05:35 np0005464214 python3.9[60900]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/sysconfig/podman_drop_in _original_basename=.zjkyyie6 recurse=False state=file path=/etc/sysconfig/podman_drop_in force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 09:05:35 np0005464214 python3.9[61052]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  1 09:05:36 np0005464214 python3.9[61204]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 09:05:37 np0005464214 python3.9[61282]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  1 09:05:37 np0005464214 python3.9[61434]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 09:05:38 np0005464214 python3.9[61512]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  1 09:05:39 np0005464214 python3.9[61664]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 09:05:39 np0005464214 python3.9[61816]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 09:05:40 np0005464214 python3.9[61894]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 09:05:40 np0005464214 python3.9[62046]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 09:05:41 np0005464214 python3.9[62124]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 09:05:42 np0005464214 python3.9[62276]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  1 09:05:42 np0005464214 systemd[1]: Reloading.
Oct  1 09:05:42 np0005464214 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  1 09:05:42 np0005464214 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  1 09:05:44 np0005464214 python3.9[62464]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 09:05:45 np0005464214 python3.9[62542]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 09:05:45 np0005464214 python3.9[62694]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 09:05:46 np0005464214 python3.9[62772]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 09:05:46 np0005464214 python3.9[62924]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  1 09:05:46 np0005464214 systemd[1]: Reloading.
Oct  1 09:05:47 np0005464214 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  1 09:05:47 np0005464214 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  1 09:05:47 np0005464214 systemd[1]: Starting Create netns directory...
Oct  1 09:05:47 np0005464214 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Oct  1 09:05:47 np0005464214 systemd[1]: netns-placeholder.service: Deactivated successfully.
Oct  1 09:05:47 np0005464214 systemd[1]: Finished Create netns directory.
Oct  1 09:05:48 np0005464214 python3.9[63114]: ansible-ansible.builtin.service_facts Invoked
Oct  1 09:05:48 np0005464214 network[63131]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct  1 09:05:48 np0005464214 network[63132]: 'network-scripts' will be removed from distribution in near future.
Oct  1 09:05:48 np0005464214 network[63133]: It is advised to switch to 'NetworkManager' instead for network management.
Oct  1 09:05:53 np0005464214 python3.9[63396]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 09:05:54 np0005464214 python3.9[63474]: ansible-ansible.legacy.file Invoked with mode=0600 dest=/etc/ssh/sshd_config _original_basename=sshd_config_block.j2 recurse=False state=file path=/etc/ssh/sshd_config force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 09:05:54 np0005464214 python3.9[63626]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 09:05:55 np0005464214 python3.9[63778]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 09:05:56 np0005464214 python3.9[63901]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/sshd-networks.yaml group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759323955.219621-216-145922568579000/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=0bfc8440fd8f39002ab90252479fb794f51b5ae8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 09:05:57 np0005464214 python3.9[64053]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Oct  1 09:05:57 np0005464214 systemd[1]: Starting Time & Date Service...
Oct  1 09:05:57 np0005464214 systemd[1]: Started Time & Date Service.
Oct  1 09:05:58 np0005464214 python3.9[64209]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 09:05:59 np0005464214 python3.9[64361]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 09:05:59 np0005464214 python3.9[64486]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759323958.7204382-251-26978613087359/.source.yaml follow=False _original_basename=base-rules.yaml.j2 checksum=450456afcafded6d4bdecceec7a02e806eebd8b3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 09:06:00 np0005464214 python3.9[64638]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 09:06:01 np0005464214 python3.9[64761]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759323960.2547834-266-251211334890083/.source.yaml _original_basename=.av8rd0yi follow=False checksum=97d170e1550eee4afc0af065b78cda302a97674c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 09:06:02 np0005464214 python3.9[64913]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 09:06:02 np0005464214 python3.9[65036]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/iptables.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759323961.6047213-281-161598109529113/.source.nft _original_basename=iptables.nft follow=False checksum=3e02df08f1f3ab4a513e94056dbd390e3d38fe30 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 09:06:03 np0005464214 python3.9[65188]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/iptables.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  1 09:06:04 np0005464214 python3.9[65341]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
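The `nft -j list ruleset` call above emits the ruleset in the libnftables JSON format: a single object whose `"nftables"` array holds one object per entity (`metainfo`, `table`, `chain`, `rule`, ...). A minimal sketch of consuming that output, using a synthetic, abridged ruleset (the real output on this host would be far larger):

```python
import json

# Synthetic, abridged stand-in for `nft -j list ruleset` output.
# Real output is one {"nftables": [...]} object; entity names below
# (EDPM_INPUT) are illustrative assumptions, not taken from this host.
raw = """
{"nftables": [
  {"metainfo": {"version": "1.0.4", "json_schema_version": 1}},
  {"table": {"family": "inet", "name": "filter", "handle": 1}},
  {"chain": {"family": "inet", "table": "filter", "name": "EDPM_INPUT", "handle": 2}}
]}
"""

ruleset = json.loads(raw)

# Each array element is a single-key object; filter by entity type.
tables = [o["table"]["name"] for o in ruleset["nftables"] if "table" in o]
chains = [o["chain"]["name"] for o in ruleset["nftables"] if "chain" in o]
print(tables, chains)
```

Filtering on the single key per element is the idiomatic way to walk this schema, since entity types are interleaved in one flat array.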
Oct  1 09:06:05 np0005464214 python3[65494]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Oct  1 09:06:06 np0005464214 python3.9[65646]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 09:06:06 np0005464214 python3.9[65769]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759323965.712362-320-77308118860645/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 09:06:07 np0005464214 python3.9[65921]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 09:06:08 np0005464214 python3.9[66044]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759323966.9853039-335-66973344245041/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 09:06:08 np0005464214 python3.9[66196]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 09:06:09 np0005464214 python3.9[66319]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759323968.3360667-350-234888225112680/.source.nft follow=False _original_basename=flush-chain.j2 checksum=d16337256a56373421842284fe09e4e6c7df417e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 09:06:10 np0005464214 python3.9[66471]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 09:06:10 np0005464214 python3.9[66594]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759323969.7851064-365-28532846496043/.source.nft follow=False _original_basename=chains.j2 checksum=2079f3b60590a165d1d502e763170876fc8e2984 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 09:06:11 np0005464214 python3.9[66746]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 09:06:12 np0005464214 python3.9[66869]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759323971.1457484-380-80316002628247/.source.nft follow=False _original_basename=ruleset.j2 checksum=693377dc03e5b6b24713cb537b18b88774724e35 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 09:06:12 np0005464214 python3.9[67023]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 09:06:13 np0005464214 python3.9[67175]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  1 09:06:14 np0005464214 python3.9[67334]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
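The `#012` sequences in the `blockinfile` record above are rsyslog's octal escaping of control characters (`012` octal = newline), applied so each journal record stays on one line. A small decoder, assuming only well-formed three-digit octal escapes:

```python
import re

def unescape_rsyslog(s: str) -> str:
    """Decode rsyslog #NNN octal escapes, e.g. #012 -> '\\n', #011 -> '\\t'."""
    return re.sub(r"#([0-7]{3})", lambda m: chr(int(m.group(1), 8)), s)

# The block written to /etc/sysconfig/nftables.conf, as escaped in the log:
block = ('include "/etc/nftables/iptables.nft"#012'
         'include "/etc/nftables/edpm-chains.nft"#012'
         'include "/etc/nftables/edpm-rules.nft"#012'
         'include "/etc/nftables/edpm-jumps.nft"#012')
print(unescape_rsyslog(block))
```

Restricting the pattern to `[0-7]{3}` leaves literal `#` characters that are not part of an octal escape untouched.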
Oct  1 09:06:15 np0005464214 python3.9[67487]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 09:06:16 np0005464214 python3.9[67639]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 09:06:17 np0005464214 python3.9[67791]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Oct  1 09:06:17 np0005464214 python3.9[67944]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
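With `state=mounted` and `boot=True`, `ansible.posix.mount` both mounts the filesystem and persists an fstab entry. A sketch of the resulting line, assuming the standard six-field fstab(5) layout (the module's exact serialization may differ in whitespace):

```python
def fstab_entry(src: str, path: str, fstype: str, opts: str,
                dump: int = 0, passno: int = 0) -> str:
    # Field order per fstab(5): fs_spec fs_file fs_vfstype fs_mntops fs_freq fs_passno
    return f"{src} {path} {fstype} {opts} {dump} {passno}"

# The two hugetlbfs mounts requested above:
for path, opts in [("/dev/hugepages1G", "pagesize=1G"),
                   ("/dev/hugepages2M", "pagesize=2M")]:
    print(fstab_entry("none", path, "hugetlbfs", opts))
```

`src=none` matches the log: hugetlbfs is a pseudo-filesystem with no backing device, so the first field is a placeholder.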
Oct  1 09:06:18 np0005464214 systemd[1]: session-14.scope: Deactivated successfully.
Oct  1 09:06:18 np0005464214 systemd[1]: session-14.scope: Consumed 34.566s CPU time.
Oct  1 09:06:18 np0005464214 systemd-logind[818]: Session 14 logged out. Waiting for processes to exit.
Oct  1 09:06:18 np0005464214 systemd-logind[818]: Removed session 14.
Oct  1 09:06:23 np0005464214 systemd-logind[818]: New session 15 of user zuul.
Oct  1 09:06:23 np0005464214 systemd[1]: Started Session 15 of User zuul.
Oct  1 09:06:24 np0005464214 python3.9[68127]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Oct  1 09:06:25 np0005464214 python3.9[68279]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  1 09:06:26 np0005464214 python3.9[68433]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  1 09:06:27 np0005464214 python3.9[68585]: ansible-ansible.builtin.blockinfile Invoked with block=compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDQuc3bhfyzL595OFOLV247IpwwrNv1jbuEyuIMlhGVL9o/JSyWTFuOVfeOlp2bgaV1HmT029a0g6F2wKmJyCLyTmUlSHjvFu+5OYahUrcWRA5wdTNonHdPtV7OxmGUyid1pIpbNVNRW3jpvnxoiRnI9We0KEWETWj0KsbyuQEnHthqnNEbvu9ZDWHKO3WwnNiEt4TvlIrnPpVac+Q9mG4Iqcsl1qDYx9ZKPuVLtYXvEtxENwTCfYUN7Nt9v/5SUlGTGxFlLR/tBKFw98HNvii7zAkpst6QHrOpcFmWYO6LMkxVjz0aIZvNUsbfKtfnSgjUBuC6Oy/QuzhKisWbFqPENpGofP9VCenS2zfCHewrnjhYCM6/NX7PzTVH0vkxCO2C5+xXm6HIvDZPnYfSL50+z5xfZXpuB7I8mKze82lkWdpFMkvmglXmjoEQgmrbl5kPRhq0yteRkbyyR6B/0X02dml1bPXU3azBrbTQNImgJeKRX8yZGL3Bbsfl5VMT+r8=#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGgRSLYQNGHBrZk4XBkcn+kfWXhVXnPjRWsejgHIwyOG#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBMQp4ff+5X+OCwYApPStN8XgACWS/2O/jZ6Xj4flPyrz/owAZoGD9kAYm/48KAYQYbXLvyoq8TZyZOgBYKe6Lcs=#012 create=True mode=0644 path=/tmp/ansible.a2z4nrel state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 09:06:27 np0005464214 systemd[1]: systemd-timedated.service: Deactivated successfully.
Oct  1 09:06:28 np0005464214 python3.9[68740]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.a2z4nrel' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  1 09:06:29 np0005464214 python3.9[68895]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.a2z4nrel state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
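The sequence above (tempfile, `blockinfile`, `cat` into `/etc/ssh/ssh_known_hosts`, delete tempfile) maintains a marker-delimited block of host keys. The core insert-or-replace behavior of `blockinfile` can be sketched as follows; the host-key value is a truncated illustration, not the full key from the log:

```python
BEGIN = "# BEGIN ANSIBLE MANAGED BLOCK"
END = "# END ANSIBLE MANAGED BLOCK"

def set_managed_block(text: str, block: str) -> str:
    """Insert the marker-delimited block, or replace it if markers exist."""
    lines = text.splitlines()
    new = [BEGIN, *block.splitlines(), END]
    if BEGIN in lines and END in lines:
        i, j = lines.index(BEGIN), lines.index(END)
        lines[i:j + 1] = new       # replace the existing managed block
    else:
        lines += new               # first run: append block with markers
    return "\n".join(lines) + "\n"

hostkey = "compute-0.ctlplane.example.com,192.168.122.100 ssh-ed25519 AAAA..."
print(set_managed_block("", hostkey))
```

The markers are what make the task idempotent: re-running replaces the block in place instead of appending duplicate host keys.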
Oct  1 09:06:29 np0005464214 systemd[1]: session-15.scope: Deactivated successfully.
Oct  1 09:06:29 np0005464214 systemd[1]: session-15.scope: Consumed 3.617s CPU time.
Oct  1 09:06:29 np0005464214 systemd-logind[818]: Session 15 logged out. Waiting for processes to exit.
Oct  1 09:06:29 np0005464214 systemd-logind[818]: Removed session 15.
Oct  1 09:06:34 np0005464214 systemd-logind[818]: New session 16 of user zuul.
Oct  1 09:06:34 np0005464214 systemd[1]: Started Session 16 of User zuul.
Oct  1 09:06:35 np0005464214 python3.9[69073]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  1 09:06:37 np0005464214 python3.9[69229]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Oct  1 09:06:37 np0005464214 python3.9[69383]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct  1 09:06:38 np0005464214 python3.9[69536]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  1 09:06:39 np0005464214 python3.9[69689]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  1 09:06:40 np0005464214 python3.9[69843]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  1 09:06:41 np0005464214 python3.9[69998]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 09:06:41 np0005464214 systemd-logind[818]: Session 16 logged out. Waiting for processes to exit.
Oct  1 09:06:41 np0005464214 systemd[1]: session-16.scope: Deactivated successfully.
Oct  1 09:06:41 np0005464214 systemd[1]: session-16.scope: Consumed 4.620s CPU time.
Oct  1 09:06:41 np0005464214 systemd-logind[818]: Removed session 16.
Oct  1 09:06:46 np0005464214 systemd-logind[818]: New session 17 of user zuul.
Oct  1 09:06:46 np0005464214 systemd[1]: Started Session 17 of User zuul.
Oct  1 09:06:48 np0005464214 python3.9[70178]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  1 09:06:49 np0005464214 python3.9[70334]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct  1 09:06:50 np0005464214 python3.9[70418]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Oct  1 09:06:52 np0005464214 python3.9[70569]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  1 09:06:53 np0005464214 python3.9[70720]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Oct  1 09:06:54 np0005464214 python3.9[70870]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  1 09:06:54 np0005464214 rsyslogd[1009]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct  1 09:06:55 np0005464214 python3.9[71021]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/config follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  1 09:06:55 np0005464214 systemd[1]: session-17.scope: Deactivated successfully.
Oct  1 09:06:55 np0005464214 systemd[1]: session-17.scope: Consumed 6.135s CPU time.
Oct  1 09:06:55 np0005464214 systemd-logind[818]: Session 17 logged out. Waiting for processes to exit.
Oct  1 09:06:55 np0005464214 systemd-logind[818]: Removed session 17.
Oct  1 09:07:03 np0005464214 systemd-logind[818]: New session 18 of user zuul.
Oct  1 09:07:03 np0005464214 systemd[1]: Started Session 18 of User zuul.
Oct  1 09:07:10 np0005464214 python3[71789]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  1 09:07:12 np0005464214 python3[71884]: ansible-ansible.legacy.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Oct  1 09:07:13 np0005464214 python3[71911]: ansible-ansible.builtin.stat Invoked with path=/dev/loop3 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  1 09:07:13 np0005464214 python3[71937]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-0.img bs=1 count=0 seek=20G#012losetup /dev/loop3 /var/lib/ceph-osd-0.img#012lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  1 09:07:13 np0005464214 kernel: loop: module loaded
Oct  1 09:07:13 np0005464214 kernel: loop3: detected capacity change from 0 to 41943040
Oct  1 09:07:14 np0005464214 python3[71972]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop3#012vgcreate ceph_vg0 /dev/loop3#012lvcreate -n ceph_lv0 -l +100%FREE ceph_vg0#012lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  1 09:07:14 np0005464214 lvm[71975]: PV /dev/loop3 not used.
Oct  1 09:07:14 np0005464214 lvm[71977]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct  1 09:07:14 np0005464214 systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg0.
Oct  1 09:07:14 np0005464214 lvm[71983]:  1 logical volume(s) in volume group "ceph_vg0" now active
Oct  1 09:07:14 np0005464214 lvm[71987]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct  1 09:07:14 np0005464214 lvm[71987]: VG ceph_vg0 finished
Oct  1 09:07:14 np0005464214 systemd[1]: lvm-activate-ceph_vg0.service: Deactivated successfully.
Oct  1 09:07:15 np0005464214 python3[72065]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-0.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct  1 09:07:15 np0005464214 python3[72138]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759324034.7656996-33487-142021680194507/source dest=/etc/systemd/system/ceph-osd-losetup-0.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=427b1db064a970126b729b07acf99fa7d0eecb9c backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 09:07:16 np0005464214 python3[72188]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-0.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  1 09:07:17 np0005464214 systemd[1]: Reloading.
Oct  1 09:07:17 np0005464214 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  1 09:07:17 np0005464214 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  1 09:07:17 np0005464214 systemd[1]: Starting Ceph OSD losetup...
Oct  1 09:07:17 np0005464214 bash[72229]: /dev/loop3: [64513]:4328141 (/var/lib/ceph-osd-0.img)
Oct  1 09:07:17 np0005464214 systemd[1]: Finished Ceph OSD losetup.
Oct  1 09:07:17 np0005464214 lvm[72231]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct  1 09:07:17 np0005464214 lvm[72231]: VG ceph_vg0 finished
Oct  1 09:07:18 np0005464214 python3[72259]: ansible-ansible.legacy.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Oct  1 09:07:19 np0005464214 python3[72286]: ansible-ansible.builtin.stat Invoked with path=/dev/loop4 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  1 09:07:20 np0005464214 python3[72312]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-1.img bs=1 count=0 seek=20G#012losetup /dev/loop4 /var/lib/ceph-osd-1.img#012lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  1 09:07:20 np0005464214 kernel: loop4: detected capacity change from 0 to 41943040
Oct  1 09:07:20 np0005464214 python3[72343]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop4#012vgcreate ceph_vg1 /dev/loop4#012lvcreate -n ceph_lv1 -l +100%FREE ceph_vg1#012lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  1 09:07:20 np0005464214 lvm[72346]: PV /dev/loop4 not used.
Oct  1 09:07:20 np0005464214 lvm[72355]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Oct  1 09:07:20 np0005464214 systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg1.
Oct  1 09:07:20 np0005464214 lvm[72357]:  1 logical volume(s) in volume group "ceph_vg1" now active
Oct  1 09:07:20 np0005464214 systemd[1]: lvm-activate-ceph_vg1.service: Deactivated successfully.
Oct  1 09:07:21 np0005464214 chronyd[54836]: Selected source 138.197.135.239 (pool.ntp.org)
Oct  1 09:07:21 np0005464214 python3[72435]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-1.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct  1 09:07:21 np0005464214 python3[72508]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759324040.9066064-33514-2733799240575/source dest=/etc/systemd/system/ceph-osd-losetup-1.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=19612168ea279db4171b94ee1f8625de1ec44b58 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 09:07:22 np0005464214 python3[72558]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-1.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  1 09:07:22 np0005464214 systemd[1]: Reloading.
Oct  1 09:07:22 np0005464214 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  1 09:07:22 np0005464214 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  1 09:07:22 np0005464214 systemd[1]: Starting Ceph OSD losetup...
Oct  1 09:07:22 np0005464214 bash[72598]: /dev/loop4: [64513]:4328191 (/var/lib/ceph-osd-1.img)
Oct  1 09:07:22 np0005464214 systemd[1]: Finished Ceph OSD losetup.
Oct  1 09:07:22 np0005464214 lvm[72600]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Oct  1 09:07:22 np0005464214 lvm[72600]: VG ceph_vg1 finished
Oct  1 09:07:22 np0005464214 python3[72626]: ansible-ansible.legacy.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Oct  1 09:07:24 np0005464214 python3[72653]: ansible-ansible.builtin.stat Invoked with path=/dev/loop5 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  1 09:07:24 np0005464214 python3[72681]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-2.img bs=1 count=0 seek=20G#012losetup /dev/loop5 /var/lib/ceph-osd-2.img#012lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  1 09:07:24 np0005464214 kernel: loop5: detected capacity change from 0 to 41943040
Oct  1 09:07:25 np0005464214 python3[72712]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop5#012vgcreate ceph_vg2 /dev/loop5#012lvcreate -n ceph_lv2 -l +100%FREE ceph_vg2#012lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  1 09:07:25 np0005464214 lvm[72715]: PV /dev/loop5 not used.
Oct  1 09:07:25 np0005464214 lvm[72717]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Oct  1 09:07:25 np0005464214 systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg2.
Oct  1 09:07:25 np0005464214 lvm[72719]:  1 logical volume(s) in volume group "ceph_vg2" now active
Oct  1 09:07:25 np0005464214 lvm[72727]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Oct  1 09:07:25 np0005464214 lvm[72727]: VG ceph_vg2 finished
Oct  1 09:07:25 np0005464214 systemd[1]: lvm-activate-ceph_vg2.service: Deactivated successfully.
Oct  1 09:07:25 np0005464214 python3[72805]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-2.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct  1 09:07:26 np0005464214 python3[72878]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759324045.692942-33541-258765949831840/source dest=/etc/systemd/system/ceph-osd-losetup-2.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=4c5b1bc5693c499ffe2edaa97d63f5df7075d845 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 09:07:26 np0005464214 python3[72928]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-2.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  1 09:07:26 np0005464214 systemd[1]: Reloading.
Oct  1 09:07:27 np0005464214 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  1 09:07:27 np0005464214 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  1 09:07:27 np0005464214 systemd[1]: Starting Ceph OSD losetup...
Oct  1 09:07:27 np0005464214 bash[72969]: /dev/loop5: [64513]:4328604 (/var/lib/ceph-osd-2.img)
Oct  1 09:07:27 np0005464214 systemd[1]: Finished Ceph OSD losetup.
Oct  1 09:07:27 np0005464214 lvm[72971]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Oct  1 09:07:27 np0005464214 lvm[72971]: VG ceph_vg2 finished
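Each OSD backing file above is created with `dd if=/dev/zero of=... bs=1 count=0 seek=20G` (a 20 GiB sparse file, no data written), attached with `losetup`, then turned into a PV/VG/LV. The kernel's "detected capacity change from 0 to 41943040" lines report the size in 512-byte sectors, which checks out against the 20 GiB request:

```python
SECTOR_SIZE = 512                  # kernel block-layer sector size in bytes

size_bytes = 20 * 1024 ** 3        # dd ... seek=20G -> 20 GiB sparse file
sectors = size_bytes // SECTOR_SIZE

# Matches the logged "detected capacity change from 0 to 41943040"
# for each of /dev/loop3, /dev/loop4, /dev/loop5.
print(sectors)
```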
Oct  1 09:07:29 np0005464214 python3[72995]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  1 09:07:31 np0005464214 python3[73088]: ansible-ansible.legacy.dnf Invoked with name=['cephadm'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Oct  1 09:07:32 np0005464214 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct  1 09:07:32 np0005464214 systemd[1]: Starting man-db-cache-update.service...
Oct  1 09:07:33 np0005464214 python3[73202]: ansible-ansible.builtin.stat Invoked with path=/usr/sbin/cephadm follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  1 09:07:33 np0005464214 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Oct  1 09:07:33 np0005464214 systemd[1]: Finished man-db-cache-update.service.
Oct  1 09:07:33 np0005464214 systemd[1]: run-rbc45b61195084f5fae5d5e7be7c8a17a.service: Deactivated successfully.
Oct  1 09:07:34 np0005464214 python3[73233]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/cephadm ls --no-detail _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  1 09:07:34 np0005464214 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct  1 09:07:34 np0005464214 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct  1 09:07:35 np0005464214 python3[73296]: ansible-ansible.builtin.file Invoked with path=/etc/ceph state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 09:07:35 np0005464214 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct  1 09:07:35 np0005464214 python3[73322]: ansible-ansible.builtin.file Invoked with path=/home/ceph-admin/specs owner=ceph-admin group=ceph-admin mode=0755 state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 09:07:36 np0005464214 python3[73400]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/specs/ceph_spec.yaml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct  1 09:07:36 np0005464214 python3[73473]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759324055.8135173-33688-268762967771436/source dest=/home/ceph-admin/specs/ceph_spec.yaml owner=ceph-admin group=ceph-admin mode=0644 _original_basename=ceph_spec.yml follow=False checksum=bb83c53af4ffd926a3f1eafe26a8be437df6401f backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 09:07:37 np0005464214 python3[73575]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct  1 09:07:37 np0005464214 python3[73648]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759324056.9729111-33706-15176687737777/source dest=/home/ceph-admin/assimilate_ceph.conf owner=ceph-admin group=ceph-admin mode=0644 _original_basename=initial_ceph.conf follow=False checksum=41828f7c2442fdf376911255e33c12863fc3b1b3 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 09:07:38 np0005464214 python3[73698]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/.ssh/id_rsa follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  1 09:07:38 np0005464214 python3[73726]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/.ssh/id_rsa.pub follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  1 09:07:38 np0005464214 python3[73754]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  1 09:07:39 np0005464214 python3[73782]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/cephadm bootstrap --skip-firewalld --skip-prepare-host --ssh-private-key /home/ceph-admin/.ssh/id_rsa --ssh-public-key /home/ceph-admin/.ssh/id_rsa.pub --ssh-user ceph-admin --allow-fqdn-hostname --output-keyring /etc/ceph/ceph.client.admin.keyring --output-config /etc/ceph/ceph.conf --fsid eb4b6ead-01d1-53b3-a52a-47dcc600555f --config /home/ceph-admin/assimilate_ceph.conf \--single-host-defaults \--skip-monitoring-stack --skip-dashboard --mon-ip 192.168.122.100#012 _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  1 09:07:39 np0005464214 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct  1 09:07:39 np0005464214 systemd-logind[818]: New session 19 of user ceph-admin.
Oct  1 09:07:39 np0005464214 systemd[1]: Created slice User Slice of UID 42477.
Oct  1 09:07:39 np0005464214 systemd[1]: Starting User Runtime Directory /run/user/42477...
Oct  1 09:07:39 np0005464214 systemd[1]: Finished User Runtime Directory /run/user/42477.
Oct  1 09:07:39 np0005464214 systemd[1]: Starting User Manager for UID 42477...
Oct  1 09:07:39 np0005464214 systemd[73802]: Queued start job for default target Main User Target.
Oct  1 09:07:39 np0005464214 systemd[73802]: Created slice User Application Slice.
Oct  1 09:07:39 np0005464214 systemd[73802]: Started Mark boot as successful after the user session has run 2 minutes.
Oct  1 09:07:39 np0005464214 systemd[73802]: Started Daily Cleanup of User's Temporary Directories.
Oct  1 09:07:39 np0005464214 systemd[73802]: Reached target Paths.
Oct  1 09:07:39 np0005464214 systemd[73802]: Reached target Timers.
Oct  1 09:07:39 np0005464214 systemd[73802]: Starting D-Bus User Message Bus Socket...
Oct  1 09:07:39 np0005464214 systemd[73802]: Starting Create User's Volatile Files and Directories...
Oct  1 09:07:39 np0005464214 systemd[73802]: Finished Create User's Volatile Files and Directories.
Oct  1 09:07:39 np0005464214 systemd[73802]: Listening on D-Bus User Message Bus Socket.
Oct  1 09:07:39 np0005464214 systemd[73802]: Reached target Sockets.
Oct  1 09:07:39 np0005464214 systemd[73802]: Reached target Basic System.
Oct  1 09:07:39 np0005464214 systemd[73802]: Reached target Main User Target.
Oct  1 09:07:39 np0005464214 systemd[73802]: Startup finished in 170ms.
Oct  1 09:07:39 np0005464214 systemd[1]: Started User Manager for UID 42477.
Oct  1 09:07:39 np0005464214 systemd[1]: Started Session 19 of User ceph-admin.
Oct  1 09:07:39 np0005464214 systemd[1]: session-19.scope: Deactivated successfully.
Oct  1 09:07:39 np0005464214 systemd-logind[818]: Session 19 logged out. Waiting for processes to exit.
Oct  1 09:07:39 np0005464214 systemd-logind[818]: Removed session 19.
Oct  1 09:07:42 np0005464214 systemd[1]: var-lib-containers-storage-overlay-compat1464513559-lower\x2dmapped.mount: Deactivated successfully.
Oct  1 09:07:50 np0005464214 systemd[1]: Stopping User Manager for UID 42477...
Oct  1 09:07:50 np0005464214 systemd[73802]: Activating special unit Exit the Session...
Oct  1 09:07:50 np0005464214 systemd[73802]: Stopped target Main User Target.
Oct  1 09:07:50 np0005464214 systemd[73802]: Stopped target Basic System.
Oct  1 09:07:50 np0005464214 systemd[73802]: Stopped target Paths.
Oct  1 09:07:50 np0005464214 systemd[73802]: Stopped target Sockets.
Oct  1 09:07:50 np0005464214 systemd[73802]: Stopped target Timers.
Oct  1 09:07:50 np0005464214 systemd[73802]: Stopped Mark boot as successful after the user session has run 2 minutes.
Oct  1 09:07:50 np0005464214 systemd[73802]: Stopped Daily Cleanup of User's Temporary Directories.
Oct  1 09:07:50 np0005464214 systemd[73802]: Closed D-Bus User Message Bus Socket.
Oct  1 09:07:50 np0005464214 systemd[73802]: Stopped Create User's Volatile Files and Directories.
Oct  1 09:07:50 np0005464214 systemd[73802]: Removed slice User Application Slice.
Oct  1 09:07:50 np0005464214 systemd[73802]: Reached target Shutdown.
Oct  1 09:07:50 np0005464214 systemd[73802]: Finished Exit the Session.
Oct  1 09:07:50 np0005464214 systemd[73802]: Reached target Exit the Session.
Oct  1 09:07:50 np0005464214 systemd[1]: user@42477.service: Deactivated successfully.
Oct  1 09:07:50 np0005464214 systemd[1]: Stopped User Manager for UID 42477.
Oct  1 09:07:50 np0005464214 systemd[1]: Stopping User Runtime Directory /run/user/42477...
Oct  1 09:07:50 np0005464214 systemd[1]: run-user-42477.mount: Deactivated successfully.
Oct  1 09:07:50 np0005464214 systemd[1]: user-runtime-dir@42477.service: Deactivated successfully.
Oct  1 09:07:50 np0005464214 systemd[1]: Stopped User Runtime Directory /run/user/42477.
Oct  1 09:07:50 np0005464214 systemd[1]: Removed slice User Slice of UID 42477.
Oct  1 09:07:53 np0005464214 podman[73855]: 2025-10-01 13:07:53.43011535 +0000 UTC m=+13.380545553 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  1 09:07:53 np0005464214 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct  1 09:07:53 np0005464214 podman[73922]: 2025-10-01 13:07:53.531841557 +0000 UTC m=+0.060812520 container create bcf60002975c7ce04eaa785e564c561b133abc3dff79b5faa2c63c91b0b9b06a (image=quay.io/ceph/ceph:v18, name=elegant_margulis, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 09:07:53 np0005464214 systemd[1]: Created slice Virtual Machine and Container Slice.
Oct  1 09:07:53 np0005464214 systemd[1]: Started libpod-conmon-bcf60002975c7ce04eaa785e564c561b133abc3dff79b5faa2c63c91b0b9b06a.scope.
Oct  1 09:07:53 np0005464214 podman[73922]: 2025-10-01 13:07:53.507545807 +0000 UTC m=+0.036516750 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  1 09:07:53 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:07:53 np0005464214 podman[73922]: 2025-10-01 13:07:53.643297214 +0000 UTC m=+0.172268227 container init bcf60002975c7ce04eaa785e564c561b133abc3dff79b5faa2c63c91b0b9b06a (image=quay.io/ceph/ceph:v18, name=elegant_margulis, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef)
Oct  1 09:07:53 np0005464214 podman[73922]: 2025-10-01 13:07:53.650487162 +0000 UTC m=+0.179458095 container start bcf60002975c7ce04eaa785e564c561b133abc3dff79b5faa2c63c91b0b9b06a (image=quay.io/ceph/ceph:v18, name=elegant_margulis, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct  1 09:07:53 np0005464214 podman[73922]: 2025-10-01 13:07:53.653997884 +0000 UTC m=+0.182968897 container attach bcf60002975c7ce04eaa785e564c561b133abc3dff79b5faa2c63c91b0b9b06a (image=quay.io/ceph/ceph:v18, name=elegant_margulis, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct  1 09:07:53 np0005464214 elegant_margulis[73938]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)
Oct  1 09:07:53 np0005464214 systemd[1]: libpod-bcf60002975c7ce04eaa785e564c561b133abc3dff79b5faa2c63c91b0b9b06a.scope: Deactivated successfully.
Oct  1 09:07:53 np0005464214 podman[73922]: 2025-10-01 13:07:53.963587576 +0000 UTC m=+0.492558499 container died bcf60002975c7ce04eaa785e564c561b133abc3dff79b5faa2c63c91b0b9b06a (image=quay.io/ceph/ceph:v18, name=elegant_margulis, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 09:07:53 np0005464214 systemd[1]: var-lib-containers-storage-overlay-70c0be3d06252a15283f3636d0739b7f73d3cf0a8b7aec486ecb25e9fb55c09d-merged.mount: Deactivated successfully.
Oct  1 09:07:54 np0005464214 podman[73922]: 2025-10-01 13:07:54.022376331 +0000 UTC m=+0.551347244 container remove bcf60002975c7ce04eaa785e564c561b133abc3dff79b5faa2c63c91b0b9b06a (image=quay.io/ceph/ceph:v18, name=elegant_margulis, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct  1 09:07:54 np0005464214 systemd[1]: libpod-conmon-bcf60002975c7ce04eaa785e564c561b133abc3dff79b5faa2c63c91b0b9b06a.scope: Deactivated successfully.
Oct  1 09:07:54 np0005464214 podman[73957]: 2025-10-01 13:07:54.091521065 +0000 UTC m=+0.043566773 container create bc7c15dbf21f5b991399e0f422a4823978b508ef647ce17a4f1595d756e224c6 (image=quay.io/ceph/ceph:v18, name=vigilant_jemison, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Oct  1 09:07:54 np0005464214 systemd[1]: Started libpod-conmon-bc7c15dbf21f5b991399e0f422a4823978b508ef647ce17a4f1595d756e224c6.scope.
Oct  1 09:07:54 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:07:54 np0005464214 podman[73957]: 2025-10-01 13:07:54.158545691 +0000 UTC m=+0.110591419 container init bc7c15dbf21f5b991399e0f422a4823978b508ef647ce17a4f1595d756e224c6 (image=quay.io/ceph/ceph:v18, name=vigilant_jemison, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct  1 09:07:54 np0005464214 podman[73957]: 2025-10-01 13:07:54.167804575 +0000 UTC m=+0.119850313 container start bc7c15dbf21f5b991399e0f422a4823978b508ef647ce17a4f1595d756e224c6 (image=quay.io/ceph/ceph:v18, name=vigilant_jemison, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef)
Oct  1 09:07:54 np0005464214 podman[73957]: 2025-10-01 13:07:54.071268493 +0000 UTC m=+0.023314211 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  1 09:07:54 np0005464214 vigilant_jemison[73972]: 167 167
Oct  1 09:07:54 np0005464214 systemd[1]: libpod-bc7c15dbf21f5b991399e0f422a4823978b508ef647ce17a4f1595d756e224c6.scope: Deactivated successfully.
Oct  1 09:07:54 np0005464214 podman[73957]: 2025-10-01 13:07:54.1720199 +0000 UTC m=+0.124065618 container attach bc7c15dbf21f5b991399e0f422a4823978b508ef647ce17a4f1595d756e224c6 (image=quay.io/ceph/ceph:v18, name=vigilant_jemison, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 09:07:54 np0005464214 podman[73957]: 2025-10-01 13:07:54.172410102 +0000 UTC m=+0.124455800 container died bc7c15dbf21f5b991399e0f422a4823978b508ef647ce17a4f1595d756e224c6 (image=quay.io/ceph/ceph:v18, name=vigilant_jemison, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 09:07:54 np0005464214 podman[73957]: 2025-10-01 13:07:54.211721258 +0000 UTC m=+0.163766986 container remove bc7c15dbf21f5b991399e0f422a4823978b508ef647ce17a4f1595d756e224c6 (image=quay.io/ceph/ceph:v18, name=vigilant_jemison, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 09:07:54 np0005464214 systemd[1]: libpod-conmon-bc7c15dbf21f5b991399e0f422a4823978b508ef647ce17a4f1595d756e224c6.scope: Deactivated successfully.
Oct  1 09:07:54 np0005464214 podman[73989]: 2025-10-01 13:07:54.295174777 +0000 UTC m=+0.052486787 container create 50ed018867cb4d0d16dfb40cb611a31d7533932e2b4d8d443b8370c5fb0f5a64 (image=quay.io/ceph/ceph:v18, name=vigilant_maxwell, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct  1 09:07:54 np0005464214 systemd[1]: Started libpod-conmon-50ed018867cb4d0d16dfb40cb611a31d7533932e2b4d8d443b8370c5fb0f5a64.scope.
Oct  1 09:07:54 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:07:54 np0005464214 podman[73989]: 2025-10-01 13:07:54.351273607 +0000 UTC m=+0.108585627 container init 50ed018867cb4d0d16dfb40cb611a31d7533932e2b4d8d443b8370c5fb0f5a64 (image=quay.io/ceph/ceph:v18, name=vigilant_maxwell, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct  1 09:07:54 np0005464214 podman[73989]: 2025-10-01 13:07:54.359787307 +0000 UTC m=+0.117099337 container start 50ed018867cb4d0d16dfb40cb611a31d7533932e2b4d8d443b8370c5fb0f5a64 (image=quay.io/ceph/ceph:v18, name=vigilant_maxwell, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 09:07:54 np0005464214 podman[73989]: 2025-10-01 13:07:54.363269157 +0000 UTC m=+0.120581157 container attach 50ed018867cb4d0d16dfb40cb611a31d7533932e2b4d8d443b8370c5fb0f5a64 (image=quay.io/ceph/ceph:v18, name=vigilant_maxwell, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct  1 09:07:54 np0005464214 podman[73989]: 2025-10-01 13:07:54.276631098 +0000 UTC m=+0.033943148 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  1 09:07:54 np0005464214 vigilant_maxwell[74006]: AQCqJ91oUnhwFhAABaeVGSJyDEVZ7+ahmpC9kw==
Oct  1 09:07:54 np0005464214 systemd[1]: libpod-50ed018867cb4d0d16dfb40cb611a31d7533932e2b4d8d443b8370c5fb0f5a64.scope: Deactivated successfully.
Oct  1 09:07:54 np0005464214 podman[73989]: 2025-10-01 13:07:54.379386099 +0000 UTC m=+0.136698099 container died 50ed018867cb4d0d16dfb40cb611a31d7533932e2b4d8d443b8370c5fb0f5a64 (image=quay.io/ceph/ceph:v18, name=vigilant_maxwell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 09:07:54 np0005464214 podman[73989]: 2025-10-01 13:07:54.413530482 +0000 UTC m=+0.170842482 container remove 50ed018867cb4d0d16dfb40cb611a31d7533932e2b4d8d443b8370c5fb0f5a64 (image=quay.io/ceph/ceph:v18, name=vigilant_maxwell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 09:07:54 np0005464214 systemd[1]: libpod-conmon-50ed018867cb4d0d16dfb40cb611a31d7533932e2b4d8d443b8370c5fb0f5a64.scope: Deactivated successfully.
Oct  1 09:07:54 np0005464214 podman[74025]: 2025-10-01 13:07:54.49321595 +0000 UTC m=+0.048520711 container create 42bfaab9659ced7d2e3ee73c26024568f90bec6fdf0a7b2930c1ba93ba7a8d8a (image=quay.io/ceph/ceph:v18, name=friendly_benz, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 09:07:54 np0005464214 systemd[1]: Started libpod-conmon-42bfaab9659ced7d2e3ee73c26024568f90bec6fdf0a7b2930c1ba93ba7a8d8a.scope.
Oct  1 09:07:54 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:07:54 np0005464214 podman[74025]: 2025-10-01 13:07:54.551765368 +0000 UTC m=+0.107070149 container init 42bfaab9659ced7d2e3ee73c26024568f90bec6fdf0a7b2930c1ba93ba7a8d8a (image=quay.io/ceph/ceph:v18, name=friendly_benz, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 09:07:54 np0005464214 podman[74025]: 2025-10-01 13:07:54.556163157 +0000 UTC m=+0.111467918 container start 42bfaab9659ced7d2e3ee73c26024568f90bec6fdf0a7b2930c1ba93ba7a8d8a (image=quay.io/ceph/ceph:v18, name=friendly_benz, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 09:07:54 np0005464214 podman[74025]: 2025-10-01 13:07:54.559539354 +0000 UTC m=+0.114844135 container attach 42bfaab9659ced7d2e3ee73c26024568f90bec6fdf0a7b2930c1ba93ba7a8d8a (image=quay.io/ceph/ceph:v18, name=friendly_benz, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Oct  1 09:07:54 np0005464214 podman[74025]: 2025-10-01 13:07:54.478119501 +0000 UTC m=+0.033424282 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  1 09:07:54 np0005464214 friendly_benz[74041]: AQCqJ91omZAjIxAAAr0DNz1fyp3+kL33rG2Ijg==
Oct  1 09:07:54 np0005464214 systemd[1]: libpod-42bfaab9659ced7d2e3ee73c26024568f90bec6fdf0a7b2930c1ba93ba7a8d8a.scope: Deactivated successfully.
Oct  1 09:07:54 np0005464214 podman[74025]: 2025-10-01 13:07:54.594649999 +0000 UTC m=+0.149954760 container died 42bfaab9659ced7d2e3ee73c26024568f90bec6fdf0a7b2930c1ba93ba7a8d8a (image=quay.io/ceph/ceph:v18, name=friendly_benz, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 09:07:54 np0005464214 systemd[1]: var-lib-containers-storage-overlay-454afb3821cbde2544c55048ff76497c0d7c6b96b761424f6b5415f99d25ce35-merged.mount: Deactivated successfully.
Oct  1 09:07:54 np0005464214 podman[74025]: 2025-10-01 13:07:54.624359341 +0000 UTC m=+0.179664102 container remove 42bfaab9659ced7d2e3ee73c26024568f90bec6fdf0a7b2930c1ba93ba7a8d8a (image=quay.io/ceph/ceph:v18, name=friendly_benz, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 09:07:54 np0005464214 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct  1 09:07:54 np0005464214 systemd[1]: libpod-conmon-42bfaab9659ced7d2e3ee73c26024568f90bec6fdf0a7b2930c1ba93ba7a8d8a.scope: Deactivated successfully.
Oct  1 09:07:54 np0005464214 podman[74058]: 2025-10-01 13:07:54.689897681 +0000 UTC m=+0.044218945 container create aedba631649b8969c1b6b4d0dc3bb91f0b6288376eaa2823e0540da4d7117bf2 (image=quay.io/ceph/ceph:v18, name=affectionate_cray, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct  1 09:07:54 np0005464214 systemd[1]: Started libpod-conmon-aedba631649b8969c1b6b4d0dc3bb91f0b6288376eaa2823e0540da4d7117bf2.scope.
Oct  1 09:07:54 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:07:54 np0005464214 podman[74058]: 2025-10-01 13:07:54.671035762 +0000 UTC m=+0.025357056 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  1 09:07:55 np0005464214 podman[74058]: 2025-10-01 13:07:55.097947098 +0000 UTC m=+0.452268392 container init aedba631649b8969c1b6b4d0dc3bb91f0b6288376eaa2823e0540da4d7117bf2 (image=quay.io/ceph/ceph:v18, name=affectionate_cray, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 09:07:55 np0005464214 podman[74058]: 2025-10-01 13:07:55.105497647 +0000 UTC m=+0.459818921 container start aedba631649b8969c1b6b4d0dc3bb91f0b6288376eaa2823e0540da4d7117bf2 (image=quay.io/ceph/ceph:v18, name=affectionate_cray, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 09:07:55 np0005464214 podman[74058]: 2025-10-01 13:07:55.110892028 +0000 UTC m=+0.465213302 container attach aedba631649b8969c1b6b4d0dc3bb91f0b6288376eaa2823e0540da4d7117bf2 (image=quay.io/ceph/ceph:v18, name=affectionate_cray, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct  1 09:07:55 np0005464214 affectionate_cray[74077]: AQCrJ91o0O20BxAAbuNQlAgvDf2C/Y5Su5seBA==
Oct  1 09:07:55 np0005464214 systemd[1]: libpod-aedba631649b8969c1b6b4d0dc3bb91f0b6288376eaa2823e0540da4d7117bf2.scope: Deactivated successfully.
Oct  1 09:07:55 np0005464214 podman[74058]: 2025-10-01 13:07:55.133188555 +0000 UTC m=+0.487509869 container died aedba631649b8969c1b6b4d0dc3bb91f0b6288376eaa2823e0540da4d7117bf2 (image=quay.io/ceph/ceph:v18, name=affectionate_cray, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Oct  1 09:07:55 np0005464214 podman[74058]: 2025-10-01 13:07:55.180962911 +0000 UTC m=+0.535284205 container remove aedba631649b8969c1b6b4d0dc3bb91f0b6288376eaa2823e0540da4d7117bf2 (image=quay.io/ceph/ceph:v18, name=affectionate_cray, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct  1 09:07:55 np0005464214 systemd[1]: libpod-conmon-aedba631649b8969c1b6b4d0dc3bb91f0b6288376eaa2823e0540da4d7117bf2.scope: Deactivated successfully.
Oct  1 09:07:55 np0005464214 podman[74097]: 2025-10-01 13:07:55.257430107 +0000 UTC m=+0.050855464 container create ec44cd41c7c6f0581a22a03a11a4e0eaf53408cf2822f01891fd19a26e632aa0 (image=quay.io/ceph/ceph:v18, name=awesome_matsumoto, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct  1 09:07:55 np0005464214 systemd[1]: Started libpod-conmon-ec44cd41c7c6f0581a22a03a11a4e0eaf53408cf2822f01891fd19a26e632aa0.scope.
Oct  1 09:07:55 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:07:55 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9468441a90a191f97c60dd7dfc5dda7211a4b8916ed54f73ddf537b983844191/merged/tmp/monmap supports timestamps until 2038 (0x7fffffff)
Oct  1 09:07:55 np0005464214 podman[74097]: 2025-10-01 13:07:55.323229825 +0000 UTC m=+0.116655242 container init ec44cd41c7c6f0581a22a03a11a4e0eaf53408cf2822f01891fd19a26e632aa0 (image=quay.io/ceph/ceph:v18, name=awesome_matsumoto, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 09:07:55 np0005464214 podman[74097]: 2025-10-01 13:07:55.328473652 +0000 UTC m=+0.121899019 container start ec44cd41c7c6f0581a22a03a11a4e0eaf53408cf2822f01891fd19a26e632aa0 (image=quay.io/ceph/ceph:v18, name=awesome_matsumoto, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct  1 09:07:55 np0005464214 podman[74097]: 2025-10-01 13:07:55.234080396 +0000 UTC m=+0.027505763 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  1 09:07:55 np0005464214 podman[74097]: 2025-10-01 13:07:55.331954961 +0000 UTC m=+0.125380328 container attach ec44cd41c7c6f0581a22a03a11a4e0eaf53408cf2822f01891fd19a26e632aa0 (image=quay.io/ceph/ceph:v18, name=awesome_matsumoto, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Oct  1 09:07:55 np0005464214 awesome_matsumoto[74114]: /usr/bin/monmaptool: monmap file /tmp/monmap
Oct  1 09:07:55 np0005464214 awesome_matsumoto[74114]: setting min_mon_release = pacific
Oct  1 09:07:55 np0005464214 awesome_matsumoto[74114]: /usr/bin/monmaptool: set fsid to eb4b6ead-01d1-53b3-a52a-47dcc600555f
Oct  1 09:07:55 np0005464214 awesome_matsumoto[74114]: /usr/bin/monmaptool: writing epoch 0 to /tmp/monmap (1 monitors)
Oct  1 09:07:55 np0005464214 systemd[1]: libpod-ec44cd41c7c6f0581a22a03a11a4e0eaf53408cf2822f01891fd19a26e632aa0.scope: Deactivated successfully.
Oct  1 09:07:55 np0005464214 podman[74097]: 2025-10-01 13:07:55.36815526 +0000 UTC m=+0.161580607 container died ec44cd41c7c6f0581a22a03a11a4e0eaf53408cf2822f01891fd19a26e632aa0 (image=quay.io/ceph/ceph:v18, name=awesome_matsumoto, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 09:07:55 np0005464214 podman[74097]: 2025-10-01 13:07:55.399740622 +0000 UTC m=+0.193165969 container remove ec44cd41c7c6f0581a22a03a11a4e0eaf53408cf2822f01891fd19a26e632aa0 (image=quay.io/ceph/ceph:v18, name=awesome_matsumoto, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 09:07:55 np0005464214 systemd[1]: libpod-conmon-ec44cd41c7c6f0581a22a03a11a4e0eaf53408cf2822f01891fd19a26e632aa0.scope: Deactivated successfully.
Oct  1 09:07:55 np0005464214 podman[74134]: 2025-10-01 13:07:55.465628354 +0000 UTC m=+0.043727729 container create e0db59c4ff3d28fdfbc794254bfd8d57887c15304572590be2cfa6db77dfd8dc (image=quay.io/ceph/ceph:v18, name=gifted_kapitsa, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 09:07:55 np0005464214 systemd[1]: Started libpod-conmon-e0db59c4ff3d28fdfbc794254bfd8d57887c15304572590be2cfa6db77dfd8dc.scope.
Oct  1 09:07:55 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:07:55 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d7f580bdb2013f7e1af51d19b8c79cc289a2256727d2b2a7af6f024c75a43ad7/merged/tmp/keyring supports timestamps until 2038 (0x7fffffff)
Oct  1 09:07:55 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d7f580bdb2013f7e1af51d19b8c79cc289a2256727d2b2a7af6f024c75a43ad7/merged/tmp/monmap supports timestamps until 2038 (0x7fffffff)
Oct  1 09:07:55 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d7f580bdb2013f7e1af51d19b8c79cc289a2256727d2b2a7af6f024c75a43ad7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 09:07:55 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d7f580bdb2013f7e1af51d19b8c79cc289a2256727d2b2a7af6f024c75a43ad7/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Oct  1 09:07:55 np0005464214 podman[74134]: 2025-10-01 13:07:55.44597554 +0000 UTC m=+0.024074905 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  1 09:07:55 np0005464214 podman[74134]: 2025-10-01 13:07:55.556679092 +0000 UTC m=+0.134778477 container init e0db59c4ff3d28fdfbc794254bfd8d57887c15304572590be2cfa6db77dfd8dc (image=quay.io/ceph/ceph:v18, name=gifted_kapitsa, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 09:07:55 np0005464214 podman[74134]: 2025-10-01 13:07:55.562234958 +0000 UTC m=+0.140334323 container start e0db59c4ff3d28fdfbc794254bfd8d57887c15304572590be2cfa6db77dfd8dc (image=quay.io/ceph/ceph:v18, name=gifted_kapitsa, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct  1 09:07:55 np0005464214 podman[74134]: 2025-10-01 13:07:55.565219213 +0000 UTC m=+0.143318598 container attach e0db59c4ff3d28fdfbc794254bfd8d57887c15304572590be2cfa6db77dfd8dc (image=quay.io/ceph/ceph:v18, name=gifted_kapitsa, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct  1 09:07:55 np0005464214 systemd[1]: libpod-e0db59c4ff3d28fdfbc794254bfd8d57887c15304572590be2cfa6db77dfd8dc.scope: Deactivated successfully.
Oct  1 09:07:55 np0005464214 podman[74134]: 2025-10-01 13:07:55.656340424 +0000 UTC m=+0.234439769 container died e0db59c4ff3d28fdfbc794254bfd8d57887c15304572590be2cfa6db77dfd8dc (image=quay.io/ceph/ceph:v18, name=gifted_kapitsa, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 09:07:55 np0005464214 systemd[1]: var-lib-containers-storage-overlay-d7f580bdb2013f7e1af51d19b8c79cc289a2256727d2b2a7af6f024c75a43ad7-merged.mount: Deactivated successfully.
Oct  1 09:07:55 np0005464214 podman[74134]: 2025-10-01 13:07:55.689782876 +0000 UTC m=+0.267882231 container remove e0db59c4ff3d28fdfbc794254bfd8d57887c15304572590be2cfa6db77dfd8dc (image=quay.io/ceph/ceph:v18, name=gifted_kapitsa, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct  1 09:07:55 np0005464214 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct  1 09:07:55 np0005464214 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct  1 09:07:55 np0005464214 systemd[1]: libpod-conmon-e0db59c4ff3d28fdfbc794254bfd8d57887c15304572590be2cfa6db77dfd8dc.scope: Deactivated successfully.
Oct  1 09:07:55 np0005464214 systemd[1]: Reloading.
Oct  1 09:07:55 np0005464214 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  1 09:07:55 np0005464214 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  1 09:07:55 np0005464214 systemd[1]: Reloading.
Oct  1 09:07:56 np0005464214 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  1 09:07:56 np0005464214 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  1 09:07:56 np0005464214 systemd[1]: Reached target All Ceph clusters and services.
Oct  1 09:07:56 np0005464214 systemd[1]: Reloading.
Oct  1 09:07:56 np0005464214 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  1 09:07:56 np0005464214 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  1 09:07:56 np0005464214 systemd[1]: Reached target Ceph cluster eb4b6ead-01d1-53b3-a52a-47dcc600555f.
Oct  1 09:07:56 np0005464214 systemd[1]: Reloading.
Oct  1 09:07:56 np0005464214 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  1 09:07:56 np0005464214 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  1 09:07:56 np0005464214 systemd[1]: Reloading.
Oct  1 09:07:56 np0005464214 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  1 09:07:56 np0005464214 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  1 09:07:56 np0005464214 systemd[1]: Created slice Slice /system/ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f.
Oct  1 09:07:56 np0005464214 systemd[1]: Reached target System Time Set.
Oct  1 09:07:56 np0005464214 systemd[1]: Reached target System Time Synchronized.
Oct  1 09:07:56 np0005464214 systemd[1]: Starting Ceph mon.compute-0 for eb4b6ead-01d1-53b3-a52a-47dcc600555f...
Oct  1 09:07:56 np0005464214 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct  1 09:07:57 np0005464214 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct  1 09:07:57 np0005464214 podman[74425]: 2025-10-01 13:07:57.140163533 +0000 UTC m=+0.034974180 container create c0f6eefbaf6d3e14c44a3af400bdf6a821a561d9919fa9a360c56f9ade45b008 (image=quay.io/ceph/ceph:v18, name=ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-mon-compute-0, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 09:07:57 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c234bc0c11c5faf1ddbed49676c1825f724dde31e435508583b482730a5ba3d6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 09:07:57 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c234bc0c11c5faf1ddbed49676c1825f724dde31e435508583b482730a5ba3d6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 09:07:57 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c234bc0c11c5faf1ddbed49676c1825f724dde31e435508583b482730a5ba3d6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 09:07:57 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c234bc0c11c5faf1ddbed49676c1825f724dde31e435508583b482730a5ba3d6/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Oct  1 09:07:57 np0005464214 podman[74425]: 2025-10-01 13:07:57.201433258 +0000 UTC m=+0.096243985 container init c0f6eefbaf6d3e14c44a3af400bdf6a821a561d9919fa9a360c56f9ade45b008 (image=quay.io/ceph/ceph:v18, name=ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-mon-compute-0, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct  1 09:07:57 np0005464214 podman[74425]: 2025-10-01 13:07:57.207108918 +0000 UTC m=+0.101919605 container start c0f6eefbaf6d3e14c44a3af400bdf6a821a561d9919fa9a360c56f9ade45b008 (image=quay.io/ceph/ceph:v18, name=ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-mon-compute-0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct  1 09:07:57 np0005464214 bash[74425]: c0f6eefbaf6d3e14c44a3af400bdf6a821a561d9919fa9a360c56f9ade45b008
Oct  1 09:07:57 np0005464214 podman[74425]: 2025-10-01 13:07:57.124361502 +0000 UTC m=+0.019172169 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  1 09:07:57 np0005464214 systemd[1]: Started Ceph mon.compute-0 for eb4b6ead-01d1-53b3-a52a-47dcc600555f.
Oct  1 09:07:57 np0005464214 ceph-mon[74447]: set uid:gid to 167:167 (ceph:ceph)
Oct  1 09:07:57 np0005464214 ceph-mon[74447]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mon, pid 2
Oct  1 09:07:57 np0005464214 ceph-mon[74447]: pidfile_write: ignore empty --pid-file
Oct  1 09:07:57 np0005464214 ceph-mon[74447]: load: jerasure load: lrc 
Oct  1 09:07:57 np0005464214 ceph-mon[74447]: rocksdb: RocksDB version: 7.9.2
Oct  1 09:07:57 np0005464214 ceph-mon[74447]: rocksdb: Git sha 0
Oct  1 09:07:57 np0005464214 ceph-mon[74447]: rocksdb: Compile date 2025-05-06 23:30:25
Oct  1 09:07:57 np0005464214 ceph-mon[74447]: rocksdb: DB SUMMARY
Oct  1 09:07:57 np0005464214 ceph-mon[74447]: rocksdb: DB Session ID:  CA7YKDRE0VP79L6Q3AHS
Oct  1 09:07:57 np0005464214 ceph-mon[74447]: rocksdb: CURRENT file:  CURRENT
Oct  1 09:07:57 np0005464214 ceph-mon[74447]: rocksdb: IDENTITY file:  IDENTITY
Oct  1 09:07:57 np0005464214 ceph-mon[74447]: rocksdb: MANIFEST file:  MANIFEST-000005 size: 59 Bytes
Oct  1 09:07:57 np0005464214 ceph-mon[74447]: rocksdb: SST files in /var/lib/ceph/mon/ceph-compute-0/store.db dir, Total Num: 0, files: 
Oct  1 09:07:57 np0005464214 ceph-mon[74447]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-compute-0/store.db: 000004.log size: 807 ; 
Oct  1 09:07:57 np0005464214 ceph-mon[74447]: rocksdb:                         Options.error_if_exists: 0
Oct  1 09:07:57 np0005464214 ceph-mon[74447]: rocksdb:                       Options.create_if_missing: 0
Oct  1 09:07:57 np0005464214 ceph-mon[74447]: rocksdb:                         Options.paranoid_checks: 1
Oct  1 09:07:57 np0005464214 ceph-mon[74447]: rocksdb:             Options.flush_verify_memtable_count: 1
Oct  1 09:07:57 np0005464214 ceph-mon[74447]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Oct  1 09:07:57 np0005464214 ceph-mon[74447]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Oct  1 09:07:57 np0005464214 ceph-mon[74447]: rocksdb:                                     Options.env: 0x555a17577c40
Oct  1 09:07:57 np0005464214 ceph-mon[74447]: rocksdb:                                      Options.fs: PosixFileSystem
Oct  1 09:07:57 np0005464214 ceph-mon[74447]: rocksdb:                                Options.info_log: 0x555a189c2e80
Oct  1 09:07:57 np0005464214 ceph-mon[74447]: rocksdb:                Options.max_file_opening_threads: 16
Oct  1 09:07:57 np0005464214 ceph-mon[74447]: rocksdb:                              Options.statistics: (nil)
Oct  1 09:07:57 np0005464214 ceph-mon[74447]: rocksdb:                               Options.use_fsync: 0
Oct  1 09:07:57 np0005464214 ceph-mon[74447]: rocksdb:                       Options.max_log_file_size: 0
Oct  1 09:07:57 np0005464214 ceph-mon[74447]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Oct  1 09:07:57 np0005464214 ceph-mon[74447]: rocksdb:                   Options.log_file_time_to_roll: 0
Oct  1 09:07:57 np0005464214 ceph-mon[74447]: rocksdb:                       Options.keep_log_file_num: 1000
Oct  1 09:07:57 np0005464214 ceph-mon[74447]: rocksdb:                    Options.recycle_log_file_num: 0
Oct  1 09:07:57 np0005464214 ceph-mon[74447]: rocksdb:                         Options.allow_fallocate: 1
Oct  1 09:07:57 np0005464214 ceph-mon[74447]: rocksdb:                        Options.allow_mmap_reads: 0
Oct  1 09:07:57 np0005464214 ceph-mon[74447]: rocksdb:                       Options.allow_mmap_writes: 0
Oct  1 09:07:57 np0005464214 ceph-mon[74447]: rocksdb:                        Options.use_direct_reads: 0
Oct  1 09:07:57 np0005464214 ceph-mon[74447]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Oct  1 09:07:57 np0005464214 ceph-mon[74447]: rocksdb:          Options.create_missing_column_families: 0
Oct  1 09:07:57 np0005464214 ceph-mon[74447]: rocksdb:                              Options.db_log_dir: 
Oct  1 09:07:57 np0005464214 ceph-mon[74447]: rocksdb:                                 Options.wal_dir: 
Oct  1 09:07:57 np0005464214 ceph-mon[74447]: rocksdb:                Options.table_cache_numshardbits: 6
Oct  1 09:07:57 np0005464214 ceph-mon[74447]: rocksdb:                         Options.WAL_ttl_seconds: 0
Oct  1 09:07:57 np0005464214 ceph-mon[74447]: rocksdb:                       Options.WAL_size_limit_MB: 0
Oct  1 09:07:57 np0005464214 ceph-mon[74447]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Oct  1 09:07:57 np0005464214 ceph-mon[74447]: rocksdb:             Options.manifest_preallocation_size: 4194304
Oct  1 09:07:57 np0005464214 ceph-mon[74447]: rocksdb:                     Options.is_fd_close_on_exec: 1
Oct  1 09:07:57 np0005464214 ceph-mon[74447]: rocksdb:                   Options.advise_random_on_open: 1
Oct  1 09:07:57 np0005464214 ceph-mon[74447]: rocksdb:                    Options.db_write_buffer_size: 0
Oct  1 09:07:57 np0005464214 ceph-mon[74447]: rocksdb:                    Options.write_buffer_manager: 0x555a189d2b40
Oct  1 09:07:57 np0005464214 ceph-mon[74447]: rocksdb:         Options.access_hint_on_compaction_start: 1
Oct  1 09:07:57 np0005464214 ceph-mon[74447]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Oct  1 09:07:57 np0005464214 ceph-mon[74447]: rocksdb:                      Options.use_adaptive_mutex: 0
Oct  1 09:07:57 np0005464214 ceph-mon[74447]: rocksdb:                            Options.rate_limiter: (nil)
Oct  1 09:07:57 np0005464214 ceph-mon[74447]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Oct  1 09:07:57 np0005464214 ceph-mon[74447]: rocksdb:                       Options.wal_recovery_mode: 2
Oct  1 09:07:57 np0005464214 ceph-mon[74447]: rocksdb:                  Options.enable_thread_tracking: 0
Oct  1 09:07:57 np0005464214 ceph-mon[74447]: rocksdb:                  Options.enable_pipelined_write: 0
Oct  1 09:07:57 np0005464214 ceph-mon[74447]: rocksdb:                  Options.unordered_write: 0
Oct  1 09:07:57 np0005464214 ceph-mon[74447]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Oct  1 09:07:57 np0005464214 ceph-mon[74447]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Oct  1 09:07:57 np0005464214 ceph-mon[74447]: rocksdb:             Options.write_thread_max_yield_usec: 100
Oct  1 09:07:57 np0005464214 ceph-mon[74447]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Oct  1 09:07:57 np0005464214 ceph-mon[74447]: rocksdb:                               Options.row_cache: None
Oct  1 09:07:57 np0005464214 ceph-mon[74447]: rocksdb:                              Options.wal_filter: None
Oct  1 09:07:57 np0005464214 ceph-mon[74447]: rocksdb:             Options.avoid_flush_during_recovery: 0
Oct  1 09:07:57 np0005464214 ceph-mon[74447]: rocksdb:             Options.allow_ingest_behind: 0
Oct  1 09:07:57 np0005464214 ceph-mon[74447]: rocksdb:             Options.two_write_queues: 0
Oct  1 09:07:57 np0005464214 ceph-mon[74447]: rocksdb:             Options.manual_wal_flush: 0
Oct  1 09:07:57 np0005464214 ceph-mon[74447]: rocksdb:             Options.wal_compression: 0
Oct  1 09:07:57 np0005464214 ceph-mon[74447]: rocksdb:             Options.atomic_flush: 0
Oct  1 09:07:57 np0005464214 ceph-mon[74447]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Oct  1 09:07:57 np0005464214 ceph-mon[74447]: rocksdb:                 Options.persist_stats_to_disk: 0
Oct  1 09:07:57 np0005464214 ceph-mon[74447]: rocksdb:                 Options.write_dbid_to_manifest: 0
Oct  1 09:07:57 np0005464214 ceph-mon[74447]: rocksdb:                 Options.log_readahead_size: 0
Oct  1 09:07:57 np0005464214 ceph-mon[74447]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Oct  1 09:07:57 np0005464214 ceph-mon[74447]: rocksdb:                 Options.best_efforts_recovery: 0
Oct  1 09:07:57 np0005464214 ceph-mon[74447]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Oct  1 09:07:57 np0005464214 ceph-mon[74447]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Oct  1 09:07:57 np0005464214 ceph-mon[74447]: rocksdb:             Options.allow_data_in_errors: 0
Oct  1 09:07:57 np0005464214 ceph-mon[74447]: rocksdb:             Options.db_host_id: __hostname__
Oct  1 09:07:57 np0005464214 ceph-mon[74447]: rocksdb:             Options.enforce_single_del_contracts: true
Oct  1 09:07:57 np0005464214 ceph-mon[74447]: rocksdb:             Options.max_background_jobs: 2
Oct  1 09:07:57 np0005464214 ceph-mon[74447]: rocksdb:             Options.max_background_compactions: -1
Oct  1 09:07:57 np0005464214 ceph-mon[74447]: rocksdb:             Options.max_subcompactions: 1
Oct  1 09:07:57 np0005464214 ceph-mon[74447]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Oct  1 09:07:57 np0005464214 ceph-mon[74447]: rocksdb:           Options.writable_file_max_buffer_size: 1048576
Oct  1 09:07:57 np0005464214 ceph-mon[74447]: rocksdb:             Options.delayed_write_rate : 16777216
Oct  1 09:07:57 np0005464214 ceph-mon[74447]: rocksdb:             Options.max_total_wal_size: 0
Oct  1 09:07:57 np0005464214 ceph-mon[74447]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Oct  1 09:07:57 np0005464214 ceph-mon[74447]: rocksdb:                   Options.stats_dump_period_sec: 600
Oct  1 09:07:57 np0005464214 ceph-mon[74447]: rocksdb:                 Options.stats_persist_period_sec: 600
Oct  1 09:07:57 np0005464214 ceph-mon[74447]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Oct  1 09:07:57 np0005464214 ceph-mon[74447]: rocksdb:                          Options.max_open_files: -1
Oct  1 09:07:57 np0005464214 ceph-mon[74447]: rocksdb:                          Options.bytes_per_sync: 0
Oct  1 09:07:57 np0005464214 ceph-mon[74447]: rocksdb:                      Options.wal_bytes_per_sync: 0
Oct  1 09:07:57 np0005464214 ceph-mon[74447]: rocksdb:                   Options.strict_bytes_per_sync: 0
Oct  1 09:07:57 np0005464214 ceph-mon[74447]: rocksdb:       Options.compaction_readahead_size: 0
Oct  1 09:07:57 np0005464214 ceph-mon[74447]: rocksdb:                  Options.max_background_flushes: -1
Oct  1 09:07:57 np0005464214 ceph-mon[74447]: rocksdb: Compression algorithms supported:
Oct  1 09:07:57 np0005464214 ceph-mon[74447]: rocksdb: #011kZSTD supported: 0
Oct  1 09:07:57 np0005464214 ceph-mon[74447]: rocksdb: #011kXpressCompression supported: 0
Oct  1 09:07:57 np0005464214 ceph-mon[74447]: rocksdb: #011kBZip2Compression supported: 0
Oct  1 09:07:57 np0005464214 ceph-mon[74447]: rocksdb: #011kZSTDNotFinalCompression supported: 0
Oct  1 09:07:57 np0005464214 ceph-mon[74447]: rocksdb: #011kLZ4Compression supported: 1
Oct  1 09:07:57 np0005464214 ceph-mon[74447]: rocksdb: #011kZlibCompression supported: 1
Oct  1 09:07:57 np0005464214 ceph-mon[74447]: rocksdb: #011kLZ4HCCompression supported: 1
Oct  1 09:07:57 np0005464214 ceph-mon[74447]: rocksdb: #011kSnappyCompression supported: 1
Oct  1 09:07:57 np0005464214 ceph-mon[74447]: rocksdb: Fast CRC32 supported: Supported on x86
Oct  1 09:07:57 np0005464214 ceph-mon[74447]: rocksdb: DMutex implementation: pthread_mutex_t
Oct  1 09:07:57 np0005464214 ceph-mon[74447]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000005
Oct  1 09:07:57 np0005464214 ceph-mon[74447]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Oct  1 09:07:57 np0005464214 ceph-mon[74447]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  1 09:07:57 np0005464214 ceph-mon[74447]: rocksdb:           Options.merge_operator: 
Oct  1 09:07:57 np0005464214 ceph-mon[74447]: rocksdb:        Options.compaction_filter: None
Oct  1 09:07:57 np0005464214 ceph-mon[74447]: rocksdb:        Options.compaction_filter_factory: None
Oct  1 09:07:57 np0005464214 ceph-mon[74447]: rocksdb:  Options.sst_partitioner_factory: None
Oct  1 09:07:57 np0005464214 ceph-mon[74447]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  1 09:07:57 np0005464214 ceph-mon[74447]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  1 09:07:57 np0005464214 ceph-mon[74447]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x555a189c2a80)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x555a189bb1f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct  1 09:07:57 np0005464214 ceph-mon[74447]: rocksdb:        Options.write_buffer_size: 33554432
Oct  1 09:07:57 np0005464214 ceph-mon[74447]: rocksdb:  Options.max_write_buffer_number: 2
Oct  1 09:07:57 np0005464214 ceph-mon[74447]: rocksdb:          Options.compression: NoCompression
Oct  1 09:07:57 np0005464214 ceph-mon[74447]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  1 09:07:57 np0005464214 ceph-mon[74447]: rocksdb:       Options.prefix_extractor: nullptr
Oct  1 09:07:57 np0005464214 ceph-mon[74447]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  1 09:07:57 np0005464214 ceph-mon[74447]: rocksdb:             Options.num_levels: 7
Oct  1 09:07:57 np0005464214 ceph-mon[74447]: rocksdb:        Options.min_write_buffer_number_to_merge: 1
Oct  1 09:07:57 np0005464214 ceph-mon[74447]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  1 09:07:57 np0005464214 ceph-mon[74447]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  1 09:07:57 np0005464214 ceph-mon[74447]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  1 09:07:57 np0005464214 ceph-mon[74447]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  1 09:07:57 np0005464214 ceph-mon[74447]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  1 09:07:57 np0005464214 ceph-mon[74447]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  1 09:07:57 np0005464214 ceph-mon[74447]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  1 09:07:57 np0005464214 ceph-mon[74447]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  1 09:07:57 np0005464214 ceph-mon[74447]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  1 09:07:57 np0005464214 ceph-mon[74447]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  1 09:07:57 np0005464214 ceph-mon[74447]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  1 09:07:57 np0005464214 ceph-mon[74447]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  1 09:07:57 np0005464214 ceph-mon[74447]: rocksdb:                  Options.compression_opts.level: 32767
Oct  1 09:07:57 np0005464214 ceph-mon[74447]: rocksdb:               Options.compression_opts.strategy: 0
Oct  1 09:07:57 np0005464214 ceph-mon[74447]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  1 09:07:57 np0005464214 ceph-mon[74447]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  1 09:07:57 np0005464214 ceph-mon[74447]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  1 09:07:57 np0005464214 ceph-mon[74447]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  1 09:07:57 np0005464214 ceph-mon[74447]: rocksdb:                  Options.compression_opts.enabled: false
Oct  1 09:07:57 np0005464214 ceph-mon[74447]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  1 09:07:57 np0005464214 ceph-mon[74447]: rocksdb:      Options.level0_file_num_compaction_trigger: 4
Oct  1 09:07:57 np0005464214 ceph-mon[74447]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  1 09:07:57 np0005464214 ceph-mon[74447]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  1 09:07:57 np0005464214 ceph-mon[74447]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  1 09:07:57 np0005464214 ceph-mon[74447]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  1 09:07:57 np0005464214 ceph-mon[74447]: rocksdb:                Options.max_bytes_for_level_base: 268435456
Oct  1 09:07:57 np0005464214 ceph-mon[74447]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1
Oct  1 09:07:57 np0005464214 ceph-mon[74447]: rocksdb:          Options.max_bytes_for_level_multiplier: 10.000000
Oct  1 09:07:57 np0005464214 ceph-mon[74447]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  1 09:07:57 np0005464214 ceph-mon[74447]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  1 09:07:57 np0005464214 ceph-mon[74447]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  1 09:07:57 np0005464214 ceph-mon[74447]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  1 09:07:57 np0005464214 ceph-mon[74447]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  1 09:07:57 np0005464214 ceph-mon[74447]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  1 09:07:57 np0005464214 ceph-mon[74447]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  1 09:07:57 np0005464214 ceph-mon[74447]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  1 09:07:57 np0005464214 ceph-mon[74447]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  1 09:07:57 np0005464214 ceph-mon[74447]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  1 09:07:57 np0005464214 ceph-mon[74447]: rocksdb:                        Options.arena_block_size: 1048576
Oct  1 09:07:57 np0005464214 ceph-mon[74447]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  1 09:07:57 np0005464214 ceph-mon[74447]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  1 09:07:57 np0005464214 ceph-mon[74447]: rocksdb:                Options.disable_auto_compactions: 0
Oct  1 09:07:57 np0005464214 ceph-mon[74447]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  1 09:07:57 np0005464214 ceph-mon[74447]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  1 09:07:57 np0005464214 ceph-mon[74447]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  1 09:07:57 np0005464214 ceph-mon[74447]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  1 09:07:57 np0005464214 ceph-mon[74447]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  1 09:07:57 np0005464214 ceph-mon[74447]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  1 09:07:57 np0005464214 ceph-mon[74447]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  1 09:07:57 np0005464214 ceph-mon[74447]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  1 09:07:57 np0005464214 ceph-mon[74447]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  1 09:07:57 np0005464214 ceph-mon[74447]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  1 09:07:57 np0005464214 ceph-mon[74447]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  1 09:07:57 np0005464214 ceph-mon[74447]: rocksdb:                   Options.inplace_update_support: 0
Oct  1 09:07:57 np0005464214 ceph-mon[74447]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  1 09:07:57 np0005464214 ceph-mon[74447]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  1 09:07:57 np0005464214 ceph-mon[74447]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  1 09:07:57 np0005464214 ceph-mon[74447]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  1 09:07:57 np0005464214 ceph-mon[74447]: rocksdb:                           Options.bloom_locality: 0
Oct  1 09:07:57 np0005464214 ceph-mon[74447]: rocksdb:                    Options.max_successive_merges: 0
Oct  1 09:07:57 np0005464214 ceph-mon[74447]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  1 09:07:57 np0005464214 ceph-mon[74447]: rocksdb:                Options.paranoid_file_checks: 0
Oct  1 09:07:57 np0005464214 ceph-mon[74447]: rocksdb:                Options.force_consistency_checks: 1
Oct  1 09:07:57 np0005464214 ceph-mon[74447]: rocksdb:                Options.report_bg_io_stats: 0
Oct  1 09:07:57 np0005464214 ceph-mon[74447]: rocksdb:                               Options.ttl: 2592000
Oct  1 09:07:57 np0005464214 ceph-mon[74447]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  1 09:07:57 np0005464214 ceph-mon[74447]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  1 09:07:57 np0005464214 ceph-mon[74447]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  1 09:07:57 np0005464214 ceph-mon[74447]: rocksdb:                       Options.enable_blob_files: false
Oct  1 09:07:57 np0005464214 ceph-mon[74447]: rocksdb:                           Options.min_blob_size: 0
Oct  1 09:07:57 np0005464214 ceph-mon[74447]: rocksdb:                          Options.blob_file_size: 268435456
Oct  1 09:07:57 np0005464214 ceph-mon[74447]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  1 09:07:57 np0005464214 ceph-mon[74447]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  1 09:07:57 np0005464214 ceph-mon[74447]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  1 09:07:57 np0005464214 ceph-mon[74447]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  1 09:07:57 np0005464214 ceph-mon[74447]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  1 09:07:57 np0005464214 ceph-mon[74447]: rocksdb:                Options.blob_file_starting_level: 0
Oct  1 09:07:57 np0005464214 ceph-mon[74447]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  1 09:07:57 np0005464214 ceph-mon[74447]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000005 succeeded,manifest_file_number is 5, next_file_number is 7, last_sequence is 0, log_number is 0,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 0
Oct  1 09:07:57 np0005464214 ceph-mon[74447]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 0
Oct  1 09:07:57 np0005464214 ceph-mon[74447]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: e1fb0bd7-c6bf-40f2-8bfb-ffd039289f42
Oct  1 09:07:57 np0005464214 ceph-mon[74447]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759324077272303, "job": 1, "event": "recovery_started", "wal_files": [4]}
Oct  1 09:07:57 np0005464214 ceph-mon[74447]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #4 mode 2
Oct  1 09:07:57 np0005464214 ceph-mon[74447]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759324077273977, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 8, "file_size": 1944, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 1, "largest_seqno": 5, "table_properties": {"data_size": 819, "index_size": 31, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 115, "raw_average_key_size": 23, "raw_value_size": 696, "raw_average_value_size": 139, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759324077, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e1fb0bd7-c6bf-40f2-8bfb-ffd039289f42", "db_session_id": "CA7YKDRE0VP79L6Q3AHS", "orig_file_number": 8, "seqno_to_time_mapping": "N/A"}}
Oct  1 09:07:57 np0005464214 ceph-mon[74447]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759324077274109, "job": 1, "event": "recovery_finished"}
Oct  1 09:07:57 np0005464214 ceph-mon[74447]: rocksdb: [db/version_set.cc:5047] Creating manifest 10
Oct  1 09:07:57 np0005464214 ceph-mon[74447]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000004.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  1 09:07:57 np0005464214 ceph-mon[74447]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x555a189e4e00
Oct  1 09:07:57 np0005464214 ceph-mon[74447]: rocksdb: DB pointer 0x555a18a6e000
Oct  1 09:07:57 np0005464214 ceph-mon[74447]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  1 09:07:57 np0005464214 ceph-mon[74447]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 0.0 total, 0.0 interval#012Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s#012Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      1/0    1.90 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      1.1      0.00              0.00         1    0.002       0      0       0.0       0.0#012 Sum      1/0    1.90 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      1.1      0.00              0.00         1    0.002       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      1.1      0.00              0.00         1    0.002       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      1.1      0.00              0.00         1    0.002       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.0 total, 0.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.13 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.13 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x555a189bb1f0#2 capacity: 512.00 MB usage: 1.17 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 3e-05 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.11 KB,2.08616e-05%) Misc(2,0.95 KB,0.000181794%)#012#012** File Read Latency Histogram By Level [default] **
Oct  1 09:07:57 np0005464214 ceph-mon[74447]: starting mon.compute-0 rank 0 at public addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] at bind addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon_data /var/lib/ceph/mon/ceph-compute-0 fsid eb4b6ead-01d1-53b3-a52a-47dcc600555f
Oct  1 09:07:57 np0005464214 ceph-mon[74447]: mon.compute-0@-1(???) e0 preinit fsid eb4b6ead-01d1-53b3-a52a-47dcc600555f
Oct  1 09:07:57 np0005464214 ceph-mon[74447]: mon.compute-0@-1(probing) e0  my rank is now 0 (was -1)
Oct  1 09:07:57 np0005464214 ceph-mon[74447]: mon.compute-0@0(probing) e0 win_standalone_election
Oct  1 09:07:57 np0005464214 ceph-mon[74447]: paxos.0).electionLogic(0) init, first boot, initializing epoch at 1 
Oct  1 09:07:57 np0005464214 ceph-mon[74447]: mon.compute-0@0(electing) e0 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Oct  1 09:07:57 np0005464214 ceph-mon[74447]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Oct  1 09:07:57 np0005464214 ceph-mon[74447]: mon.compute-0@0(leader).osd e0 create_pending setting backfillfull_ratio = 0.9
Oct  1 09:07:57 np0005464214 ceph-mon[74447]: mon.compute-0@0(leader).osd e0 create_pending setting full_ratio = 0.95
Oct  1 09:07:57 np0005464214 ceph-mon[74447]: mon.compute-0@0(leader).osd e0 create_pending setting nearfull_ratio = 0.85
Oct  1 09:07:57 np0005464214 ceph-mon[74447]: mon.compute-0@0(leader).osd e0 do_prune osdmap full prune enabled
Oct  1 09:07:57 np0005464214 ceph-mon[74447]: mon.compute-0@0(leader).osd e0 encode_pending skipping prime_pg_temp; mapping job did not start
Oct  1 09:07:57 np0005464214 ceph-mon[74447]: mon.compute-0@0(leader) e0 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={4=support erasure code pools,5=new-style osdmap encoding,6=support isa/lrc erasure code,7=support shec erasure code}
Oct  1 09:07:57 np0005464214 podman[74448]: 2025-10-01 13:07:57.302641988 +0000 UTC m=+0.056015818 container create ce32dc493adb23f1e3969acc90148188356ed703ada0adada5b0760dc8da6979 (image=quay.io/ceph/ceph:v18, name=lucid_ardinghelli, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 09:07:57 np0005464214 ceph-mon[74447]: mon.compute-0@0(leader).paxosservice(auth 0..0) refresh upgraded, format 3 -> 0
Oct  1 09:07:57 np0005464214 ceph-mon[74447]: mon.compute-0@0(probing) e1 win_standalone_election
Oct  1 09:07:57 np0005464214 ceph-mon[74447]: paxos.0).electionLogic(2) init, last seen epoch 2
Oct  1 09:07:57 np0005464214 ceph-mon[74447]: mon.compute-0@0(electing) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Oct  1 09:07:57 np0005464214 ceph-mon[74447]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Oct  1 09:07:57 np0005464214 ceph-mon[74447]: log_channel(cluster) log [DBG] : monmap e1: 1 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0]} removed_ranks: {} disallowed_leaders: {}
Oct  1 09:07:57 np0005464214 ceph-mon[74447]: mon.compute-0@0(leader) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Oct  1 09:07:57 np0005464214 ceph-mon[74447]: mgrc update_daemon_metadata mon.compute-0 metadata {addrs=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],arch=x86_64,ceph_release=reef,ceph_version=ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable),ceph_version_short=18.2.7,ceph_version_when_created=ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable),compression_algorithms=none, snappy, zlib, zstd, lz4,container_hostname=compute-0,container_image=quay.io/ceph/ceph:v18,cpu=AMD EPYC-Rome Processor,created_at=2025-10-01T13:07:55.606214Z,device_ids=,device_paths=vda=/dev/disk/by-path/pci-0000:00:04.0,devices=vda,distro=centos,distro_description=CentOS Stream 9,distro_version=9,hostname=compute-0,kernel_description=#1 SMP PREEMPT_DYNAMIC Mon Sep 15 21:46:13 UTC 2025,kernel_version=5.14.0-617.el9.x86_64,mem_swap_kb=1048572,mem_total_kb=7864104,os=Linux}
Oct  1 09:07:57 np0005464214 ceph-mon[74447]: mon.compute-0@0(leader).osd e0 create_pending setting backfillfull_ratio = 0.9
Oct  1 09:07:57 np0005464214 ceph-mon[74447]: mon.compute-0@0(leader).osd e0 create_pending setting full_ratio = 0.95
Oct  1 09:07:57 np0005464214 ceph-mon[74447]: mon.compute-0@0(leader).osd e0 create_pending setting nearfull_ratio = 0.85
Oct  1 09:07:57 np0005464214 ceph-mon[74447]: mon.compute-0@0(leader).osd e0 do_prune osdmap full prune enabled
Oct  1 09:07:57 np0005464214 ceph-mon[74447]: mon.compute-0@0(leader).osd e0 encode_pending skipping prime_pg_temp; mapping job did not start
Oct  1 09:07:57 np0005464214 ceph-mon[74447]: mon.compute-0@0(leader) e1 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={8=support monmap features,9=luminous ondisk layout,10=mimic ondisk layout,11=nautilus ondisk layout,12=octopus ondisk layout,13=pacific ondisk layout,14=quincy ondisk layout,15=reef ondisk layout}
Oct  1 09:07:57 np0005464214 ceph-mon[74447]: mon.compute-0@0(leader).mds e1 new map
Oct  1 09:07:57 np0005464214 ceph-mon[74447]: mon.compute-0@0(leader).mds e1 print_map#012e1#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}#012legacy client fscid: -1#012 #012No filesystems configured
Oct  1 09:07:57 np0005464214 ceph-mon[74447]: mon.compute-0@0(leader).paxosservice(auth 0..0) refresh upgraded, format 3 -> 0
Oct  1 09:07:57 np0005464214 ceph-mon[74447]: log_channel(cluster) log [DBG] : fsmap 
Oct  1 09:07:57 np0005464214 ceph-mon[74447]: mon.compute-0@0(leader).osd e0 _set_cache_ratios kv ratio 0.25 inc ratio 0.375 full ratio 0.375
Oct  1 09:07:57 np0005464214 ceph-mon[74447]: mon.compute-0@0(leader).osd e0 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1
Oct  1 09:07:57 np0005464214 ceph-mon[74447]: mon.compute-0@0(leader).osd e1 e1: 0 total, 0 up, 0 in
Oct  1 09:07:57 np0005464214 ceph-mon[74447]: mon.compute-0@0(leader).osd e1 crush map has features 3314932999778484224, adjusting msgr requires
Oct  1 09:07:57 np0005464214 ceph-mon[74447]: mkfs eb4b6ead-01d1-53b3-a52a-47dcc600555f
Oct  1 09:07:57 np0005464214 ceph-mon[74447]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Oct  1 09:07:57 np0005464214 ceph-mon[74447]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Oct  1 09:07:57 np0005464214 ceph-mon[74447]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Oct  1 09:07:57 np0005464214 ceph-mon[74447]: mon.compute-0@0(leader).paxosservice(auth 1..1) refresh upgraded, format 0 -> 3
Oct  1 09:07:57 np0005464214 ceph-mon[74447]: log_channel(cluster) log [DBG] : osdmap e1: 0 total, 0 up, 0 in
Oct  1 09:07:57 np0005464214 ceph-mon[74447]: log_channel(cluster) log [DBG] : mgrmap e1: no daemons active
Oct  1 09:07:57 np0005464214 ceph-mon[74447]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Oct  1 09:07:57 np0005464214 systemd[1]: Started libpod-conmon-ce32dc493adb23f1e3969acc90148188356ed703ada0adada5b0760dc8da6979.scope.
Oct  1 09:07:57 np0005464214 podman[74448]: 2025-10-01 13:07:57.279126442 +0000 UTC m=+0.032500352 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  1 09:07:57 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:07:57 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0de08cd6a6b388f86b0540f71bd401b67f4c91ce29ed0ff6bae3855b9fb6596d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 09:07:57 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0de08cd6a6b388f86b0540f71bd401b67f4c91ce29ed0ff6bae3855b9fb6596d/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct  1 09:07:57 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0de08cd6a6b388f86b0540f71bd401b67f4c91ce29ed0ff6bae3855b9fb6596d/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Oct  1 09:07:57 np0005464214 podman[74448]: 2025-10-01 13:07:57.411320487 +0000 UTC m=+0.164694367 container init ce32dc493adb23f1e3969acc90148188356ed703ada0adada5b0760dc8da6979 (image=quay.io/ceph/ceph:v18, name=lucid_ardinghelli, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 09:07:57 np0005464214 podman[74448]: 2025-10-01 13:07:57.417543814 +0000 UTC m=+0.170917664 container start ce32dc493adb23f1e3969acc90148188356ed703ada0adada5b0760dc8da6979 (image=quay.io/ceph/ceph:v18, name=lucid_ardinghelli, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 09:07:57 np0005464214 podman[74448]: 2025-10-01 13:07:57.420799637 +0000 UTC m=+0.174173517 container attach ce32dc493adb23f1e3969acc90148188356ed703ada0adada5b0760dc8da6979 (image=quay.io/ceph/ceph:v18, name=lucid_ardinghelli, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Oct  1 09:07:57 np0005464214 ceph-mon[74447]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0) v1
Oct  1 09:07:57 np0005464214 ceph-mon[74447]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2846297465' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Oct  1 09:07:57 np0005464214 lucid_ardinghelli[74503]:  cluster:
Oct  1 09:07:57 np0005464214 lucid_ardinghelli[74503]:    id:     eb4b6ead-01d1-53b3-a52a-47dcc600555f
Oct  1 09:07:57 np0005464214 lucid_ardinghelli[74503]:    health: HEALTH_OK
Oct  1 09:07:57 np0005464214 lucid_ardinghelli[74503]: 
Oct  1 09:07:57 np0005464214 lucid_ardinghelli[74503]:  services:
Oct  1 09:07:57 np0005464214 lucid_ardinghelli[74503]:    mon: 1 daemons, quorum compute-0 (age 0.477885s)
Oct  1 09:07:57 np0005464214 lucid_ardinghelli[74503]:    mgr: no daemons active
Oct  1 09:07:57 np0005464214 lucid_ardinghelli[74503]:    osd: 0 osds: 0 up, 0 in
Oct  1 09:07:57 np0005464214 lucid_ardinghelli[74503]: 
Oct  1 09:07:57 np0005464214 lucid_ardinghelli[74503]:  data:
Oct  1 09:07:57 np0005464214 lucid_ardinghelli[74503]:    pools:   0 pools, 0 pgs
Oct  1 09:07:57 np0005464214 lucid_ardinghelli[74503]:    objects: 0 objects, 0 B
Oct  1 09:07:57 np0005464214 lucid_ardinghelli[74503]:    usage:   0 B used, 0 B / 0 B avail
Oct  1 09:07:57 np0005464214 lucid_ardinghelli[74503]:    pgs:     
Oct  1 09:07:57 np0005464214 lucid_ardinghelli[74503]: 
Oct  1 09:07:57 np0005464214 systemd[1]: libpod-ce32dc493adb23f1e3969acc90148188356ed703ada0adada5b0760dc8da6979.scope: Deactivated successfully.
Oct  1 09:07:57 np0005464214 podman[74448]: 2025-10-01 13:07:57.806916548 +0000 UTC m=+0.560290408 container died ce32dc493adb23f1e3969acc90148188356ed703ada0adada5b0760dc8da6979 (image=quay.io/ceph/ceph:v18, name=lucid_ardinghelli, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS)
Oct  1 09:07:57 np0005464214 podman[74448]: 2025-10-01 13:07:57.847997052 +0000 UTC m=+0.601370882 container remove ce32dc493adb23f1e3969acc90148188356ed703ada0adada5b0760dc8da6979 (image=quay.io/ceph/ceph:v18, name=lucid_ardinghelli, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Oct  1 09:07:57 np0005464214 systemd[1]: libpod-conmon-ce32dc493adb23f1e3969acc90148188356ed703ada0adada5b0760dc8da6979.scope: Deactivated successfully.
Oct  1 09:07:57 np0005464214 podman[74540]: 2025-10-01 13:07:57.944144502 +0000 UTC m=+0.066442419 container create 88c80767a748ac431297c46d76decc1982010521295a24ed8fb2c37fd99603f3 (image=quay.io/ceph/ceph:v18, name=focused_ishizaka, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 09:07:57 np0005464214 systemd[1]: Started libpod-conmon-88c80767a748ac431297c46d76decc1982010521295a24ed8fb2c37fd99603f3.scope.
Oct  1 09:07:58 np0005464214 podman[74540]: 2025-10-01 13:07:57.908197031 +0000 UTC m=+0.030494999 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  1 09:07:58 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:07:58 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/97ee78925199ea3ada65a08da2fddabff236af049fea2ee51e67c918eb0549fd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 09:07:58 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/97ee78925199ea3ada65a08da2fddabff236af049fea2ee51e67c918eb0549fd/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct  1 09:07:58 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/97ee78925199ea3ada65a08da2fddabff236af049fea2ee51e67c918eb0549fd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 09:07:58 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/97ee78925199ea3ada65a08da2fddabff236af049fea2ee51e67c918eb0549fd/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Oct  1 09:07:58 np0005464214 podman[74540]: 2025-10-01 13:07:58.026542527 +0000 UTC m=+0.148840424 container init 88c80767a748ac431297c46d76decc1982010521295a24ed8fb2c37fd99603f3 (image=quay.io/ceph/ceph:v18, name=focused_ishizaka, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Oct  1 09:07:58 np0005464214 podman[74540]: 2025-10-01 13:07:58.037510255 +0000 UTC m=+0.159808122 container start 88c80767a748ac431297c46d76decc1982010521295a24ed8fb2c37fd99603f3 (image=quay.io/ceph/ceph:v18, name=focused_ishizaka, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Oct  1 09:07:58 np0005464214 podman[74540]: 2025-10-01 13:07:58.040692966 +0000 UTC m=+0.162990873 container attach 88c80767a748ac431297c46d76decc1982010521295a24ed8fb2c37fd99603f3 (image=quay.io/ceph/ceph:v18, name=focused_ishizaka, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct  1 09:07:58 np0005464214 ceph-mon[74447]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Oct  1 09:07:58 np0005464214 ceph-mon[74447]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config assimilate-conf"} v 0) v1
Oct  1 09:07:58 np0005464214 ceph-mon[74447]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/772669610' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Oct  1 09:07:58 np0005464214 ceph-mon[74447]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/772669610' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Oct  1 09:07:58 np0005464214 focused_ishizaka[74556]: 
Oct  1 09:07:58 np0005464214 focused_ishizaka[74556]: [global]
Oct  1 09:07:58 np0005464214 focused_ishizaka[74556]: #011fsid = eb4b6ead-01d1-53b3-a52a-47dcc600555f
Oct  1 09:07:58 np0005464214 focused_ishizaka[74556]: #011mon_host = [v2:192.168.122.100:3300,v1:192.168.122.100:6789]
Oct  1 09:07:58 np0005464214 focused_ishizaka[74556]: #011osd_crush_chooseleaf_type = 0
Oct  1 09:07:58 np0005464214 systemd[1]: libpod-88c80767a748ac431297c46d76decc1982010521295a24ed8fb2c37fd99603f3.scope: Deactivated successfully.
Oct  1 09:07:58 np0005464214 podman[74540]: 2025-10-01 13:07:58.420912419 +0000 UTC m=+0.543210306 container died 88c80767a748ac431297c46d76decc1982010521295a24ed8fb2c37fd99603f3 (image=quay.io/ceph/ceph:v18, name=focused_ishizaka, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507)
Oct  1 09:07:58 np0005464214 systemd[1]: var-lib-containers-storage-overlay-97ee78925199ea3ada65a08da2fddabff236af049fea2ee51e67c918eb0549fd-merged.mount: Deactivated successfully.
Oct  1 09:07:58 np0005464214 podman[74540]: 2025-10-01 13:07:58.45970411 +0000 UTC m=+0.582001997 container remove 88c80767a748ac431297c46d76decc1982010521295a24ed8fb2c37fd99603f3 (image=quay.io/ceph/ceph:v18, name=focused_ishizaka, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 09:07:58 np0005464214 systemd[1]: libpod-conmon-88c80767a748ac431297c46d76decc1982010521295a24ed8fb2c37fd99603f3.scope: Deactivated successfully.
Oct  1 09:07:58 np0005464214 podman[74594]: 2025-10-01 13:07:58.530350482 +0000 UTC m=+0.049771900 container create 9d4bd0cb2a381d32051e4256ca6117a79627135f0e105a3a5e2acf9b56a3d5e8 (image=quay.io/ceph/ceph:v18, name=determined_albattani, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Oct  1 09:07:58 np0005464214 systemd[1]: Started libpod-conmon-9d4bd0cb2a381d32051e4256ca6117a79627135f0e105a3a5e2acf9b56a3d5e8.scope.
Oct  1 09:07:58 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:07:58 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/741ff91c042e7b2ca5432006091ef442c9a7ba74870882062fd26864badcfb2d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 09:07:58 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/741ff91c042e7b2ca5432006091ef442c9a7ba74870882062fd26864badcfb2d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 09:07:58 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/741ff91c042e7b2ca5432006091ef442c9a7ba74870882062fd26864badcfb2d/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct  1 09:07:58 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/741ff91c042e7b2ca5432006091ef442c9a7ba74870882062fd26864badcfb2d/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Oct  1 09:07:58 np0005464214 podman[74594]: 2025-10-01 13:07:58.502846729 +0000 UTC m=+0.022268197 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  1 09:07:58 np0005464214 podman[74594]: 2025-10-01 13:07:58.605712803 +0000 UTC m=+0.125134201 container init 9d4bd0cb2a381d32051e4256ca6117a79627135f0e105a3a5e2acf9b56a3d5e8 (image=quay.io/ceph/ceph:v18, name=determined_albattani, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct  1 09:07:58 np0005464214 podman[74594]: 2025-10-01 13:07:58.614685137 +0000 UTC m=+0.134106525 container start 9d4bd0cb2a381d32051e4256ca6117a79627135f0e105a3a5e2acf9b56a3d5e8 (image=quay.io/ceph/ceph:v18, name=determined_albattani, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Oct  1 09:07:58 np0005464214 podman[74594]: 2025-10-01 13:07:58.617499197 +0000 UTC m=+0.136920585 container attach 9d4bd0cb2a381d32051e4256ca6117a79627135f0e105a3a5e2acf9b56a3d5e8 (image=quay.io/ceph/ceph:v18, name=determined_albattani, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 09:07:59 np0005464214 ceph-mon[74447]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  1 09:07:59 np0005464214 ceph-mon[74447]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/416105299' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  1 09:07:59 np0005464214 systemd[1]: libpod-9d4bd0cb2a381d32051e4256ca6117a79627135f0e105a3a5e2acf9b56a3d5e8.scope: Deactivated successfully.
Oct  1 09:07:59 np0005464214 podman[74594]: 2025-10-01 13:07:59.040453146 +0000 UTC m=+0.559874574 container died 9d4bd0cb2a381d32051e4256ca6117a79627135f0e105a3a5e2acf9b56a3d5e8 (image=quay.io/ceph/ceph:v18, name=determined_albattani, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 09:07:59 np0005464214 systemd[1]: var-lib-containers-storage-overlay-741ff91c042e7b2ca5432006091ef442c9a7ba74870882062fd26864badcfb2d-merged.mount: Deactivated successfully.
Oct  1 09:07:59 np0005464214 podman[74594]: 2025-10-01 13:07:59.083850893 +0000 UTC m=+0.603272281 container remove 9d4bd0cb2a381d32051e4256ca6117a79627135f0e105a3a5e2acf9b56a3d5e8 (image=quay.io/ceph/ceph:v18, name=determined_albattani, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct  1 09:07:59 np0005464214 systemd[1]: libpod-conmon-9d4bd0cb2a381d32051e4256ca6117a79627135f0e105a3a5e2acf9b56a3d5e8.scope: Deactivated successfully.
Oct  1 09:07:59 np0005464214 systemd[1]: Stopping Ceph mon.compute-0 for eb4b6ead-01d1-53b3-a52a-47dcc600555f...
Oct  1 09:07:59 np0005464214 ceph-mon[74447]: from='client.? 192.168.122.100:0/772669610' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Oct  1 09:07:59 np0005464214 ceph-mon[74447]: from='client.? 192.168.122.100:0/772669610' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Oct  1 09:07:59 np0005464214 ceph-mon[74447]: received  signal: Terminated from /run/podman-init -- /usr/bin/ceph-mon -n mon.compute-0 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-journald=true --default-mon-cluster-log-to-stderr=false  (PID: 1) UID: 0
Oct  1 09:07:59 np0005464214 ceph-mon[74447]: mon.compute-0@0(leader) e1 *** Got Signal Terminated ***
Oct  1 09:07:59 np0005464214 ceph-mon[74447]: mon.compute-0@0(leader) e1 shutdown
Oct  1 09:07:59 np0005464214 ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-mon-compute-0[74443]: 2025-10-01T13:07:59.539+0000 7f3a242dc640 -1 received  signal: Terminated from /run/podman-init -- /usr/bin/ceph-mon -n mon.compute-0 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-journald=true --default-mon-cluster-log-to-stderr=false  (PID: 1) UID: 0
Oct  1 09:07:59 np0005464214 ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-mon-compute-0[74443]: 2025-10-01T13:07:59.539+0000 7f3a242dc640 -1 mon.compute-0@0(leader) e1 *** Got Signal Terminated ***
Oct  1 09:07:59 np0005464214 ceph-mon[74447]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Oct  1 09:07:59 np0005464214 ceph-mon[74447]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Oct  1 09:07:59 np0005464214 podman[74679]: 2025-10-01 13:07:59.641370373 +0000 UTC m=+0.365782297 container died c0f6eefbaf6d3e14c44a3af400bdf6a821a561d9919fa9a360c56f9ade45b008 (image=quay.io/ceph/ceph:v18, name=ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  1 09:07:59 np0005464214 systemd[1]: var-lib-containers-storage-overlay-c234bc0c11c5faf1ddbed49676c1825f724dde31e435508583b482730a5ba3d6-merged.mount: Deactivated successfully.
Oct  1 09:07:59 np0005464214 podman[74679]: 2025-10-01 13:07:59.747480289 +0000 UTC m=+0.471892233 container remove c0f6eefbaf6d3e14c44a3af400bdf6a821a561d9919fa9a360c56f9ade45b008 (image=quay.io/ceph/ceph:v18, name=ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-mon-compute-0, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 09:07:59 np0005464214 bash[74679]: ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-mon-compute-0
Oct  1 09:07:59 np0005464214 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct  1 09:07:59 np0005464214 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct  1 09:07:59 np0005464214 systemd[1]: ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f@mon.compute-0.service: Deactivated successfully.
Oct  1 09:07:59 np0005464214 systemd[1]: Stopped Ceph mon.compute-0 for eb4b6ead-01d1-53b3-a52a-47dcc600555f.
Oct  1 09:07:59 np0005464214 systemd[1]: ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f@mon.compute-0.service: Consumed 1.012s CPU time.
Oct  1 09:07:59 np0005464214 systemd[1]: Starting Ceph mon.compute-0 for eb4b6ead-01d1-53b3-a52a-47dcc600555f...
Oct  1 09:08:00 np0005464214 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct  1 09:08:00 np0005464214 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct  1 09:08:00 np0005464214 podman[74782]: 2025-10-01 13:08:00.175012593 +0000 UTC m=+0.047087985 container create dfadbb96d7d51fa7c2ed721145616ced603621ae12bae5125feaf16532ef7320 (image=quay.io/ceph/ceph:v18, name=ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-mon-compute-0, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct  1 09:08:00 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9bc7f86342d976758df2b1298b55e54b95dbf922a72e6063d13a6c43e749dc6b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 09:08:00 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9bc7f86342d976758df2b1298b55e54b95dbf922a72e6063d13a6c43e749dc6b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 09:08:00 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9bc7f86342d976758df2b1298b55e54b95dbf922a72e6063d13a6c43e749dc6b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 09:08:00 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9bc7f86342d976758df2b1298b55e54b95dbf922a72e6063d13a6c43e749dc6b/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Oct  1 09:08:00 np0005464214 podman[74782]: 2025-10-01 13:08:00.148489992 +0000 UTC m=+0.020565374 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  1 09:08:00 np0005464214 podman[74782]: 2025-10-01 13:08:00.255852729 +0000 UTC m=+0.127928131 container init dfadbb96d7d51fa7c2ed721145616ced603621ae12bae5125feaf16532ef7320 (image=quay.io/ceph/ceph:v18, name=ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-mon-compute-0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 09:08:00 np0005464214 podman[74782]: 2025-10-01 13:08:00.260840097 +0000 UTC m=+0.132915469 container start dfadbb96d7d51fa7c2ed721145616ced603621ae12bae5125feaf16532ef7320 (image=quay.io/ceph/ceph:v18, name=ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-mon-compute-0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct  1 09:08:00 np0005464214 bash[74782]: dfadbb96d7d51fa7c2ed721145616ced603621ae12bae5125feaf16532ef7320
Oct  1 09:08:00 np0005464214 systemd[1]: Started Ceph mon.compute-0 for eb4b6ead-01d1-53b3-a52a-47dcc600555f.
Oct  1 09:08:00 np0005464214 ceph-mon[74802]: set uid:gid to 167:167 (ceph:ceph)
Oct  1 09:08:00 np0005464214 ceph-mon[74802]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mon, pid 2
Oct  1 09:08:00 np0005464214 ceph-mon[74802]: pidfile_write: ignore empty --pid-file
Oct  1 09:08:00 np0005464214 ceph-mon[74802]: load: jerasure load: lrc 
Oct  1 09:08:00 np0005464214 ceph-mon[74802]: rocksdb: RocksDB version: 7.9.2
Oct  1 09:08:00 np0005464214 ceph-mon[74802]: rocksdb: Git sha 0
Oct  1 09:08:00 np0005464214 ceph-mon[74802]: rocksdb: Compile date 2025-05-06 23:30:25
Oct  1 09:08:00 np0005464214 ceph-mon[74802]: rocksdb: DB SUMMARY
Oct  1 09:08:00 np0005464214 ceph-mon[74802]: rocksdb: DB Session ID:  NJZTWL88H5HSB4Q4NEC9
Oct  1 09:08:00 np0005464214 ceph-mon[74802]: rocksdb: CURRENT file:  CURRENT
Oct  1 09:08:00 np0005464214 ceph-mon[74802]: rocksdb: IDENTITY file:  IDENTITY
Oct  1 09:08:00 np0005464214 ceph-mon[74802]: rocksdb: MANIFEST file:  MANIFEST-000010 size: 179 Bytes
Oct  1 09:08:00 np0005464214 ceph-mon[74802]: rocksdb: SST files in /var/lib/ceph/mon/ceph-compute-0/store.db dir, Total Num: 1, files: 000008.sst 
Oct  1 09:08:00 np0005464214 ceph-mon[74802]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-compute-0/store.db: 000009.log size: 55668 ; 
Oct  1 09:08:00 np0005464214 ceph-mon[74802]: rocksdb:                         Options.error_if_exists: 0
Oct  1 09:08:00 np0005464214 ceph-mon[74802]: rocksdb:                       Options.create_if_missing: 0
Oct  1 09:08:00 np0005464214 ceph-mon[74802]: rocksdb:                         Options.paranoid_checks: 1
Oct  1 09:08:00 np0005464214 ceph-mon[74802]: rocksdb:             Options.flush_verify_memtable_count: 1
Oct  1 09:08:00 np0005464214 ceph-mon[74802]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Oct  1 09:08:00 np0005464214 ceph-mon[74802]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Oct  1 09:08:00 np0005464214 ceph-mon[74802]: rocksdb:                                     Options.env: 0x55daa30d0c40
Oct  1 09:08:00 np0005464214 ceph-mon[74802]: rocksdb:                                      Options.fs: PosixFileSystem
Oct  1 09:08:00 np0005464214 ceph-mon[74802]: rocksdb:                                Options.info_log: 0x55daa554b040
Oct  1 09:08:00 np0005464214 ceph-mon[74802]: rocksdb:                Options.max_file_opening_threads: 16
Oct  1 09:08:00 np0005464214 ceph-mon[74802]: rocksdb:                              Options.statistics: (nil)
Oct  1 09:08:00 np0005464214 ceph-mon[74802]: rocksdb:                               Options.use_fsync: 0
Oct  1 09:08:00 np0005464214 ceph-mon[74802]: rocksdb:                       Options.max_log_file_size: 0
Oct  1 09:08:00 np0005464214 ceph-mon[74802]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Oct  1 09:08:00 np0005464214 ceph-mon[74802]: rocksdb:                   Options.log_file_time_to_roll: 0
Oct  1 09:08:00 np0005464214 ceph-mon[74802]: rocksdb:                       Options.keep_log_file_num: 1000
Oct  1 09:08:00 np0005464214 ceph-mon[74802]: rocksdb:                    Options.recycle_log_file_num: 0
Oct  1 09:08:00 np0005464214 ceph-mon[74802]: rocksdb:                         Options.allow_fallocate: 1
Oct  1 09:08:00 np0005464214 ceph-mon[74802]: rocksdb:                        Options.allow_mmap_reads: 0
Oct  1 09:08:00 np0005464214 ceph-mon[74802]: rocksdb:                       Options.allow_mmap_writes: 0
Oct  1 09:08:00 np0005464214 ceph-mon[74802]: rocksdb:                        Options.use_direct_reads: 0
Oct  1 09:08:00 np0005464214 ceph-mon[74802]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Oct  1 09:08:00 np0005464214 ceph-mon[74802]: rocksdb:          Options.create_missing_column_families: 0
Oct  1 09:08:00 np0005464214 ceph-mon[74802]: rocksdb:                              Options.db_log_dir: 
Oct  1 09:08:00 np0005464214 ceph-mon[74802]: rocksdb:                                 Options.wal_dir: 
Oct  1 09:08:00 np0005464214 ceph-mon[74802]: rocksdb:                Options.table_cache_numshardbits: 6
Oct  1 09:08:00 np0005464214 ceph-mon[74802]: rocksdb:                         Options.WAL_ttl_seconds: 0
Oct  1 09:08:00 np0005464214 ceph-mon[74802]: rocksdb:                       Options.WAL_size_limit_MB: 0
Oct  1 09:08:00 np0005464214 ceph-mon[74802]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Oct  1 09:08:00 np0005464214 ceph-mon[74802]: rocksdb:             Options.manifest_preallocation_size: 4194304
Oct  1 09:08:00 np0005464214 ceph-mon[74802]: rocksdb:                     Options.is_fd_close_on_exec: 1
Oct  1 09:08:00 np0005464214 ceph-mon[74802]: rocksdb:                   Options.advise_random_on_open: 1
Oct  1 09:08:00 np0005464214 ceph-mon[74802]: rocksdb:                    Options.db_write_buffer_size: 0
Oct  1 09:08:00 np0005464214 ceph-mon[74802]: rocksdb:                    Options.write_buffer_manager: 0x55daa555ab40
Oct  1 09:08:00 np0005464214 ceph-mon[74802]: rocksdb:         Options.access_hint_on_compaction_start: 1
Oct  1 09:08:00 np0005464214 ceph-mon[74802]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Oct  1 09:08:00 np0005464214 ceph-mon[74802]: rocksdb:                      Options.use_adaptive_mutex: 0
Oct  1 09:08:00 np0005464214 ceph-mon[74802]: rocksdb:                            Options.rate_limiter: (nil)
Oct  1 09:08:00 np0005464214 ceph-mon[74802]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Oct  1 09:08:00 np0005464214 ceph-mon[74802]: rocksdb:                       Options.wal_recovery_mode: 2
Oct  1 09:08:00 np0005464214 ceph-mon[74802]: rocksdb:                  Options.enable_thread_tracking: 0
Oct  1 09:08:00 np0005464214 ceph-mon[74802]: rocksdb:                  Options.enable_pipelined_write: 0
Oct  1 09:08:00 np0005464214 ceph-mon[74802]: rocksdb:                  Options.unordered_write: 0
Oct  1 09:08:00 np0005464214 ceph-mon[74802]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Oct  1 09:08:00 np0005464214 ceph-mon[74802]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Oct  1 09:08:00 np0005464214 ceph-mon[74802]: rocksdb:             Options.write_thread_max_yield_usec: 100
Oct  1 09:08:00 np0005464214 ceph-mon[74802]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Oct  1 09:08:00 np0005464214 ceph-mon[74802]: rocksdb:                               Options.row_cache: None
Oct  1 09:08:00 np0005464214 ceph-mon[74802]: rocksdb:                              Options.wal_filter: None
Oct  1 09:08:00 np0005464214 ceph-mon[74802]: rocksdb:             Options.avoid_flush_during_recovery: 0
Oct  1 09:08:00 np0005464214 ceph-mon[74802]: rocksdb:             Options.allow_ingest_behind: 0
Oct  1 09:08:00 np0005464214 ceph-mon[74802]: rocksdb:             Options.two_write_queues: 0
Oct  1 09:08:00 np0005464214 ceph-mon[74802]: rocksdb:             Options.manual_wal_flush: 0
Oct  1 09:08:00 np0005464214 ceph-mon[74802]: rocksdb:             Options.wal_compression: 0
Oct  1 09:08:00 np0005464214 ceph-mon[74802]: rocksdb:             Options.atomic_flush: 0
Oct  1 09:08:00 np0005464214 ceph-mon[74802]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Oct  1 09:08:00 np0005464214 ceph-mon[74802]: rocksdb:                 Options.persist_stats_to_disk: 0
Oct  1 09:08:00 np0005464214 ceph-mon[74802]: rocksdb:                 Options.write_dbid_to_manifest: 0
Oct  1 09:08:00 np0005464214 ceph-mon[74802]: rocksdb:                 Options.log_readahead_size: 0
Oct  1 09:08:00 np0005464214 ceph-mon[74802]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Oct  1 09:08:00 np0005464214 ceph-mon[74802]: rocksdb:                 Options.best_efforts_recovery: 0
Oct  1 09:08:00 np0005464214 ceph-mon[74802]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Oct  1 09:08:00 np0005464214 ceph-mon[74802]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Oct  1 09:08:00 np0005464214 ceph-mon[74802]: rocksdb:             Options.allow_data_in_errors: 0
Oct  1 09:08:00 np0005464214 ceph-mon[74802]: rocksdb:             Options.db_host_id: __hostname__
Oct  1 09:08:00 np0005464214 ceph-mon[74802]: rocksdb:             Options.enforce_single_del_contracts: true
Oct  1 09:08:00 np0005464214 ceph-mon[74802]: rocksdb:             Options.max_background_jobs: 2
Oct  1 09:08:00 np0005464214 ceph-mon[74802]: rocksdb:             Options.max_background_compactions: -1
Oct  1 09:08:00 np0005464214 ceph-mon[74802]: rocksdb:             Options.max_subcompactions: 1
Oct  1 09:08:00 np0005464214 ceph-mon[74802]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Oct  1 09:08:00 np0005464214 ceph-mon[74802]: rocksdb:           Options.writable_file_max_buffer_size: 1048576
Oct  1 09:08:00 np0005464214 ceph-mon[74802]: rocksdb:             Options.delayed_write_rate : 16777216
Oct  1 09:08:00 np0005464214 ceph-mon[74802]: rocksdb:             Options.max_total_wal_size: 0
Oct  1 09:08:00 np0005464214 ceph-mon[74802]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Oct  1 09:08:00 np0005464214 ceph-mon[74802]: rocksdb:                   Options.stats_dump_period_sec: 600
Oct  1 09:08:00 np0005464214 ceph-mon[74802]: rocksdb:                 Options.stats_persist_period_sec: 600
Oct  1 09:08:00 np0005464214 ceph-mon[74802]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Oct  1 09:08:00 np0005464214 ceph-mon[74802]: rocksdb:                          Options.max_open_files: -1
Oct  1 09:08:00 np0005464214 ceph-mon[74802]: rocksdb:                          Options.bytes_per_sync: 0
Oct  1 09:08:00 np0005464214 ceph-mon[74802]: rocksdb:                      Options.wal_bytes_per_sync: 0
Oct  1 09:08:00 np0005464214 ceph-mon[74802]: rocksdb:                   Options.strict_bytes_per_sync: 0
Oct  1 09:08:00 np0005464214 ceph-mon[74802]: rocksdb:       Options.compaction_readahead_size: 0
Oct  1 09:08:00 np0005464214 ceph-mon[74802]: rocksdb:                  Options.max_background_flushes: -1
Oct  1 09:08:00 np0005464214 ceph-mon[74802]: rocksdb: Compression algorithms supported:
Oct  1 09:08:00 np0005464214 ceph-mon[74802]: rocksdb: #011kZSTD supported: 0
Oct  1 09:08:00 np0005464214 ceph-mon[74802]: rocksdb: #011kXpressCompression supported: 0
Oct  1 09:08:00 np0005464214 ceph-mon[74802]: rocksdb: #011kBZip2Compression supported: 0
Oct  1 09:08:00 np0005464214 ceph-mon[74802]: rocksdb: #011kZSTDNotFinalCompression supported: 0
Oct  1 09:08:00 np0005464214 ceph-mon[74802]: rocksdb: #011kLZ4Compression supported: 1
Oct  1 09:08:00 np0005464214 ceph-mon[74802]: rocksdb: #011kZlibCompression supported: 1
Oct  1 09:08:00 np0005464214 ceph-mon[74802]: rocksdb: #011kLZ4HCCompression supported: 1
Oct  1 09:08:00 np0005464214 ceph-mon[74802]: rocksdb: #011kSnappyCompression supported: 1
Oct  1 09:08:00 np0005464214 ceph-mon[74802]: rocksdb: Fast CRC32 supported: Supported on x86
Oct  1 09:08:00 np0005464214 ceph-mon[74802]: rocksdb: DMutex implementation: pthread_mutex_t
Oct  1 09:08:00 np0005464214 ceph-mon[74802]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000010
Oct  1 09:08:00 np0005464214 ceph-mon[74802]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Oct  1 09:08:00 np0005464214 ceph-mon[74802]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  1 09:08:00 np0005464214 ceph-mon[74802]: rocksdb:           Options.merge_operator: 
Oct  1 09:08:00 np0005464214 ceph-mon[74802]: rocksdb:        Options.compaction_filter: None
Oct  1 09:08:00 np0005464214 ceph-mon[74802]: rocksdb:        Options.compaction_filter_factory: None
Oct  1 09:08:00 np0005464214 ceph-mon[74802]: rocksdb:  Options.sst_partitioner_factory: None
Oct  1 09:08:00 np0005464214 ceph-mon[74802]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  1 09:08:00 np0005464214 ceph-mon[74802]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  1 09:08:00 np0005464214 ceph-mon[74802]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55daa554ac40)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55daa55431f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct  1 09:08:00 np0005464214 ceph-mon[74802]: rocksdb:        Options.write_buffer_size: 33554432
Oct  1 09:08:00 np0005464214 ceph-mon[74802]: rocksdb:  Options.max_write_buffer_number: 2
Oct  1 09:08:00 np0005464214 ceph-mon[74802]: rocksdb:          Options.compression: NoCompression
Oct  1 09:08:00 np0005464214 ceph-mon[74802]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  1 09:08:00 np0005464214 ceph-mon[74802]: rocksdb:       Options.prefix_extractor: nullptr
Oct  1 09:08:00 np0005464214 ceph-mon[74802]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  1 09:08:00 np0005464214 ceph-mon[74802]: rocksdb:             Options.num_levels: 7
Oct  1 09:08:00 np0005464214 ceph-mon[74802]: rocksdb:        Options.min_write_buffer_number_to_merge: 1
Oct  1 09:08:00 np0005464214 ceph-mon[74802]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  1 09:08:00 np0005464214 ceph-mon[74802]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  1 09:08:00 np0005464214 ceph-mon[74802]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  1 09:08:00 np0005464214 ceph-mon[74802]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  1 09:08:00 np0005464214 ceph-mon[74802]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  1 09:08:00 np0005464214 ceph-mon[74802]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  1 09:08:00 np0005464214 ceph-mon[74802]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  1 09:08:00 np0005464214 ceph-mon[74802]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  1 09:08:00 np0005464214 ceph-mon[74802]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  1 09:08:00 np0005464214 ceph-mon[74802]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  1 09:08:00 np0005464214 ceph-mon[74802]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  1 09:08:00 np0005464214 ceph-mon[74802]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  1 09:08:00 np0005464214 ceph-mon[74802]: rocksdb:                  Options.compression_opts.level: 32767
Oct  1 09:08:00 np0005464214 ceph-mon[74802]: rocksdb:               Options.compression_opts.strategy: 0
Oct  1 09:08:00 np0005464214 ceph-mon[74802]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  1 09:08:00 np0005464214 ceph-mon[74802]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  1 09:08:00 np0005464214 ceph-mon[74802]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  1 09:08:00 np0005464214 ceph-mon[74802]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  1 09:08:00 np0005464214 ceph-mon[74802]: rocksdb:                  Options.compression_opts.enabled: false
Oct  1 09:08:00 np0005464214 ceph-mon[74802]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  1 09:08:00 np0005464214 ceph-mon[74802]: rocksdb:      Options.level0_file_num_compaction_trigger: 4
Oct  1 09:08:00 np0005464214 ceph-mon[74802]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  1 09:08:00 np0005464214 ceph-mon[74802]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  1 09:08:00 np0005464214 ceph-mon[74802]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  1 09:08:00 np0005464214 ceph-mon[74802]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  1 09:08:00 np0005464214 ceph-mon[74802]: rocksdb:                Options.max_bytes_for_level_base: 268435456
Oct  1 09:08:00 np0005464214 ceph-mon[74802]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1
Oct  1 09:08:00 np0005464214 ceph-mon[74802]: rocksdb:          Options.max_bytes_for_level_multiplier: 10.000000
Oct  1 09:08:00 np0005464214 ceph-mon[74802]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  1 09:08:00 np0005464214 ceph-mon[74802]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  1 09:08:00 np0005464214 ceph-mon[74802]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  1 09:08:00 np0005464214 ceph-mon[74802]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  1 09:08:00 np0005464214 ceph-mon[74802]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  1 09:08:00 np0005464214 ceph-mon[74802]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  1 09:08:00 np0005464214 ceph-mon[74802]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  1 09:08:00 np0005464214 ceph-mon[74802]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  1 09:08:00 np0005464214 ceph-mon[74802]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  1 09:08:00 np0005464214 ceph-mon[74802]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  1 09:08:00 np0005464214 ceph-mon[74802]: rocksdb:                        Options.arena_block_size: 1048576
Oct  1 09:08:00 np0005464214 ceph-mon[74802]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  1 09:08:00 np0005464214 ceph-mon[74802]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  1 09:08:00 np0005464214 ceph-mon[74802]: rocksdb:                Options.disable_auto_compactions: 0
Oct  1 09:08:00 np0005464214 ceph-mon[74802]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  1 09:08:00 np0005464214 ceph-mon[74802]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  1 09:08:00 np0005464214 ceph-mon[74802]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  1 09:08:00 np0005464214 ceph-mon[74802]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  1 09:08:00 np0005464214 ceph-mon[74802]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  1 09:08:00 np0005464214 ceph-mon[74802]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  1 09:08:00 np0005464214 ceph-mon[74802]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  1 09:08:00 np0005464214 ceph-mon[74802]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  1 09:08:00 np0005464214 ceph-mon[74802]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  1 09:08:00 np0005464214 ceph-mon[74802]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  1 09:08:00 np0005464214 ceph-mon[74802]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  1 09:08:00 np0005464214 ceph-mon[74802]: rocksdb:                   Options.inplace_update_support: 0
Oct  1 09:08:00 np0005464214 ceph-mon[74802]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  1 09:08:00 np0005464214 ceph-mon[74802]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  1 09:08:00 np0005464214 ceph-mon[74802]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  1 09:08:00 np0005464214 ceph-mon[74802]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  1 09:08:00 np0005464214 ceph-mon[74802]: rocksdb:                           Options.bloom_locality: 0
Oct  1 09:08:00 np0005464214 ceph-mon[74802]: rocksdb:                    Options.max_successive_merges: 0
Oct  1 09:08:00 np0005464214 ceph-mon[74802]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  1 09:08:00 np0005464214 ceph-mon[74802]: rocksdb:                Options.paranoid_file_checks: 0
Oct  1 09:08:00 np0005464214 ceph-mon[74802]: rocksdb:                Options.force_consistency_checks: 1
Oct  1 09:08:00 np0005464214 ceph-mon[74802]: rocksdb:                Options.report_bg_io_stats: 0
Oct  1 09:08:00 np0005464214 ceph-mon[74802]: rocksdb:                               Options.ttl: 2592000
Oct  1 09:08:00 np0005464214 ceph-mon[74802]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  1 09:08:00 np0005464214 ceph-mon[74802]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  1 09:08:00 np0005464214 ceph-mon[74802]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  1 09:08:00 np0005464214 ceph-mon[74802]: rocksdb:                       Options.enable_blob_files: false
Oct  1 09:08:00 np0005464214 ceph-mon[74802]: rocksdb:                           Options.min_blob_size: 0
Oct  1 09:08:00 np0005464214 ceph-mon[74802]: rocksdb:                          Options.blob_file_size: 268435456
Oct  1 09:08:00 np0005464214 ceph-mon[74802]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  1 09:08:00 np0005464214 ceph-mon[74802]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  1 09:08:00 np0005464214 ceph-mon[74802]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  1 09:08:00 np0005464214 ceph-mon[74802]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  1 09:08:00 np0005464214 ceph-mon[74802]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  1 09:08:00 np0005464214 ceph-mon[74802]: rocksdb:                Options.blob_file_starting_level: 0
Oct  1 09:08:00 np0005464214 ceph-mon[74802]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  1 09:08:00 np0005464214 ceph-mon[74802]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000010 succeeded,manifest_file_number is 10, next_file_number is 12, last_sequence is 5, log_number is 5,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 5
Oct  1 09:08:00 np0005464214 ceph-mon[74802]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Oct  1 09:08:00 np0005464214 ceph-mon[74802]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: e1fb0bd7-c6bf-40f2-8bfb-ffd039289f42
Oct  1 09:08:00 np0005464214 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759324080294585, "job": 1, "event": "recovery_started", "wal_files": [9]}
Oct  1 09:08:00 np0005464214 ceph-mon[74802]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #9 mode 2
Oct  1 09:08:00 np0005464214 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759324080306270, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 13, "file_size": 55249, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 8, "largest_seqno": 138, "table_properties": {"data_size": 53789, "index_size": 166, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 261, "raw_key_size": 3050, "raw_average_key_size": 30, "raw_value_size": 51378, "raw_average_value_size": 508, "num_data_blocks": 9, "num_entries": 101, "num_filter_entries": 101, "num_deletions": 3, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759324080, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e1fb0bd7-c6bf-40f2-8bfb-ffd039289f42", "db_session_id": "NJZTWL88H5HSB4Q4NEC9", "orig_file_number": 13, "seqno_to_time_mapping": "N/A"}}
Oct  1 09:08:00 np0005464214 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759324080306365, "job": 1, "event": "recovery_finished"}
Oct  1 09:08:00 np0005464214 ceph-mon[74802]: rocksdb: [db/version_set.cc:5047] Creating manifest 15
Oct  1 09:08:00 np0005464214 ceph-mon[74802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000009.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  1 09:08:00 np0005464214 ceph-mon[74802]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x55daa556ce00
Oct  1 09:08:00 np0005464214 ceph-mon[74802]: rocksdb: DB pointer 0x55daa55f6000
Oct  1 09:08:00 np0005464214 ceph-mon[74802]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  1 09:08:00 np0005464214 ceph-mon[74802]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 0.0 total, 0.0 interval#012Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s#012Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0   55.85 KB   0.5      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      4.6      0.01              0.00         1    0.011       0      0       0.0       0.0#012 Sum      2/0   55.85 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      4.6      0.01              0.00         1    0.011       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      4.6      0.01              0.00         1    0.011       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      4.6      0.01              0.00         1    0.011       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.0 total, 0.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 1.44 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 1.44 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55daa55431f0#2 capacity: 512.00 MB usage: 25.89 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 3.5e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,25.11 KB,0.00478923%) FilterBlock(2,0.42 KB,8.04663e-05%) IndexBlock(2,0.36 KB,6.85453e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Oct  1 09:08:00 np0005464214 ceph-mon[74802]: starting mon.compute-0 rank 0 at public addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] at bind addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon_data /var/lib/ceph/mon/ceph-compute-0 fsid eb4b6ead-01d1-53b3-a52a-47dcc600555f
Oct  1 09:08:00 np0005464214 ceph-mon[74802]: mon.compute-0@-1(???) e1 preinit fsid eb4b6ead-01d1-53b3-a52a-47dcc600555f
Oct  1 09:08:00 np0005464214 ceph-mon[74802]: mon.compute-0@-1(???).mds e1 new map
Oct  1 09:08:00 np0005464214 ceph-mon[74802]: mon.compute-0@-1(???).mds e1 print_map
e1
enable_multiple, ever_enabled_multiple: 1,1
default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
legacy client fscid: -1

No filesystems configured
Oct  1 09:08:00 np0005464214 ceph-mon[74802]: mon.compute-0@-1(???).osd e1 crush map has features 3314932999778484224, adjusting msgr requires
Oct  1 09:08:00 np0005464214 ceph-mon[74802]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Oct  1 09:08:00 np0005464214 ceph-mon[74802]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Oct  1 09:08:00 np0005464214 ceph-mon[74802]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Oct  1 09:08:00 np0005464214 ceph-mon[74802]: mon.compute-0@-1(???).paxosservice(auth 1..2) refresh upgraded, format 0 -> 3
Oct  1 09:08:00 np0005464214 ceph-mon[74802]: mon.compute-0@-1(probing) e1  my rank is now 0 (was -1)
Oct  1 09:08:00 np0005464214 ceph-mon[74802]: mon.compute-0@0(probing) e1 win_standalone_election
Oct  1 09:08:00 np0005464214 ceph-mon[74802]: paxos.0).electionLogic(3) init, last seen epoch 3, mid-election, bumping
Oct  1 09:08:00 np0005464214 ceph-mon[74802]: mon.compute-0@0(electing) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Oct  1 09:08:00 np0005464214 ceph-mon[74802]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Oct  1 09:08:00 np0005464214 ceph-mon[74802]: log_channel(cluster) log [DBG] : monmap e1: 1 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0]} removed_ranks: {} disallowed_leaders: {}
Oct  1 09:08:00 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Oct  1 09:08:00 np0005464214 ceph-mon[74802]: log_channel(cluster) log [DBG] : fsmap 
Oct  1 09:08:00 np0005464214 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e1: 0 total, 0 up, 0 in
Oct  1 09:08:00 np0005464214 ceph-mon[74802]: log_channel(cluster) log [DBG] : mgrmap e1: no daemons active
Oct  1 09:08:00 np0005464214 podman[74803]: 2025-10-01 13:08:00.355768629 +0000 UTC m=+0.056040389 container create e0e728c5f547f0869b625f6f882c799b9d1691bfd5424bf3c2a878d7f1681511 (image=quay.io/ceph/ceph:v18, name=youthful_shamir, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Oct  1 09:08:00 np0005464214 ceph-mon[74802]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Oct  1 09:08:00 np0005464214 systemd[1]: Started libpod-conmon-e0e728c5f547f0869b625f6f882c799b9d1691bfd5424bf3c2a878d7f1681511.scope.
Oct  1 09:08:00 np0005464214 podman[74803]: 2025-10-01 13:08:00.326840341 +0000 UTC m=+0.027112171 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  1 09:08:00 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:08:00 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/434123d9f0ea530af3ede4cf6b664d15441d268877432e11c8ea1a799be4d868/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct  1 09:08:00 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/434123d9f0ea530af3ede4cf6b664d15441d268877432e11c8ea1a799be4d868/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 09:08:00 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/434123d9f0ea530af3ede4cf6b664d15441d268877432e11c8ea1a799be4d868/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 09:08:00 np0005464214 podman[74803]: 2025-10-01 13:08:00.482409877 +0000 UTC m=+0.182681627 container init e0e728c5f547f0869b625f6f882c799b9d1691bfd5424bf3c2a878d7f1681511 (image=quay.io/ceph/ceph:v18, name=youthful_shamir, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 09:08:00 np0005464214 podman[74803]: 2025-10-01 13:08:00.492663612 +0000 UTC m=+0.192935372 container start e0e728c5f547f0869b625f6f882c799b9d1691bfd5424bf3c2a878d7f1681511 (image=quay.io/ceph/ceph:v18, name=youthful_shamir, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 09:08:00 np0005464214 podman[74803]: 2025-10-01 13:08:00.511864331 +0000 UTC m=+0.212136101 container attach e0e728c5f547f0869b625f6f882c799b9d1691bfd5424bf3c2a878d7f1681511 (image=quay.io/ceph/ceph:v18, name=youthful_shamir, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct  1 09:08:00 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=public_network}] v 0) v1
Oct  1 09:08:00 np0005464214 systemd[1]: libpod-e0e728c5f547f0869b625f6f882c799b9d1691bfd5424bf3c2a878d7f1681511.scope: Deactivated successfully.
Oct  1 09:08:00 np0005464214 podman[74803]: 2025-10-01 13:08:00.950376254 +0000 UTC m=+0.650647994 container died e0e728c5f547f0869b625f6f882c799b9d1691bfd5424bf3c2a878d7f1681511 (image=quay.io/ceph/ceph:v18, name=youthful_shamir, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct  1 09:08:01 np0005464214 podman[74803]: 2025-10-01 13:08:01.033980587 +0000 UTC m=+0.734252317 container remove e0e728c5f547f0869b625f6f882c799b9d1691bfd5424bf3c2a878d7f1681511 (image=quay.io/ceph/ceph:v18, name=youthful_shamir, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 09:08:01 np0005464214 systemd[1]: libpod-conmon-e0e728c5f547f0869b625f6f882c799b9d1691bfd5424bf3c2a878d7f1681511.scope: Deactivated successfully.
Oct  1 09:08:01 np0005464214 podman[74898]: 2025-10-01 13:08:01.110428412 +0000 UTC m=+0.055605375 container create de977fc2649baa14eb86cd9da333624a29372a7492ebf397b181321e68a3d3ff (image=quay.io/ceph/ceph:v18, name=recursing_sinoussi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 09:08:01 np0005464214 systemd[1]: Started libpod-conmon-de977fc2649baa14eb86cd9da333624a29372a7492ebf397b181321e68a3d3ff.scope.
Oct  1 09:08:01 np0005464214 podman[74898]: 2025-10-01 13:08:01.083930962 +0000 UTC m=+0.029108005 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  1 09:08:01 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:08:01 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e0a6cedc22206bc745229276f2014401093546f5140c0ad6c7f26d3b7732038/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 09:08:01 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e0a6cedc22206bc745229276f2014401093546f5140c0ad6c7f26d3b7732038/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct  1 09:08:01 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e0a6cedc22206bc745229276f2014401093546f5140c0ad6c7f26d3b7732038/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 09:08:01 np0005464214 podman[74898]: 2025-10-01 13:08:01.202417771 +0000 UTC m=+0.147594754 container init de977fc2649baa14eb86cd9da333624a29372a7492ebf397b181321e68a3d3ff (image=quay.io/ceph/ceph:v18, name=recursing_sinoussi, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 09:08:01 np0005464214 podman[74898]: 2025-10-01 13:08:01.212848733 +0000 UTC m=+0.158025696 container start de977fc2649baa14eb86cd9da333624a29372a7492ebf397b181321e68a3d3ff (image=quay.io/ceph/ceph:v18, name=recursing_sinoussi, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Oct  1 09:08:01 np0005464214 podman[74898]: 2025-10-01 13:08:01.216301201 +0000 UTC m=+0.161478174 container attach de977fc2649baa14eb86cd9da333624a29372a7492ebf397b181321e68a3d3ff (image=quay.io/ceph/ceph:v18, name=recursing_sinoussi, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 09:08:01 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=cluster_network}] v 0) v1
Oct  1 09:08:01 np0005464214 systemd[1]: libpod-de977fc2649baa14eb86cd9da333624a29372a7492ebf397b181321e68a3d3ff.scope: Deactivated successfully.
Oct  1 09:08:01 np0005464214 podman[74940]: 2025-10-01 13:08:01.663756989 +0000 UTC m=+0.022708211 container died de977fc2649baa14eb86cd9da333624a29372a7492ebf397b181321e68a3d3ff (image=quay.io/ceph/ceph:v18, name=recursing_sinoussi, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Oct  1 09:08:01 np0005464214 systemd[1]: var-lib-containers-storage-overlay-8e0a6cedc22206bc745229276f2014401093546f5140c0ad6c7f26d3b7732038-merged.mount: Deactivated successfully.
Oct  1 09:08:01 np0005464214 podman[74940]: 2025-10-01 13:08:01.697909942 +0000 UTC m=+0.056861144 container remove de977fc2649baa14eb86cd9da333624a29372a7492ebf397b181321e68a3d3ff (image=quay.io/ceph/ceph:v18, name=recursing_sinoussi, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 09:08:01 np0005464214 systemd[1]: libpod-conmon-de977fc2649baa14eb86cd9da333624a29372a7492ebf397b181321e68a3d3ff.scope: Deactivated successfully.
Oct  1 09:08:01 np0005464214 systemd[1]: Reloading.
Oct  1 09:08:01 np0005464214 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  1 09:08:01 np0005464214 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  1 09:08:02 np0005464214 systemd[1]: Reloading.
Oct  1 09:08:02 np0005464214 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  1 09:08:02 np0005464214 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  1 09:08:02 np0005464214 systemd[1]: Starting Ceph mgr.compute-0.puxjpb for eb4b6ead-01d1-53b3-a52a-47dcc600555f...
Oct  1 09:08:02 np0005464214 podman[75083]: 2025-10-01 13:08:02.588491629 +0000 UTC m=+0.056814053 container create d581f7f0a3e63ca8603611784f26da5ea3157b3a16113cc88b43162dcd3c9163 (image=quay.io/ceph/ceph:v18, name=ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-mgr-compute-0-puxjpb, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 09:08:02 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/053d0de1d4e81cdfa5f06315b3236867484dace739878f00b9f28e5f9862aba8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 09:08:02 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/053d0de1d4e81cdfa5f06315b3236867484dace739878f00b9f28e5f9862aba8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 09:08:02 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/053d0de1d4e81cdfa5f06315b3236867484dace739878f00b9f28e5f9862aba8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 09:08:02 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/053d0de1d4e81cdfa5f06315b3236867484dace739878f00b9f28e5f9862aba8/merged/var/lib/ceph/mgr/ceph-compute-0.puxjpb supports timestamps until 2038 (0x7fffffff)
Oct  1 09:08:02 np0005464214 podman[75083]: 2025-10-01 13:08:02.656072413 +0000 UTC m=+0.124394877 container init d581f7f0a3e63ca8603611784f26da5ea3157b3a16113cc88b43162dcd3c9163 (image=quay.io/ceph/ceph:v18, name=ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-mgr-compute-0-puxjpb, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 09:08:02 np0005464214 podman[75083]: 2025-10-01 13:08:02.568640989 +0000 UTC m=+0.036963463 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  1 09:08:02 np0005464214 podman[75083]: 2025-10-01 13:08:02.667916769 +0000 UTC m=+0.136239213 container start d581f7f0a3e63ca8603611784f26da5ea3157b3a16113cc88b43162dcd3c9163 (image=quay.io/ceph/ceph:v18, name=ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-mgr-compute-0-puxjpb, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct  1 09:08:02 np0005464214 bash[75083]: d581f7f0a3e63ca8603611784f26da5ea3157b3a16113cc88b43162dcd3c9163
Oct  1 09:08:02 np0005464214 systemd[1]: Started Ceph mgr.compute-0.puxjpb for eb4b6ead-01d1-53b3-a52a-47dcc600555f.
Oct  1 09:08:02 np0005464214 ceph-mgr[75103]: set uid:gid to 167:167 (ceph:ceph)
Oct  1 09:08:02 np0005464214 ceph-mgr[75103]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mgr, pid 2
Oct  1 09:08:02 np0005464214 ceph-mgr[75103]: pidfile_write: ignore empty --pid-file
Oct  1 09:08:02 np0005464214 podman[75104]: 2025-10-01 13:08:02.757483951 +0000 UTC m=+0.049302825 container create 63ba9eb78dba1110a8f61ddc27ac9ca7168991d577e66fe6a69bb0326974ac37 (image=quay.io/ceph/ceph:v18, name=condescending_allen, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 09:08:02 np0005464214 systemd[1]: Started libpod-conmon-63ba9eb78dba1110a8f61ddc27ac9ca7168991d577e66fe6a69bb0326974ac37.scope.
Oct  1 09:08:02 np0005464214 ceph-mgr[75103]: mgr[py] Loading python module 'alerts'
Oct  1 09:08:02 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:08:02 np0005464214 podman[75104]: 2025-10-01 13:08:02.731896709 +0000 UTC m=+0.023715593 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  1 09:08:02 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8319fda18d1ec69bdbf12e196eefdeb2f9d114b1238baaadff17f9207f88730b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 09:08:02 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8319fda18d1ec69bdbf12e196eefdeb2f9d114b1238baaadff17f9207f88730b/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct  1 09:08:02 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8319fda18d1ec69bdbf12e196eefdeb2f9d114b1238baaadff17f9207f88730b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 09:08:02 np0005464214 podman[75104]: 2025-10-01 13:08:02.859427296 +0000 UTC m=+0.151246210 container init 63ba9eb78dba1110a8f61ddc27ac9ca7168991d577e66fe6a69bb0326974ac37 (image=quay.io/ceph/ceph:v18, name=condescending_allen, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct  1 09:08:02 np0005464214 podman[75104]: 2025-10-01 13:08:02.867183052 +0000 UTC m=+0.159001896 container start 63ba9eb78dba1110a8f61ddc27ac9ca7168991d577e66fe6a69bb0326974ac37 (image=quay.io/ceph/ceph:v18, name=condescending_allen, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 09:08:02 np0005464214 podman[75104]: 2025-10-01 13:08:02.875125763 +0000 UTC m=+0.166944637 container attach 63ba9eb78dba1110a8f61ddc27ac9ca7168991d577e66fe6a69bb0326974ac37 (image=quay.io/ceph/ceph:v18, name=condescending_allen, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 09:08:03 np0005464214 ceph-mgr[75103]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Oct  1 09:08:03 np0005464214 ceph-mgr[75103]: mgr[py] Loading python module 'balancer'
Oct  1 09:08:03 np0005464214 ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-mgr-compute-0-puxjpb[75099]: 2025-10-01T13:08:03.118+0000 7f0e0936f140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Oct  1 09:08:03 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Oct  1 09:08:03 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3042162465' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Oct  1 09:08:03 np0005464214 condescending_allen[75144]: 
Oct  1 09:08:03 np0005464214 condescending_allen[75144]: {
Oct  1 09:08:03 np0005464214 condescending_allen[75144]:    "fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 09:08:03 np0005464214 condescending_allen[75144]:    "health": {
Oct  1 09:08:03 np0005464214 condescending_allen[75144]:        "status": "HEALTH_OK",
Oct  1 09:08:03 np0005464214 condescending_allen[75144]:        "checks": {},
Oct  1 09:08:03 np0005464214 condescending_allen[75144]:        "mutes": []
Oct  1 09:08:03 np0005464214 condescending_allen[75144]:    },
Oct  1 09:08:03 np0005464214 condescending_allen[75144]:    "election_epoch": 5,
Oct  1 09:08:03 np0005464214 condescending_allen[75144]:    "quorum": [
Oct  1 09:08:03 np0005464214 condescending_allen[75144]:        0
Oct  1 09:08:03 np0005464214 condescending_allen[75144]:    ],
Oct  1 09:08:03 np0005464214 condescending_allen[75144]:    "quorum_names": [
Oct  1 09:08:03 np0005464214 condescending_allen[75144]:        "compute-0"
Oct  1 09:08:03 np0005464214 condescending_allen[75144]:    ],
Oct  1 09:08:03 np0005464214 condescending_allen[75144]:    "quorum_age": 2,
Oct  1 09:08:03 np0005464214 condescending_allen[75144]:    "monmap": {
Oct  1 09:08:03 np0005464214 condescending_allen[75144]:        "epoch": 1,
Oct  1 09:08:03 np0005464214 condescending_allen[75144]:        "min_mon_release_name": "reef",
Oct  1 09:08:03 np0005464214 condescending_allen[75144]:        "num_mons": 1
Oct  1 09:08:03 np0005464214 condescending_allen[75144]:    },
Oct  1 09:08:03 np0005464214 condescending_allen[75144]:    "osdmap": {
Oct  1 09:08:03 np0005464214 condescending_allen[75144]:        "epoch": 1,
Oct  1 09:08:03 np0005464214 condescending_allen[75144]:        "num_osds": 0,
Oct  1 09:08:03 np0005464214 condescending_allen[75144]:        "num_up_osds": 0,
Oct  1 09:08:03 np0005464214 condescending_allen[75144]:        "osd_up_since": 0,
Oct  1 09:08:03 np0005464214 condescending_allen[75144]:        "num_in_osds": 0,
Oct  1 09:08:03 np0005464214 condescending_allen[75144]:        "osd_in_since": 0,
Oct  1 09:08:03 np0005464214 condescending_allen[75144]:        "num_remapped_pgs": 0
Oct  1 09:08:03 np0005464214 condescending_allen[75144]:    },
Oct  1 09:08:03 np0005464214 condescending_allen[75144]:    "pgmap": {
Oct  1 09:08:03 np0005464214 condescending_allen[75144]:        "pgs_by_state": [],
Oct  1 09:08:03 np0005464214 condescending_allen[75144]:        "num_pgs": 0,
Oct  1 09:08:03 np0005464214 condescending_allen[75144]:        "num_pools": 0,
Oct  1 09:08:03 np0005464214 condescending_allen[75144]:        "num_objects": 0,
Oct  1 09:08:03 np0005464214 condescending_allen[75144]:        "data_bytes": 0,
Oct  1 09:08:03 np0005464214 condescending_allen[75144]:        "bytes_used": 0,
Oct  1 09:08:03 np0005464214 condescending_allen[75144]:        "bytes_avail": 0,
Oct  1 09:08:03 np0005464214 condescending_allen[75144]:        "bytes_total": 0
Oct  1 09:08:03 np0005464214 condescending_allen[75144]:    },
Oct  1 09:08:03 np0005464214 condescending_allen[75144]:    "fsmap": {
Oct  1 09:08:03 np0005464214 condescending_allen[75144]:        "epoch": 1,
Oct  1 09:08:03 np0005464214 condescending_allen[75144]:        "by_rank": [],
Oct  1 09:08:03 np0005464214 condescending_allen[75144]:        "up:standby": 0
Oct  1 09:08:03 np0005464214 condescending_allen[75144]:    },
Oct  1 09:08:03 np0005464214 condescending_allen[75144]:    "mgrmap": {
Oct  1 09:08:03 np0005464214 condescending_allen[75144]:        "available": false,
Oct  1 09:08:03 np0005464214 condescending_allen[75144]:        "num_standbys": 0,
Oct  1 09:08:03 np0005464214 condescending_allen[75144]:        "modules": [
Oct  1 09:08:03 np0005464214 condescending_allen[75144]:            "iostat",
Oct  1 09:08:03 np0005464214 condescending_allen[75144]:            "nfs",
Oct  1 09:08:03 np0005464214 condescending_allen[75144]:            "restful"
Oct  1 09:08:03 np0005464214 condescending_allen[75144]:        ],
Oct  1 09:08:03 np0005464214 condescending_allen[75144]:        "services": {}
Oct  1 09:08:03 np0005464214 condescending_allen[75144]:    },
Oct  1 09:08:03 np0005464214 condescending_allen[75144]:    "servicemap": {
Oct  1 09:08:03 np0005464214 condescending_allen[75144]:        "epoch": 1,
Oct  1 09:08:03 np0005464214 condescending_allen[75144]:        "modified": "2025-10-01T13:07:57.318832+0000",
Oct  1 09:08:03 np0005464214 condescending_allen[75144]:        "services": {}
Oct  1 09:08:03 np0005464214 condescending_allen[75144]:    },
Oct  1 09:08:03 np0005464214 condescending_allen[75144]:    "progress_events": {}
Oct  1 09:08:03 np0005464214 condescending_allen[75144]: }
Oct  1 09:08:03 np0005464214 systemd[1]: libpod-63ba9eb78dba1110a8f61ddc27ac9ca7168991d577e66fe6a69bb0326974ac37.scope: Deactivated successfully.
Oct  1 09:08:03 np0005464214 podman[75104]: 2025-10-01 13:08:03.27590747 +0000 UTC m=+0.567726294 container died 63ba9eb78dba1110a8f61ddc27ac9ca7168991d577e66fe6a69bb0326974ac37 (image=quay.io/ceph/ceph:v18, name=condescending_allen, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct  1 09:08:03 np0005464214 systemd[1]: var-lib-containers-storage-overlay-8319fda18d1ec69bdbf12e196eefdeb2f9d114b1238baaadff17f9207f88730b-merged.mount: Deactivated successfully.
Oct  1 09:08:03 np0005464214 ceph-mgr[75103]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Oct  1 09:08:03 np0005464214 ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-mgr-compute-0-puxjpb[75099]: 2025-10-01T13:08:03.362+0000 7f0e0936f140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Oct  1 09:08:03 np0005464214 ceph-mgr[75103]: mgr[py] Loading python module 'cephadm'
Oct  1 09:08:03 np0005464214 podman[75104]: 2025-10-01 13:08:03.384719662 +0000 UTC m=+0.676538526 container remove 63ba9eb78dba1110a8f61ddc27ac9ca7168991d577e66fe6a69bb0326974ac37 (image=quay.io/ceph/ceph:v18, name=condescending_allen, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct  1 09:08:03 np0005464214 systemd[1]: libpod-conmon-63ba9eb78dba1110a8f61ddc27ac9ca7168991d577e66fe6a69bb0326974ac37.scope: Deactivated successfully.
Oct  1 09:08:05 np0005464214 ceph-mgr[75103]: mgr[py] Loading python module 'crash'
Oct  1 09:08:05 np0005464214 podman[75194]: 2025-10-01 13:08:05.460922816 +0000 UTC m=+0.047041873 container create 01f3a7b04f0ffee1ce3ddb017b2f42f7e4ba16c1f0fa526ddaab994626639e49 (image=quay.io/ceph/ceph:v18, name=dreamy_mayer, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct  1 09:08:05 np0005464214 systemd[1]: Started libpod-conmon-01f3a7b04f0ffee1ce3ddb017b2f42f7e4ba16c1f0fa526ddaab994626639e49.scope.
Oct  1 09:08:05 np0005464214 ceph-mgr[75103]: mgr[py] Module crash has missing NOTIFY_TYPES member
Oct  1 09:08:05 np0005464214 ceph-mgr[75103]: mgr[py] Loading python module 'dashboard'
Oct  1 09:08:05 np0005464214 ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-mgr-compute-0-puxjpb[75099]: 2025-10-01T13:08:05.519+0000 7f0e0936f140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Oct  1 09:08:05 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:08:05 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bb022b3323a0de39516fa5ec775d9b230d42d8bbb1e7dd9731fc034682c44db8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 09:08:05 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bb022b3323a0de39516fa5ec775d9b230d42d8bbb1e7dd9731fc034682c44db8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 09:08:05 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bb022b3323a0de39516fa5ec775d9b230d42d8bbb1e7dd9731fc034682c44db8/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct  1 09:08:05 np0005464214 podman[75194]: 2025-10-01 13:08:05.44337423 +0000 UTC m=+0.029493297 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  1 09:08:05 np0005464214 podman[75194]: 2025-10-01 13:08:05.541544374 +0000 UTC m=+0.127663441 container init 01f3a7b04f0ffee1ce3ddb017b2f42f7e4ba16c1f0fa526ddaab994626639e49 (image=quay.io/ceph/ceph:v18, name=dreamy_mayer, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Oct  1 09:08:05 np0005464214 podman[75194]: 2025-10-01 13:08:05.55244554 +0000 UTC m=+0.138564587 container start 01f3a7b04f0ffee1ce3ddb017b2f42f7e4ba16c1f0fa526ddaab994626639e49 (image=quay.io/ceph/ceph:v18, name=dreamy_mayer, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 09:08:05 np0005464214 podman[75194]: 2025-10-01 13:08:05.557152059 +0000 UTC m=+0.143271106 container attach 01f3a7b04f0ffee1ce3ddb017b2f42f7e4ba16c1f0fa526ddaab994626639e49 (image=quay.io/ceph/ceph:v18, name=dreamy_mayer, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct  1 09:08:05 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Oct  1 09:08:05 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/181456265' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Oct  1 09:08:05 np0005464214 dreamy_mayer[75211]: 
Oct  1 09:08:05 np0005464214 dreamy_mayer[75211]: {
Oct  1 09:08:05 np0005464214 dreamy_mayer[75211]:    "fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 09:08:05 np0005464214 dreamy_mayer[75211]:    "health": {
Oct  1 09:08:05 np0005464214 dreamy_mayer[75211]:        "status": "HEALTH_OK",
Oct  1 09:08:05 np0005464214 dreamy_mayer[75211]:        "checks": {},
Oct  1 09:08:05 np0005464214 dreamy_mayer[75211]:        "mutes": []
Oct  1 09:08:05 np0005464214 dreamy_mayer[75211]:    },
Oct  1 09:08:05 np0005464214 dreamy_mayer[75211]:    "election_epoch": 5,
Oct  1 09:08:05 np0005464214 dreamy_mayer[75211]:    "quorum": [
Oct  1 09:08:05 np0005464214 dreamy_mayer[75211]:        0
Oct  1 09:08:05 np0005464214 dreamy_mayer[75211]:    ],
Oct  1 09:08:05 np0005464214 dreamy_mayer[75211]:    "quorum_names": [
Oct  1 09:08:05 np0005464214 dreamy_mayer[75211]:        "compute-0"
Oct  1 09:08:05 np0005464214 dreamy_mayer[75211]:    ],
Oct  1 09:08:05 np0005464214 dreamy_mayer[75211]:    "quorum_age": 5,
Oct  1 09:08:05 np0005464214 dreamy_mayer[75211]:    "monmap": {
Oct  1 09:08:05 np0005464214 dreamy_mayer[75211]:        "epoch": 1,
Oct  1 09:08:05 np0005464214 dreamy_mayer[75211]:        "min_mon_release_name": "reef",
Oct  1 09:08:05 np0005464214 dreamy_mayer[75211]:        "num_mons": 1
Oct  1 09:08:05 np0005464214 dreamy_mayer[75211]:    },
Oct  1 09:08:05 np0005464214 dreamy_mayer[75211]:    "osdmap": {
Oct  1 09:08:05 np0005464214 dreamy_mayer[75211]:        "epoch": 1,
Oct  1 09:08:05 np0005464214 dreamy_mayer[75211]:        "num_osds": 0,
Oct  1 09:08:05 np0005464214 dreamy_mayer[75211]:        "num_up_osds": 0,
Oct  1 09:08:05 np0005464214 dreamy_mayer[75211]:        "osd_up_since": 0,
Oct  1 09:08:05 np0005464214 dreamy_mayer[75211]:        "num_in_osds": 0,
Oct  1 09:08:05 np0005464214 dreamy_mayer[75211]:        "osd_in_since": 0,
Oct  1 09:08:05 np0005464214 dreamy_mayer[75211]:        "num_remapped_pgs": 0
Oct  1 09:08:05 np0005464214 dreamy_mayer[75211]:    },
Oct  1 09:08:05 np0005464214 dreamy_mayer[75211]:    "pgmap": {
Oct  1 09:08:05 np0005464214 dreamy_mayer[75211]:        "pgs_by_state": [],
Oct  1 09:08:05 np0005464214 dreamy_mayer[75211]:        "num_pgs": 0,
Oct  1 09:08:05 np0005464214 dreamy_mayer[75211]:        "num_pools": 0,
Oct  1 09:08:05 np0005464214 dreamy_mayer[75211]:        "num_objects": 0,
Oct  1 09:08:05 np0005464214 dreamy_mayer[75211]:        "data_bytes": 0,
Oct  1 09:08:05 np0005464214 dreamy_mayer[75211]:        "bytes_used": 0,
Oct  1 09:08:05 np0005464214 dreamy_mayer[75211]:        "bytes_avail": 0,
Oct  1 09:08:05 np0005464214 dreamy_mayer[75211]:        "bytes_total": 0
Oct  1 09:08:05 np0005464214 dreamy_mayer[75211]:    },
Oct  1 09:08:05 np0005464214 dreamy_mayer[75211]:    "fsmap": {
Oct  1 09:08:05 np0005464214 dreamy_mayer[75211]:        "epoch": 1,
Oct  1 09:08:05 np0005464214 dreamy_mayer[75211]:        "by_rank": [],
Oct  1 09:08:05 np0005464214 dreamy_mayer[75211]:        "up:standby": 0
Oct  1 09:08:05 np0005464214 dreamy_mayer[75211]:    },
Oct  1 09:08:05 np0005464214 dreamy_mayer[75211]:    "mgrmap": {
Oct  1 09:08:05 np0005464214 dreamy_mayer[75211]:        "available": false,
Oct  1 09:08:05 np0005464214 dreamy_mayer[75211]:        "num_standbys": 0,
Oct  1 09:08:05 np0005464214 dreamy_mayer[75211]:        "modules": [
Oct  1 09:08:05 np0005464214 dreamy_mayer[75211]:            "iostat",
Oct  1 09:08:05 np0005464214 dreamy_mayer[75211]:            "nfs",
Oct  1 09:08:05 np0005464214 dreamy_mayer[75211]:            "restful"
Oct  1 09:08:05 np0005464214 dreamy_mayer[75211]:        ],
Oct  1 09:08:05 np0005464214 dreamy_mayer[75211]:        "services": {}
Oct  1 09:08:05 np0005464214 dreamy_mayer[75211]:    },
Oct  1 09:08:05 np0005464214 dreamy_mayer[75211]:    "servicemap": {
Oct  1 09:08:05 np0005464214 dreamy_mayer[75211]:        "epoch": 1,
Oct  1 09:08:05 np0005464214 dreamy_mayer[75211]:        "modified": "2025-10-01T13:07:57.318832+0000",
Oct  1 09:08:05 np0005464214 dreamy_mayer[75211]:        "services": {}
Oct  1 09:08:05 np0005464214 dreamy_mayer[75211]:    },
Oct  1 09:08:05 np0005464214 dreamy_mayer[75211]:    "progress_events": {}
Oct  1 09:08:05 np0005464214 dreamy_mayer[75211]: }
Oct  1 09:08:05 np0005464214 systemd[1]: libpod-01f3a7b04f0ffee1ce3ddb017b2f42f7e4ba16c1f0fa526ddaab994626639e49.scope: Deactivated successfully.
Oct  1 09:08:05 np0005464214 podman[75194]: 2025-10-01 13:08:05.944489499 +0000 UTC m=+0.530608536 container died 01f3a7b04f0ffee1ce3ddb017b2f42f7e4ba16c1f0fa526ddaab994626639e49 (image=quay.io/ceph/ceph:v18, name=dreamy_mayer, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Oct  1 09:08:05 np0005464214 systemd[1]: var-lib-containers-storage-overlay-bb022b3323a0de39516fa5ec775d9b230d42d8bbb1e7dd9731fc034682c44db8-merged.mount: Deactivated successfully.
Oct  1 09:08:05 np0005464214 podman[75194]: 2025-10-01 13:08:05.990143707 +0000 UTC m=+0.576262764 container remove 01f3a7b04f0ffee1ce3ddb017b2f42f7e4ba16c1f0fa526ddaab994626639e49 (image=quay.io/ceph/ceph:v18, name=dreamy_mayer, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct  1 09:08:06 np0005464214 systemd[1]: libpod-conmon-01f3a7b04f0ffee1ce3ddb017b2f42f7e4ba16c1f0fa526ddaab994626639e49.scope: Deactivated successfully.
Oct  1 09:08:06 np0005464214 ceph-mgr[75103]: mgr[py] Loading python module 'devicehealth'
Oct  1 09:08:07 np0005464214 ceph-mgr[75103]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Oct  1 09:08:07 np0005464214 ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-mgr-compute-0-puxjpb[75099]: 2025-10-01T13:08:07.134+0000 7f0e0936f140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Oct  1 09:08:07 np0005464214 ceph-mgr[75103]: mgr[py] Loading python module 'diskprediction_local'
Oct  1 09:08:07 np0005464214 ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-mgr-compute-0-puxjpb[75099]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Oct  1 09:08:07 np0005464214 ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-mgr-compute-0-puxjpb[75099]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Oct  1 09:08:07 np0005464214 ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-mgr-compute-0-puxjpb[75099]:  from numpy import show_config as show_numpy_config
Oct  1 09:08:07 np0005464214 ceph-mgr[75103]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Oct  1 09:08:07 np0005464214 ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-mgr-compute-0-puxjpb[75099]: 2025-10-01T13:08:07.629+0000 7f0e0936f140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Oct  1 09:08:07 np0005464214 ceph-mgr[75103]: mgr[py] Loading python module 'influx'
Oct  1 09:08:07 np0005464214 ceph-mgr[75103]: mgr[py] Module influx has missing NOTIFY_TYPES member
Oct  1 09:08:07 np0005464214 ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-mgr-compute-0-puxjpb[75099]: 2025-10-01T13:08:07.857+0000 7f0e0936f140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Oct  1 09:08:07 np0005464214 ceph-mgr[75103]: mgr[py] Loading python module 'insights'
Oct  1 09:08:08 np0005464214 podman[75249]: 2025-10-01 13:08:08.055560279 +0000 UTC m=+0.043116888 container create d09a189191ac01fb712d1aeeb3dbe34c0cf1c034bf7a44d79e4c79ea1178f196 (image=quay.io/ceph/ceph:v18, name=boring_tu, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Oct  1 09:08:08 np0005464214 systemd[1]: Started libpod-conmon-d09a189191ac01fb712d1aeeb3dbe34c0cf1c034bf7a44d79e4c79ea1178f196.scope.
Oct  1 09:08:08 np0005464214 ceph-mgr[75103]: mgr[py] Loading python module 'iostat'
Oct  1 09:08:08 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:08:08 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/935c758e3efc223a2b707f72f9b32f8cf2e92c7fbd843493d59dbea7abb46bb2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 09:08:08 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/935c758e3efc223a2b707f72f9b32f8cf2e92c7fbd843493d59dbea7abb46bb2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 09:08:08 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/935c758e3efc223a2b707f72f9b32f8cf2e92c7fbd843493d59dbea7abb46bb2/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct  1 09:08:08 np0005464214 podman[75249]: 2025-10-01 13:08:08.121031147 +0000 UTC m=+0.108587816 container init d09a189191ac01fb712d1aeeb3dbe34c0cf1c034bf7a44d79e4c79ea1178f196 (image=quay.io/ceph/ceph:v18, name=boring_tu, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 09:08:08 np0005464214 podman[75249]: 2025-10-01 13:08:08.126706367 +0000 UTC m=+0.114263006 container start d09a189191ac01fb712d1aeeb3dbe34c0cf1c034bf7a44d79e4c79ea1178f196 (image=quay.io/ceph/ceph:v18, name=boring_tu, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 09:08:08 np0005464214 podman[75249]: 2025-10-01 13:08:08.034568793 +0000 UTC m=+0.022125412 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  1 09:08:08 np0005464214 podman[75249]: 2025-10-01 13:08:08.130688374 +0000 UTC m=+0.118245083 container attach d09a189191ac01fb712d1aeeb3dbe34c0cf1c034bf7a44d79e4c79ea1178f196 (image=quay.io/ceph/ceph:v18, name=boring_tu, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Oct  1 09:08:08 np0005464214 ceph-mgr[75103]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Oct  1 09:08:08 np0005464214 ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-mgr-compute-0-puxjpb[75099]: 2025-10-01T13:08:08.331+0000 7f0e0936f140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Oct  1 09:08:08 np0005464214 ceph-mgr[75103]: mgr[py] Loading python module 'k8sevents'
Oct  1 09:08:08 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Oct  1 09:08:08 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/524194599' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Oct  1 09:08:08 np0005464214 boring_tu[75265]: 
Oct  1 09:08:08 np0005464214 boring_tu[75265]: {
Oct  1 09:08:08 np0005464214 boring_tu[75265]:    "fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 09:08:08 np0005464214 boring_tu[75265]:    "health": {
Oct  1 09:08:08 np0005464214 boring_tu[75265]:        "status": "HEALTH_OK",
Oct  1 09:08:08 np0005464214 boring_tu[75265]:        "checks": {},
Oct  1 09:08:08 np0005464214 boring_tu[75265]:        "mutes": []
Oct  1 09:08:08 np0005464214 boring_tu[75265]:    },
Oct  1 09:08:08 np0005464214 boring_tu[75265]:    "election_epoch": 5,
Oct  1 09:08:08 np0005464214 boring_tu[75265]:    "quorum": [
Oct  1 09:08:08 np0005464214 boring_tu[75265]:        0
Oct  1 09:08:08 np0005464214 boring_tu[75265]:    ],
Oct  1 09:08:08 np0005464214 boring_tu[75265]:    "quorum_names": [
Oct  1 09:08:08 np0005464214 boring_tu[75265]:        "compute-0"
Oct  1 09:08:08 np0005464214 boring_tu[75265]:    ],
Oct  1 09:08:08 np0005464214 boring_tu[75265]:    "quorum_age": 8,
Oct  1 09:08:08 np0005464214 boring_tu[75265]:    "monmap": {
Oct  1 09:08:08 np0005464214 boring_tu[75265]:        "epoch": 1,
Oct  1 09:08:08 np0005464214 boring_tu[75265]:        "min_mon_release_name": "reef",
Oct  1 09:08:08 np0005464214 boring_tu[75265]:        "num_mons": 1
Oct  1 09:08:08 np0005464214 boring_tu[75265]:    },
Oct  1 09:08:08 np0005464214 boring_tu[75265]:    "osdmap": {
Oct  1 09:08:08 np0005464214 boring_tu[75265]:        "epoch": 1,
Oct  1 09:08:08 np0005464214 boring_tu[75265]:        "num_osds": 0,
Oct  1 09:08:08 np0005464214 boring_tu[75265]:        "num_up_osds": 0,
Oct  1 09:08:08 np0005464214 boring_tu[75265]:        "osd_up_since": 0,
Oct  1 09:08:08 np0005464214 boring_tu[75265]:        "num_in_osds": 0,
Oct  1 09:08:08 np0005464214 boring_tu[75265]:        "osd_in_since": 0,
Oct  1 09:08:08 np0005464214 boring_tu[75265]:        "num_remapped_pgs": 0
Oct  1 09:08:08 np0005464214 boring_tu[75265]:    },
Oct  1 09:08:08 np0005464214 boring_tu[75265]:    "pgmap": {
Oct  1 09:08:08 np0005464214 boring_tu[75265]:        "pgs_by_state": [],
Oct  1 09:08:08 np0005464214 boring_tu[75265]:        "num_pgs": 0,
Oct  1 09:08:08 np0005464214 boring_tu[75265]:        "num_pools": 0,
Oct  1 09:08:08 np0005464214 boring_tu[75265]:        "num_objects": 0,
Oct  1 09:08:08 np0005464214 boring_tu[75265]:        "data_bytes": 0,
Oct  1 09:08:08 np0005464214 boring_tu[75265]:        "bytes_used": 0,
Oct  1 09:08:08 np0005464214 boring_tu[75265]:        "bytes_avail": 0,
Oct  1 09:08:08 np0005464214 boring_tu[75265]:        "bytes_total": 0
Oct  1 09:08:08 np0005464214 boring_tu[75265]:    },
Oct  1 09:08:08 np0005464214 boring_tu[75265]:    "fsmap": {
Oct  1 09:08:08 np0005464214 boring_tu[75265]:        "epoch": 1,
Oct  1 09:08:08 np0005464214 boring_tu[75265]:        "by_rank": [],
Oct  1 09:08:08 np0005464214 boring_tu[75265]:        "up:standby": 0
Oct  1 09:08:08 np0005464214 boring_tu[75265]:    },
Oct  1 09:08:08 np0005464214 boring_tu[75265]:    "mgrmap": {
Oct  1 09:08:08 np0005464214 boring_tu[75265]:        "available": false,
Oct  1 09:08:08 np0005464214 boring_tu[75265]:        "num_standbys": 0,
Oct  1 09:08:08 np0005464214 boring_tu[75265]:        "modules": [
Oct  1 09:08:08 np0005464214 boring_tu[75265]:            "iostat",
Oct  1 09:08:08 np0005464214 boring_tu[75265]:            "nfs",
Oct  1 09:08:08 np0005464214 boring_tu[75265]:            "restful"
Oct  1 09:08:08 np0005464214 boring_tu[75265]:        ],
Oct  1 09:08:08 np0005464214 boring_tu[75265]:        "services": {}
Oct  1 09:08:08 np0005464214 boring_tu[75265]:    },
Oct  1 09:08:08 np0005464214 boring_tu[75265]:    "servicemap": {
Oct  1 09:08:08 np0005464214 boring_tu[75265]:        "epoch": 1,
Oct  1 09:08:08 np0005464214 boring_tu[75265]:        "modified": "2025-10-01T13:07:57.318832+0000",
Oct  1 09:08:08 np0005464214 boring_tu[75265]:        "services": {}
Oct  1 09:08:08 np0005464214 boring_tu[75265]:    },
Oct  1 09:08:08 np0005464214 boring_tu[75265]:    "progress_events": {}
Oct  1 09:08:08 np0005464214 boring_tu[75265]: }
Oct  1 09:08:08 np0005464214 systemd[1]: libpod-d09a189191ac01fb712d1aeeb3dbe34c0cf1c034bf7a44d79e4c79ea1178f196.scope: Deactivated successfully.
Oct  1 09:08:08 np0005464214 podman[75249]: 2025-10-01 13:08:08.511286409 +0000 UTC m=+0.498843018 container died d09a189191ac01fb712d1aeeb3dbe34c0cf1c034bf7a44d79e4c79ea1178f196 (image=quay.io/ceph/ceph:v18, name=boring_tu, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef)
Oct  1 09:08:08 np0005464214 systemd[1]: var-lib-containers-storage-overlay-935c758e3efc223a2b707f72f9b32f8cf2e92c7fbd843493d59dbea7abb46bb2-merged.mount: Deactivated successfully.
Oct  1 09:08:08 np0005464214 podman[75249]: 2025-10-01 13:08:08.570682073 +0000 UTC m=+0.558238712 container remove d09a189191ac01fb712d1aeeb3dbe34c0cf1c034bf7a44d79e4c79ea1178f196 (image=quay.io/ceph/ceph:v18, name=boring_tu, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct  1 09:08:08 np0005464214 systemd[1]: libpod-conmon-d09a189191ac01fb712d1aeeb3dbe34c0cf1c034bf7a44d79e4c79ea1178f196.scope: Deactivated successfully.
Oct  1 09:08:10 np0005464214 ceph-mgr[75103]: mgr[py] Loading python module 'localpool'
Oct  1 09:08:10 np0005464214 ceph-mgr[75103]: mgr[py] Loading python module 'mds_autoscaler'
Oct  1 09:08:10 np0005464214 podman[75303]: 2025-10-01 13:08:10.722712854 +0000 UTC m=+0.116177728 container create 908406a5a6b97df0c9cda17fdef2f9bc4bc05226ee4abff840e9ad56d7d75419 (image=quay.io/ceph/ceph:v18, name=agitated_carver, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 09:08:10 np0005464214 podman[75303]: 2025-10-01 13:08:10.645916937 +0000 UTC m=+0.039381851 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  1 09:08:11 np0005464214 ceph-mgr[75103]: mgr[py] Loading python module 'mirroring'
Oct  1 09:08:11 np0005464214 systemd[1]: Started libpod-conmon-908406a5a6b97df0c9cda17fdef2f9bc4bc05226ee4abff840e9ad56d7d75419.scope.
Oct  1 09:08:11 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:08:11 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/901002afb072df1a16767e010202920aaf70a067c0cc04a4ecc060a800b049e1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 09:08:11 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/901002afb072df1a16767e010202920aaf70a067c0cc04a4ecc060a800b049e1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 09:08:11 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/901002afb072df1a16767e010202920aaf70a067c0cc04a4ecc060a800b049e1/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct  1 09:08:11 np0005464214 ceph-mgr[75103]: mgr[py] Loading python module 'nfs'
Oct  1 09:08:11 np0005464214 podman[75303]: 2025-10-01 13:08:11.637630092 +0000 UTC m=+1.031095026 container init 908406a5a6b97df0c9cda17fdef2f9bc4bc05226ee4abff840e9ad56d7d75419 (image=quay.io/ceph/ceph:v18, name=agitated_carver, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507)
Oct  1 09:08:11 np0005464214 podman[75303]: 2025-10-01 13:08:11.643612472 +0000 UTC m=+1.037077346 container start 908406a5a6b97df0c9cda17fdef2f9bc4bc05226ee4abff840e9ad56d7d75419 (image=quay.io/ceph/ceph:v18, name=agitated_carver, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Oct  1 09:08:11 np0005464214 podman[75303]: 2025-10-01 13:08:11.64736826 +0000 UTC m=+1.040833204 container attach 908406a5a6b97df0c9cda17fdef2f9bc4bc05226ee4abff840e9ad56d7d75419 (image=quay.io/ceph/ceph:v18, name=agitated_carver, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct  1 09:08:11 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Oct  1 09:08:11 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3322485282' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Oct  1 09:08:11 np0005464214 agitated_carver[75319]: 
Oct  1 09:08:11 np0005464214 agitated_carver[75319]: {
Oct  1 09:08:11 np0005464214 agitated_carver[75319]:    "fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 09:08:11 np0005464214 agitated_carver[75319]:    "health": {
Oct  1 09:08:11 np0005464214 agitated_carver[75319]:        "status": "HEALTH_OK",
Oct  1 09:08:11 np0005464214 agitated_carver[75319]:        "checks": {},
Oct  1 09:08:11 np0005464214 agitated_carver[75319]:        "mutes": []
Oct  1 09:08:11 np0005464214 agitated_carver[75319]:    },
Oct  1 09:08:11 np0005464214 agitated_carver[75319]:    "election_epoch": 5,
Oct  1 09:08:11 np0005464214 agitated_carver[75319]:    "quorum": [
Oct  1 09:08:11 np0005464214 agitated_carver[75319]:        0
Oct  1 09:08:11 np0005464214 agitated_carver[75319]:    ],
Oct  1 09:08:11 np0005464214 agitated_carver[75319]:    "quorum_names": [
Oct  1 09:08:11 np0005464214 agitated_carver[75319]:        "compute-0"
Oct  1 09:08:11 np0005464214 agitated_carver[75319]:    ],
Oct  1 09:08:11 np0005464214 agitated_carver[75319]:    "quorum_age": 11,
Oct  1 09:08:11 np0005464214 agitated_carver[75319]:    "monmap": {
Oct  1 09:08:11 np0005464214 agitated_carver[75319]:        "epoch": 1,
Oct  1 09:08:11 np0005464214 agitated_carver[75319]:        "min_mon_release_name": "reef",
Oct  1 09:08:11 np0005464214 agitated_carver[75319]:        "num_mons": 1
Oct  1 09:08:11 np0005464214 agitated_carver[75319]:    },
Oct  1 09:08:11 np0005464214 agitated_carver[75319]:    "osdmap": {
Oct  1 09:08:11 np0005464214 agitated_carver[75319]:        "epoch": 1,
Oct  1 09:08:11 np0005464214 agitated_carver[75319]:        "num_osds": 0,
Oct  1 09:08:11 np0005464214 agitated_carver[75319]:        "num_up_osds": 0,
Oct  1 09:08:11 np0005464214 agitated_carver[75319]:        "osd_up_since": 0,
Oct  1 09:08:11 np0005464214 agitated_carver[75319]:        "num_in_osds": 0,
Oct  1 09:08:11 np0005464214 agitated_carver[75319]:        "osd_in_since": 0,
Oct  1 09:08:11 np0005464214 agitated_carver[75319]:        "num_remapped_pgs": 0
Oct  1 09:08:11 np0005464214 agitated_carver[75319]:    },
Oct  1 09:08:11 np0005464214 agitated_carver[75319]:    "pgmap": {
Oct  1 09:08:11 np0005464214 agitated_carver[75319]:        "pgs_by_state": [],
Oct  1 09:08:11 np0005464214 agitated_carver[75319]:        "num_pgs": 0,
Oct  1 09:08:11 np0005464214 agitated_carver[75319]:        "num_pools": 0,
Oct  1 09:08:11 np0005464214 agitated_carver[75319]:        "num_objects": 0,
Oct  1 09:08:11 np0005464214 agitated_carver[75319]:        "data_bytes": 0,
Oct  1 09:08:11 np0005464214 agitated_carver[75319]:        "bytes_used": 0,
Oct  1 09:08:11 np0005464214 agitated_carver[75319]:        "bytes_avail": 0,
Oct  1 09:08:11 np0005464214 agitated_carver[75319]:        "bytes_total": 0
Oct  1 09:08:11 np0005464214 agitated_carver[75319]:    },
Oct  1 09:08:11 np0005464214 agitated_carver[75319]:    "fsmap": {
Oct  1 09:08:11 np0005464214 agitated_carver[75319]:        "epoch": 1,
Oct  1 09:08:11 np0005464214 agitated_carver[75319]:        "by_rank": [],
Oct  1 09:08:11 np0005464214 agitated_carver[75319]:        "up:standby": 0
Oct  1 09:08:11 np0005464214 agitated_carver[75319]:    },
Oct  1 09:08:11 np0005464214 agitated_carver[75319]:    "mgrmap": {
Oct  1 09:08:11 np0005464214 agitated_carver[75319]:        "available": false,
Oct  1 09:08:11 np0005464214 agitated_carver[75319]:        "num_standbys": 0,
Oct  1 09:08:11 np0005464214 agitated_carver[75319]:        "modules": [
Oct  1 09:08:11 np0005464214 agitated_carver[75319]:            "iostat",
Oct  1 09:08:11 np0005464214 agitated_carver[75319]:            "nfs",
Oct  1 09:08:11 np0005464214 agitated_carver[75319]:            "restful"
Oct  1 09:08:11 np0005464214 agitated_carver[75319]:        ],
Oct  1 09:08:11 np0005464214 agitated_carver[75319]:        "services": {}
Oct  1 09:08:11 np0005464214 agitated_carver[75319]:    },
Oct  1 09:08:11 np0005464214 agitated_carver[75319]:    "servicemap": {
Oct  1 09:08:11 np0005464214 agitated_carver[75319]:        "epoch": 1,
Oct  1 09:08:11 np0005464214 agitated_carver[75319]:        "modified": "2025-10-01T13:07:57.318832+0000",
Oct  1 09:08:11 np0005464214 agitated_carver[75319]:        "services": {}
Oct  1 09:08:11 np0005464214 agitated_carver[75319]:    },
Oct  1 09:08:11 np0005464214 agitated_carver[75319]:    "progress_events": {}
Oct  1 09:08:11 np0005464214 agitated_carver[75319]: }
Oct  1 09:08:12 np0005464214 systemd[1]: libpod-908406a5a6b97df0c9cda17fdef2f9bc4bc05226ee4abff840e9ad56d7d75419.scope: Deactivated successfully.
Oct  1 09:08:12 np0005464214 podman[75303]: 2025-10-01 13:08:12.005605718 +0000 UTC m=+1.399070622 container died 908406a5a6b97df0c9cda17fdef2f9bc4bc05226ee4abff840e9ad56d7d75419 (image=quay.io/ceph/ceph:v18, name=agitated_carver, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Oct  1 09:08:12 np0005464214 systemd[1]: var-lib-containers-storage-overlay-901002afb072df1a16767e010202920aaf70a067c0cc04a4ecc060a800b049e1-merged.mount: Deactivated successfully.
Oct  1 09:08:12 np0005464214 podman[75303]: 2025-10-01 13:08:12.044643166 +0000 UTC m=+1.438108040 container remove 908406a5a6b97df0c9cda17fdef2f9bc4bc05226ee4abff840e9ad56d7d75419 (image=quay.io/ceph/ceph:v18, name=agitated_carver, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 09:08:12 np0005464214 systemd[1]: libpod-conmon-908406a5a6b97df0c9cda17fdef2f9bc4bc05226ee4abff840e9ad56d7d75419.scope: Deactivated successfully.
Oct  1 09:08:12 np0005464214 ceph-mgr[75103]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Oct  1 09:08:12 np0005464214 ceph-mgr[75103]: mgr[py] Loading python module 'orchestrator'
Oct  1 09:08:12 np0005464214 ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-mgr-compute-0-puxjpb[75099]: 2025-10-01T13:08:12.088+0000 7f0e0936f140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Oct  1 09:08:12 np0005464214 ceph-mgr[75103]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Oct  1 09:08:12 np0005464214 ceph-mgr[75103]: mgr[py] Loading python module 'osd_perf_query'
Oct  1 09:08:12 np0005464214 ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-mgr-compute-0-puxjpb[75099]: 2025-10-01T13:08:12.701+0000 7f0e0936f140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Oct  1 09:08:12 np0005464214 ceph-mgr[75103]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Oct  1 09:08:12 np0005464214 ceph-mgr[75103]: mgr[py] Loading python module 'osd_support'
Oct  1 09:08:12 np0005464214 ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-mgr-compute-0-puxjpb[75099]: 2025-10-01T13:08:12.948+0000 7f0e0936f140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Oct  1 09:08:13 np0005464214 ceph-mgr[75103]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Oct  1 09:08:13 np0005464214 ceph-mgr[75103]: mgr[py] Loading python module 'pg_autoscaler'
Oct  1 09:08:13 np0005464214 ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-mgr-compute-0-puxjpb[75099]: 2025-10-01T13:08:13.169+0000 7f0e0936f140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Oct  1 09:08:13 np0005464214 ceph-mgr[75103]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Oct  1 09:08:13 np0005464214 ceph-mgr[75103]: mgr[py] Loading python module 'progress'
Oct  1 09:08:13 np0005464214 ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-mgr-compute-0-puxjpb[75099]: 2025-10-01T13:08:13.421+0000 7f0e0936f140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Oct  1 09:08:13 np0005464214 ceph-mgr[75103]: mgr[py] Module progress has missing NOTIFY_TYPES member
Oct  1 09:08:13 np0005464214 ceph-mgr[75103]: mgr[py] Loading python module 'prometheus'
Oct  1 09:08:13 np0005464214 ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-mgr-compute-0-puxjpb[75099]: 2025-10-01T13:08:13.640+0000 7f0e0936f140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Oct  1 09:08:14 np0005464214 podman[75359]: 2025-10-01 13:08:14.131524559 +0000 UTC m=+0.057805565 container create 12fafaede963fd251d6af2891946af318d2e339281f99db305ae532c7c493e7b (image=quay.io/ceph/ceph:v18, name=vigilant_nightingale, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0)
Oct  1 09:08:14 np0005464214 systemd[1]: Started libpod-conmon-12fafaede963fd251d6af2891946af318d2e339281f99db305ae532c7c493e7b.scope.
Oct  1 09:08:14 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:08:14 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c74a4c30b5bcb8e630f39318936639432c966c289c0970a8eb50fe1a988dcc1e/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct  1 09:08:14 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c74a4c30b5bcb8e630f39318936639432c966c289c0970a8eb50fe1a988dcc1e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 09:08:14 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c74a4c30b5bcb8e630f39318936639432c966c289c0970a8eb50fe1a988dcc1e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 09:08:14 np0005464214 podman[75359]: 2025-10-01 13:08:14.111743131 +0000 UTC m=+0.038024157 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  1 09:08:14 np0005464214 podman[75359]: 2025-10-01 13:08:14.211848917 +0000 UTC m=+0.138130013 container init 12fafaede963fd251d6af2891946af318d2e339281f99db305ae532c7c493e7b (image=quay.io/ceph/ceph:v18, name=vigilant_nightingale, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Oct  1 09:08:14 np0005464214 podman[75359]: 2025-10-01 13:08:14.217770185 +0000 UTC m=+0.144051211 container start 12fafaede963fd251d6af2891946af318d2e339281f99db305ae532c7c493e7b (image=quay.io/ceph/ceph:v18, name=vigilant_nightingale, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef)
Oct  1 09:08:14 np0005464214 podman[75359]: 2025-10-01 13:08:14.221885906 +0000 UTC m=+0.148166912 container attach 12fafaede963fd251d6af2891946af318d2e339281f99db305ae532c7c493e7b (image=quay.io/ceph/ceph:v18, name=vigilant_nightingale, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 09:08:14 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Oct  1 09:08:14 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1860770698' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Oct  1 09:08:14 np0005464214 vigilant_nightingale[75375]: 
Oct  1 09:08:14 np0005464214 vigilant_nightingale[75375]: {
Oct  1 09:08:14 np0005464214 vigilant_nightingale[75375]:    "fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 09:08:14 np0005464214 vigilant_nightingale[75375]:    "health": {
Oct  1 09:08:14 np0005464214 vigilant_nightingale[75375]:        "status": "HEALTH_OK",
Oct  1 09:08:14 np0005464214 vigilant_nightingale[75375]:        "checks": {},
Oct  1 09:08:14 np0005464214 vigilant_nightingale[75375]:        "mutes": []
Oct  1 09:08:14 np0005464214 vigilant_nightingale[75375]:    },
Oct  1 09:08:14 np0005464214 vigilant_nightingale[75375]:    "election_epoch": 5,
Oct  1 09:08:14 np0005464214 vigilant_nightingale[75375]:    "quorum": [
Oct  1 09:08:14 np0005464214 vigilant_nightingale[75375]:        0
Oct  1 09:08:14 np0005464214 vigilant_nightingale[75375]:    ],
Oct  1 09:08:14 np0005464214 vigilant_nightingale[75375]:    "quorum_names": [
Oct  1 09:08:14 np0005464214 vigilant_nightingale[75375]:        "compute-0"
Oct  1 09:08:14 np0005464214 vigilant_nightingale[75375]:    ],
Oct  1 09:08:14 np0005464214 vigilant_nightingale[75375]:    "quorum_age": 14,
Oct  1 09:08:14 np0005464214 vigilant_nightingale[75375]:    "monmap": {
Oct  1 09:08:14 np0005464214 vigilant_nightingale[75375]:        "epoch": 1,
Oct  1 09:08:14 np0005464214 vigilant_nightingale[75375]:        "min_mon_release_name": "reef",
Oct  1 09:08:14 np0005464214 vigilant_nightingale[75375]:        "num_mons": 1
Oct  1 09:08:14 np0005464214 vigilant_nightingale[75375]:    },
Oct  1 09:08:14 np0005464214 vigilant_nightingale[75375]:    "osdmap": {
Oct  1 09:08:14 np0005464214 vigilant_nightingale[75375]:        "epoch": 1,
Oct  1 09:08:14 np0005464214 vigilant_nightingale[75375]:        "num_osds": 0,
Oct  1 09:08:14 np0005464214 vigilant_nightingale[75375]:        "num_up_osds": 0,
Oct  1 09:08:14 np0005464214 vigilant_nightingale[75375]:        "osd_up_since": 0,
Oct  1 09:08:14 np0005464214 vigilant_nightingale[75375]:        "num_in_osds": 0,
Oct  1 09:08:14 np0005464214 vigilant_nightingale[75375]:        "osd_in_since": 0,
Oct  1 09:08:14 np0005464214 vigilant_nightingale[75375]:        "num_remapped_pgs": 0
Oct  1 09:08:14 np0005464214 vigilant_nightingale[75375]:    },
Oct  1 09:08:14 np0005464214 vigilant_nightingale[75375]:    "pgmap": {
Oct  1 09:08:14 np0005464214 vigilant_nightingale[75375]:        "pgs_by_state": [],
Oct  1 09:08:14 np0005464214 vigilant_nightingale[75375]:        "num_pgs": 0,
Oct  1 09:08:14 np0005464214 vigilant_nightingale[75375]:        "num_pools": 0,
Oct  1 09:08:14 np0005464214 vigilant_nightingale[75375]:        "num_objects": 0,
Oct  1 09:08:14 np0005464214 vigilant_nightingale[75375]:        "data_bytes": 0,
Oct  1 09:08:14 np0005464214 vigilant_nightingale[75375]:        "bytes_used": 0,
Oct  1 09:08:14 np0005464214 vigilant_nightingale[75375]:        "bytes_avail": 0,
Oct  1 09:08:14 np0005464214 vigilant_nightingale[75375]:        "bytes_total": 0
Oct  1 09:08:14 np0005464214 vigilant_nightingale[75375]:    },
Oct  1 09:08:14 np0005464214 vigilant_nightingale[75375]:    "fsmap": {
Oct  1 09:08:14 np0005464214 vigilant_nightingale[75375]:        "epoch": 1,
Oct  1 09:08:14 np0005464214 vigilant_nightingale[75375]:        "by_rank": [],
Oct  1 09:08:14 np0005464214 vigilant_nightingale[75375]:        "up:standby": 0
Oct  1 09:08:14 np0005464214 vigilant_nightingale[75375]:    },
Oct  1 09:08:14 np0005464214 vigilant_nightingale[75375]:    "mgrmap": {
Oct  1 09:08:14 np0005464214 vigilant_nightingale[75375]:        "available": false,
Oct  1 09:08:14 np0005464214 vigilant_nightingale[75375]:        "num_standbys": 0,
Oct  1 09:08:14 np0005464214 vigilant_nightingale[75375]:        "modules": [
Oct  1 09:08:14 np0005464214 vigilant_nightingale[75375]:            "iostat",
Oct  1 09:08:14 np0005464214 vigilant_nightingale[75375]:            "nfs",
Oct  1 09:08:14 np0005464214 vigilant_nightingale[75375]:            "restful"
Oct  1 09:08:14 np0005464214 vigilant_nightingale[75375]:        ],
Oct  1 09:08:14 np0005464214 vigilant_nightingale[75375]:        "services": {}
Oct  1 09:08:14 np0005464214 vigilant_nightingale[75375]:    },
Oct  1 09:08:14 np0005464214 vigilant_nightingale[75375]:    "servicemap": {
Oct  1 09:08:14 np0005464214 vigilant_nightingale[75375]:        "epoch": 1,
Oct  1 09:08:14 np0005464214 vigilant_nightingale[75375]:        "modified": "2025-10-01T13:07:57.318832+0000",
Oct  1 09:08:14 np0005464214 vigilant_nightingale[75375]:        "services": {}
Oct  1 09:08:14 np0005464214 vigilant_nightingale[75375]:    },
Oct  1 09:08:14 np0005464214 vigilant_nightingale[75375]:    "progress_events": {}
Oct  1 09:08:14 np0005464214 vigilant_nightingale[75375]: }
Oct  1 09:08:14 np0005464214 systemd[1]: libpod-12fafaede963fd251d6af2891946af318d2e339281f99db305ae532c7c493e7b.scope: Deactivated successfully.
Oct  1 09:08:14 np0005464214 podman[75359]: 2025-10-01 13:08:14.58985614 +0000 UTC m=+0.516137156 container died 12fafaede963fd251d6af2891946af318d2e339281f99db305ae532c7c493e7b (image=quay.io/ceph/ceph:v18, name=vigilant_nightingale, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 09:08:14 np0005464214 ceph-mgr[75103]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Oct  1 09:08:14 np0005464214 ceph-mgr[75103]: mgr[py] Loading python module 'rbd_support'
Oct  1 09:08:14 np0005464214 ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-mgr-compute-0-puxjpb[75099]: 2025-10-01T13:08:14.592+0000 7f0e0936f140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Oct  1 09:08:14 np0005464214 systemd[1]: var-lib-containers-storage-overlay-c74a4c30b5bcb8e630f39318936639432c966c289c0970a8eb50fe1a988dcc1e-merged.mount: Deactivated successfully.
Oct  1 09:08:14 np0005464214 podman[75359]: 2025-10-01 13:08:14.648022137 +0000 UTC m=+0.574303183 container remove 12fafaede963fd251d6af2891946af318d2e339281f99db305ae532c7c493e7b (image=quay.io/ceph/ceph:v18, name=vigilant_nightingale, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct  1 09:08:14 np0005464214 systemd[1]: libpod-conmon-12fafaede963fd251d6af2891946af318d2e339281f99db305ae532c7c493e7b.scope: Deactivated successfully.
Oct  1 09:08:14 np0005464214 ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-mgr-compute-0-puxjpb[75099]: 2025-10-01T13:08:14.897+0000 7f0e0936f140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Oct  1 09:08:14 np0005464214 ceph-mgr[75103]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Oct  1 09:08:14 np0005464214 ceph-mgr[75103]: mgr[py] Loading python module 'restful'
Oct  1 09:08:15 np0005464214 ceph-mgr[75103]: mgr[py] Loading python module 'rgw'
Oct  1 09:08:16 np0005464214 ceph-mgr[75103]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Oct  1 09:08:16 np0005464214 ceph-mgr[75103]: mgr[py] Loading python module 'rook'
Oct  1 09:08:16 np0005464214 ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-mgr-compute-0-puxjpb[75099]: 2025-10-01T13:08:16.329+0000 7f0e0936f140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Oct  1 09:08:16 np0005464214 podman[75413]: 2025-10-01 13:08:16.711230189 +0000 UTC m=+0.038891015 container create a20bfe0a112dfdb7ebb793eb0d74f8c672dbdf56948ea198f0fa011bea9098dc (image=quay.io/ceph/ceph:v18, name=vibrant_kirch, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Oct  1 09:08:16 np0005464214 systemd[1]: Started libpod-conmon-a20bfe0a112dfdb7ebb793eb0d74f8c672dbdf56948ea198f0fa011bea9098dc.scope.
Oct  1 09:08:16 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:08:16 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b6769f9a7cd8a4eb3ed785fab781a2f23c9c28ce00ea17d044e64fe56a9d25a7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 09:08:16 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b6769f9a7cd8a4eb3ed785fab781a2f23c9c28ce00ea17d044e64fe56a9d25a7/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct  1 09:08:16 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b6769f9a7cd8a4eb3ed785fab781a2f23c9c28ce00ea17d044e64fe56a9d25a7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 09:08:16 np0005464214 podman[75413]: 2025-10-01 13:08:16.773449602 +0000 UTC m=+0.101110538 container init a20bfe0a112dfdb7ebb793eb0d74f8c672dbdf56948ea198f0fa011bea9098dc (image=quay.io/ceph/ceph:v18, name=vibrant_kirch, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True)
Oct  1 09:08:16 np0005464214 podman[75413]: 2025-10-01 13:08:16.778171123 +0000 UTC m=+0.105831919 container start a20bfe0a112dfdb7ebb793eb0d74f8c672dbdf56948ea198f0fa011bea9098dc (image=quay.io/ceph/ceph:v18, name=vibrant_kirch, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Oct  1 09:08:16 np0005464214 podman[75413]: 2025-10-01 13:08:16.7812377 +0000 UTC m=+0.108898576 container attach a20bfe0a112dfdb7ebb793eb0d74f8c672dbdf56948ea198f0fa011bea9098dc (image=quay.io/ceph/ceph:v18, name=vibrant_kirch, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 09:08:16 np0005464214 podman[75413]: 2025-10-01 13:08:16.69268865 +0000 UTC m=+0.020349526 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  1 09:08:17 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Oct  1 09:08:17 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3826315196' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Oct  1 09:08:17 np0005464214 vibrant_kirch[75429]: 
Oct  1 09:08:17 np0005464214 vibrant_kirch[75429]: {
Oct  1 09:08:17 np0005464214 vibrant_kirch[75429]:    "fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 09:08:17 np0005464214 vibrant_kirch[75429]:    "health": {
Oct  1 09:08:17 np0005464214 vibrant_kirch[75429]:        "status": "HEALTH_OK",
Oct  1 09:08:17 np0005464214 vibrant_kirch[75429]:        "checks": {},
Oct  1 09:08:17 np0005464214 vibrant_kirch[75429]:        "mutes": []
Oct  1 09:08:17 np0005464214 vibrant_kirch[75429]:    },
Oct  1 09:08:17 np0005464214 vibrant_kirch[75429]:    "election_epoch": 5,
Oct  1 09:08:17 np0005464214 vibrant_kirch[75429]:    "quorum": [
Oct  1 09:08:17 np0005464214 vibrant_kirch[75429]:        0
Oct  1 09:08:17 np0005464214 vibrant_kirch[75429]:    ],
Oct  1 09:08:17 np0005464214 vibrant_kirch[75429]:    "quorum_names": [
Oct  1 09:08:17 np0005464214 vibrant_kirch[75429]:        "compute-0"
Oct  1 09:08:17 np0005464214 vibrant_kirch[75429]:    ],
Oct  1 09:08:17 np0005464214 vibrant_kirch[75429]:    "quorum_age": 16,
Oct  1 09:08:17 np0005464214 vibrant_kirch[75429]:    "monmap": {
Oct  1 09:08:17 np0005464214 vibrant_kirch[75429]:        "epoch": 1,
Oct  1 09:08:17 np0005464214 vibrant_kirch[75429]:        "min_mon_release_name": "reef",
Oct  1 09:08:17 np0005464214 vibrant_kirch[75429]:        "num_mons": 1
Oct  1 09:08:17 np0005464214 vibrant_kirch[75429]:    },
Oct  1 09:08:17 np0005464214 vibrant_kirch[75429]:    "osdmap": {
Oct  1 09:08:17 np0005464214 vibrant_kirch[75429]:        "epoch": 1,
Oct  1 09:08:17 np0005464214 vibrant_kirch[75429]:        "num_osds": 0,
Oct  1 09:08:17 np0005464214 vibrant_kirch[75429]:        "num_up_osds": 0,
Oct  1 09:08:17 np0005464214 vibrant_kirch[75429]:        "osd_up_since": 0,
Oct  1 09:08:17 np0005464214 vibrant_kirch[75429]:        "num_in_osds": 0,
Oct  1 09:08:17 np0005464214 vibrant_kirch[75429]:        "osd_in_since": 0,
Oct  1 09:08:17 np0005464214 vibrant_kirch[75429]:        "num_remapped_pgs": 0
Oct  1 09:08:17 np0005464214 vibrant_kirch[75429]:    },
Oct  1 09:08:17 np0005464214 vibrant_kirch[75429]:    "pgmap": {
Oct  1 09:08:17 np0005464214 vibrant_kirch[75429]:        "pgs_by_state": [],
Oct  1 09:08:17 np0005464214 vibrant_kirch[75429]:        "num_pgs": 0,
Oct  1 09:08:17 np0005464214 vibrant_kirch[75429]:        "num_pools": 0,
Oct  1 09:08:17 np0005464214 vibrant_kirch[75429]:        "num_objects": 0,
Oct  1 09:08:17 np0005464214 vibrant_kirch[75429]:        "data_bytes": 0,
Oct  1 09:08:17 np0005464214 vibrant_kirch[75429]:        "bytes_used": 0,
Oct  1 09:08:17 np0005464214 vibrant_kirch[75429]:        "bytes_avail": 0,
Oct  1 09:08:17 np0005464214 vibrant_kirch[75429]:        "bytes_total": 0
Oct  1 09:08:17 np0005464214 vibrant_kirch[75429]:    },
Oct  1 09:08:17 np0005464214 vibrant_kirch[75429]:    "fsmap": {
Oct  1 09:08:17 np0005464214 vibrant_kirch[75429]:        "epoch": 1,
Oct  1 09:08:17 np0005464214 vibrant_kirch[75429]:        "by_rank": [],
Oct  1 09:08:17 np0005464214 vibrant_kirch[75429]:        "up:standby": 0
Oct  1 09:08:17 np0005464214 vibrant_kirch[75429]:    },
Oct  1 09:08:17 np0005464214 vibrant_kirch[75429]:    "mgrmap": {
Oct  1 09:08:17 np0005464214 vibrant_kirch[75429]:        "available": false,
Oct  1 09:08:17 np0005464214 vibrant_kirch[75429]:        "num_standbys": 0,
Oct  1 09:08:17 np0005464214 vibrant_kirch[75429]:        "modules": [
Oct  1 09:08:17 np0005464214 vibrant_kirch[75429]:            "iostat",
Oct  1 09:08:17 np0005464214 vibrant_kirch[75429]:            "nfs",
Oct  1 09:08:17 np0005464214 vibrant_kirch[75429]:            "restful"
Oct  1 09:08:17 np0005464214 vibrant_kirch[75429]:        ],
Oct  1 09:08:17 np0005464214 vibrant_kirch[75429]:        "services": {}
Oct  1 09:08:17 np0005464214 vibrant_kirch[75429]:    },
Oct  1 09:08:17 np0005464214 vibrant_kirch[75429]:    "servicemap": {
Oct  1 09:08:17 np0005464214 vibrant_kirch[75429]:        "epoch": 1,
Oct  1 09:08:17 np0005464214 vibrant_kirch[75429]:        "modified": "2025-10-01T13:07:57.318832+0000",
Oct  1 09:08:17 np0005464214 vibrant_kirch[75429]:        "services": {}
Oct  1 09:08:17 np0005464214 vibrant_kirch[75429]:    },
Oct  1 09:08:17 np0005464214 vibrant_kirch[75429]:    "progress_events": {}
Oct  1 09:08:17 np0005464214 vibrant_kirch[75429]: }
Oct  1 09:08:17 np0005464214 systemd[1]: libpod-a20bfe0a112dfdb7ebb793eb0d74f8c672dbdf56948ea198f0fa011bea9098dc.scope: Deactivated successfully.
Oct  1 09:08:17 np0005464214 podman[75413]: 2025-10-01 13:08:17.17077729 +0000 UTC m=+0.498438146 container died a20bfe0a112dfdb7ebb793eb0d74f8c672dbdf56948ea198f0fa011bea9098dc (image=quay.io/ceph/ceph:v18, name=vibrant_kirch, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct  1 09:08:17 np0005464214 systemd[1]: var-lib-containers-storage-overlay-b6769f9a7cd8a4eb3ed785fab781a2f23c9c28ce00ea17d044e64fe56a9d25a7-merged.mount: Deactivated successfully.
Oct  1 09:08:17 np0005464214 podman[75413]: 2025-10-01 13:08:17.215076955 +0000 UTC m=+0.542737751 container remove a20bfe0a112dfdb7ebb793eb0d74f8c672dbdf56948ea198f0fa011bea9098dc (image=quay.io/ceph/ceph:v18, name=vibrant_kirch, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Oct  1 09:08:17 np0005464214 systemd[1]: libpod-conmon-a20bfe0a112dfdb7ebb793eb0d74f8c672dbdf56948ea198f0fa011bea9098dc.scope: Deactivated successfully.
Oct  1 09:08:18 np0005464214 ceph-mgr[75103]: mgr[py] Module rook has missing NOTIFY_TYPES member
Oct  1 09:08:18 np0005464214 ceph-mgr[75103]: mgr[py] Loading python module 'selftest'
Oct  1 09:08:18 np0005464214 ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-mgr-compute-0-puxjpb[75099]: 2025-10-01T13:08:18.370+0000 7f0e0936f140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Oct  1 09:08:18 np0005464214 ceph-mgr[75103]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Oct  1 09:08:18 np0005464214 ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-mgr-compute-0-puxjpb[75099]: 2025-10-01T13:08:18.613+0000 7f0e0936f140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Oct  1 09:08:18 np0005464214 ceph-mgr[75103]: mgr[py] Loading python module 'snap_schedule'
Oct  1 09:08:18 np0005464214 ceph-mgr[75103]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Oct  1 09:08:18 np0005464214 ceph-mgr[75103]: mgr[py] Loading python module 'stats'
Oct  1 09:08:18 np0005464214 ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-mgr-compute-0-puxjpb[75099]: 2025-10-01T13:08:18.854+0000 7f0e0936f140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Oct  1 09:08:19 np0005464214 ceph-mgr[75103]: mgr[py] Loading python module 'status'
Oct  1 09:08:19 np0005464214 podman[75470]: 2025-10-01 13:08:19.290203664 +0000 UTC m=+0.043654065 container create ada587f9ccc97689e2f45f88cb17930536fba6f0db561a4fc3c15b5b3d3946a0 (image=quay.io/ceph/ceph:v18, name=zealous_hellman, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct  1 09:08:19 np0005464214 systemd[1]: Started libpod-conmon-ada587f9ccc97689e2f45f88cb17930536fba6f0db561a4fc3c15b5b3d3946a0.scope.
Oct  1 09:08:19 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:08:19 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b14826cecbea1971e60a46317debbf567dc97aa9a650cf9758a48daf3c66664c/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct  1 09:08:19 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b14826cecbea1971e60a46317debbf567dc97aa9a650cf9758a48daf3c66664c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 09:08:19 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b14826cecbea1971e60a46317debbf567dc97aa9a650cf9758a48daf3c66664c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 09:08:19 np0005464214 ceph-mgr[75103]: mgr[py] Module status has missing NOTIFY_TYPES member
Oct  1 09:08:19 np0005464214 ceph-mgr[75103]: mgr[py] Loading python module 'telegraf'
Oct  1 09:08:19 np0005464214 ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-mgr-compute-0-puxjpb[75099]: 2025-10-01T13:08:19.356+0000 7f0e0936f140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Oct  1 09:08:19 np0005464214 podman[75470]: 2025-10-01 13:08:19.272066389 +0000 UTC m=+0.025516760 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  1 09:08:19 np0005464214 podman[75470]: 2025-10-01 13:08:19.385541779 +0000 UTC m=+0.138992140 container init ada587f9ccc97689e2f45f88cb17930536fba6f0db561a4fc3c15b5b3d3946a0 (image=quay.io/ceph/ceph:v18, name=zealous_hellman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 09:08:19 np0005464214 podman[75470]: 2025-10-01 13:08:19.391182778 +0000 UTC m=+0.144633139 container start ada587f9ccc97689e2f45f88cb17930536fba6f0db561a4fc3c15b5b3d3946a0 (image=quay.io/ceph/ceph:v18, name=zealous_hellman, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct  1 09:08:19 np0005464214 podman[75470]: 2025-10-01 13:08:19.394417751 +0000 UTC m=+0.147868142 container attach ada587f9ccc97689e2f45f88cb17930536fba6f0db561a4fc3c15b5b3d3946a0 (image=quay.io/ceph/ceph:v18, name=zealous_hellman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  1 09:08:19 np0005464214 ceph-mgr[75103]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Oct  1 09:08:19 np0005464214 ceph-mgr[75103]: mgr[py] Loading python module 'telemetry'
Oct  1 09:08:19 np0005464214 ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-mgr-compute-0-puxjpb[75099]: 2025-10-01T13:08:19.594+0000 7f0e0936f140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Oct  1 09:08:19 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Oct  1 09:08:19 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3726755813' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Oct  1 09:08:19 np0005464214 zealous_hellman[75486]: 
Oct  1 09:08:19 np0005464214 zealous_hellman[75486]: {
Oct  1 09:08:19 np0005464214 zealous_hellman[75486]:    "fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 09:08:19 np0005464214 zealous_hellman[75486]:    "health": {
Oct  1 09:08:19 np0005464214 zealous_hellman[75486]:        "status": "HEALTH_OK",
Oct  1 09:08:19 np0005464214 zealous_hellman[75486]:        "checks": {},
Oct  1 09:08:19 np0005464214 zealous_hellman[75486]:        "mutes": []
Oct  1 09:08:19 np0005464214 zealous_hellman[75486]:    },
Oct  1 09:08:19 np0005464214 zealous_hellman[75486]:    "election_epoch": 5,
Oct  1 09:08:19 np0005464214 zealous_hellman[75486]:    "quorum": [
Oct  1 09:08:19 np0005464214 zealous_hellman[75486]:        0
Oct  1 09:08:19 np0005464214 zealous_hellman[75486]:    ],
Oct  1 09:08:19 np0005464214 zealous_hellman[75486]:    "quorum_names": [
Oct  1 09:08:19 np0005464214 zealous_hellman[75486]:        "compute-0"
Oct  1 09:08:19 np0005464214 zealous_hellman[75486]:    ],
Oct  1 09:08:19 np0005464214 zealous_hellman[75486]:    "quorum_age": 19,
Oct  1 09:08:19 np0005464214 zealous_hellman[75486]:    "monmap": {
Oct  1 09:08:19 np0005464214 zealous_hellman[75486]:        "epoch": 1,
Oct  1 09:08:19 np0005464214 zealous_hellman[75486]:        "min_mon_release_name": "reef",
Oct  1 09:08:19 np0005464214 zealous_hellman[75486]:        "num_mons": 1
Oct  1 09:08:19 np0005464214 zealous_hellman[75486]:    },
Oct  1 09:08:19 np0005464214 zealous_hellman[75486]:    "osdmap": {
Oct  1 09:08:19 np0005464214 zealous_hellman[75486]:        "epoch": 1,
Oct  1 09:08:19 np0005464214 zealous_hellman[75486]:        "num_osds": 0,
Oct  1 09:08:19 np0005464214 zealous_hellman[75486]:        "num_up_osds": 0,
Oct  1 09:08:19 np0005464214 zealous_hellman[75486]:        "osd_up_since": 0,
Oct  1 09:08:19 np0005464214 zealous_hellman[75486]:        "num_in_osds": 0,
Oct  1 09:08:19 np0005464214 zealous_hellman[75486]:        "osd_in_since": 0,
Oct  1 09:08:19 np0005464214 zealous_hellman[75486]:        "num_remapped_pgs": 0
Oct  1 09:08:19 np0005464214 zealous_hellman[75486]:    },
Oct  1 09:08:19 np0005464214 zealous_hellman[75486]:    "pgmap": {
Oct  1 09:08:19 np0005464214 zealous_hellman[75486]:        "pgs_by_state": [],
Oct  1 09:08:19 np0005464214 zealous_hellman[75486]:        "num_pgs": 0,
Oct  1 09:08:19 np0005464214 zealous_hellman[75486]:        "num_pools": 0,
Oct  1 09:08:19 np0005464214 zealous_hellman[75486]:        "num_objects": 0,
Oct  1 09:08:19 np0005464214 zealous_hellman[75486]:        "data_bytes": 0,
Oct  1 09:08:19 np0005464214 zealous_hellman[75486]:        "bytes_used": 0,
Oct  1 09:08:19 np0005464214 zealous_hellman[75486]:        "bytes_avail": 0,
Oct  1 09:08:19 np0005464214 zealous_hellman[75486]:        "bytes_total": 0
Oct  1 09:08:19 np0005464214 zealous_hellman[75486]:    },
Oct  1 09:08:19 np0005464214 zealous_hellman[75486]:    "fsmap": {
Oct  1 09:08:19 np0005464214 zealous_hellman[75486]:        "epoch": 1,
Oct  1 09:08:19 np0005464214 zealous_hellman[75486]:        "by_rank": [],
Oct  1 09:08:19 np0005464214 zealous_hellman[75486]:        "up:standby": 0
Oct  1 09:08:19 np0005464214 zealous_hellman[75486]:    },
Oct  1 09:08:19 np0005464214 zealous_hellman[75486]:    "mgrmap": {
Oct  1 09:08:19 np0005464214 zealous_hellman[75486]:        "available": false,
Oct  1 09:08:19 np0005464214 zealous_hellman[75486]:        "num_standbys": 0,
Oct  1 09:08:19 np0005464214 zealous_hellman[75486]:        "modules": [
Oct  1 09:08:19 np0005464214 zealous_hellman[75486]:            "iostat",
Oct  1 09:08:19 np0005464214 zealous_hellman[75486]:            "nfs",
Oct  1 09:08:19 np0005464214 zealous_hellman[75486]:            "restful"
Oct  1 09:08:19 np0005464214 zealous_hellman[75486]:        ],
Oct  1 09:08:19 np0005464214 zealous_hellman[75486]:        "services": {}
Oct  1 09:08:19 np0005464214 zealous_hellman[75486]:    },
Oct  1 09:08:19 np0005464214 zealous_hellman[75486]:    "servicemap": {
Oct  1 09:08:19 np0005464214 zealous_hellman[75486]:        "epoch": 1,
Oct  1 09:08:19 np0005464214 zealous_hellman[75486]:        "modified": "2025-10-01T13:07:57.318832+0000",
Oct  1 09:08:19 np0005464214 zealous_hellman[75486]:        "services": {}
Oct  1 09:08:19 np0005464214 zealous_hellman[75486]:    },
Oct  1 09:08:19 np0005464214 zealous_hellman[75486]:    "progress_events": {}
Oct  1 09:08:19 np0005464214 zealous_hellman[75486]: }
Oct  1 09:08:19 np0005464214 systemd[1]: libpod-ada587f9ccc97689e2f45f88cb17930536fba6f0db561a4fc3c15b5b3d3946a0.scope: Deactivated successfully.
Oct  1 09:08:19 np0005464214 podman[75470]: 2025-10-01 13:08:19.779923653 +0000 UTC m=+0.533374014 container died ada587f9ccc97689e2f45f88cb17930536fba6f0db561a4fc3c15b5b3d3946a0 (image=quay.io/ceph/ceph:v18, name=zealous_hellman, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True)
Oct  1 09:08:19 np0005464214 systemd[1]: var-lib-containers-storage-overlay-b14826cecbea1971e60a46317debbf567dc97aa9a650cf9758a48daf3c66664c-merged.mount: Deactivated successfully.
Oct  1 09:08:19 np0005464214 podman[75470]: 2025-10-01 13:08:19.821075639 +0000 UTC m=+0.574526000 container remove ada587f9ccc97689e2f45f88cb17930536fba6f0db561a4fc3c15b5b3d3946a0 (image=quay.io/ceph/ceph:v18, name=zealous_hellman, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct  1 09:08:19 np0005464214 systemd[1]: libpod-conmon-ada587f9ccc97689e2f45f88cb17930536fba6f0db561a4fc3c15b5b3d3946a0.scope: Deactivated successfully.
Oct  1 09:08:20 np0005464214 ceph-mgr[75103]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Oct  1 09:08:20 np0005464214 ceph-mgr[75103]: mgr[py] Loading python module 'test_orchestrator'
Oct  1 09:08:20 np0005464214 ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-mgr-compute-0-puxjpb[75099]: 2025-10-01T13:08:20.197+0000 7f0e0936f140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Oct  1 09:08:20 np0005464214 ceph-mgr[75103]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Oct  1 09:08:20 np0005464214 ceph-mgr[75103]: mgr[py] Loading python module 'volumes'
Oct  1 09:08:20 np0005464214 ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-mgr-compute-0-puxjpb[75099]: 2025-10-01T13:08:20.834+0000 7f0e0936f140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Oct  1 09:08:21 np0005464214 ceph-mgr[75103]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Oct  1 09:08:21 np0005464214 ceph-mgr[75103]: mgr[py] Loading python module 'zabbix'
Oct  1 09:08:21 np0005464214 ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-mgr-compute-0-puxjpb[75099]: 2025-10-01T13:08:21.512+0000 7f0e0936f140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Oct  1 09:08:21 np0005464214 ceph-mgr[75103]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Oct  1 09:08:21 np0005464214 ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-mgr-compute-0-puxjpb[75099]: 2025-10-01T13:08:21.733+0000 7f0e0936f140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Oct  1 09:08:21 np0005464214 ceph-mgr[75103]: ms_deliver_dispatch: unhandled message 0x5578f8d671e0 mon_map magic: 0 v1 from mon.0 v2:192.168.122.100:3300/0
Oct  1 09:08:21 np0005464214 ceph-mon[74802]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.puxjpb
Oct  1 09:08:21 np0005464214 ceph-mgr[75103]: mgr handle_mgr_map Activating!
Oct  1 09:08:21 np0005464214 ceph-mon[74802]: log_channel(cluster) log [DBG] : mgrmap e2: compute-0.puxjpb(active, starting, since 0.0388835s)
Oct  1 09:08:21 np0005464214 ceph-mgr[75103]: mgr handle_mgr_map I am now activating
Oct  1 09:08:21 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds metadata"} v 0) v1
Oct  1 09:08:21 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/746737625' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "mds metadata"}]: dispatch
Oct  1 09:08:21 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).mds e1 all = 1
Oct  1 09:08:21 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata"} v 0) v1
Oct  1 09:08:21 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/746737625' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd metadata"}]: dispatch
Oct  1 09:08:21 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata"} v 0) v1
Oct  1 09:08:21 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/746737625' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "mon metadata"}]: dispatch
Oct  1 09:08:21 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0) v1
Oct  1 09:08:21 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/746737625' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Oct  1 09:08:21 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.puxjpb", "id": "compute-0.puxjpb"} v 0) v1
Oct  1 09:08:21 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/746737625' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "mgr metadata", "who": "compute-0.puxjpb", "id": "compute-0.puxjpb"}]: dispatch
Oct  1 09:08:21 np0005464214 ceph-mgr[75103]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  1 09:08:21 np0005464214 ceph-mgr[75103]: mgr load Constructed class from module: balancer
Oct  1 09:08:21 np0005464214 ceph-mgr[75103]: [balancer INFO root] Starting
Oct  1 09:08:21 np0005464214 ceph-mgr[75103]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  1 09:08:21 np0005464214 ceph-mgr[75103]: mgr load Constructed class from module: crash
Oct  1 09:08:21 np0005464214 ceph-mgr[75103]: [balancer INFO root] Optimize plan auto_2025-10-01_13:08:21
Oct  1 09:08:21 np0005464214 ceph-mgr[75103]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  1 09:08:21 np0005464214 ceph-mgr[75103]: [balancer INFO root] do_upmap
Oct  1 09:08:21 np0005464214 ceph-mgr[75103]: [balancer INFO root] No pools available
Oct  1 09:08:21 np0005464214 ceph-mon[74802]: log_channel(cluster) log [INF] : Manager daemon compute-0.puxjpb is now available
Oct  1 09:08:21 np0005464214 ceph-mgr[75103]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  1 09:08:21 np0005464214 ceph-mgr[75103]: mgr load Constructed class from module: devicehealth
Oct  1 09:08:21 np0005464214 ceph-mgr[75103]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  1 09:08:21 np0005464214 ceph-mgr[75103]: mgr load Constructed class from module: iostat
Oct  1 09:08:21 np0005464214 ceph-mgr[75103]: [devicehealth INFO root] Starting
Oct  1 09:08:21 np0005464214 ceph-mgr[75103]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  1 09:08:21 np0005464214 ceph-mgr[75103]: mgr load Constructed class from module: nfs
Oct  1 09:08:21 np0005464214 ceph-mgr[75103]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  1 09:08:21 np0005464214 ceph-mgr[75103]: mgr load Constructed class from module: orchestrator
Oct  1 09:08:21 np0005464214 ceph-mgr[75103]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  1 09:08:21 np0005464214 ceph-mgr[75103]: mgr load Constructed class from module: pg_autoscaler
Oct  1 09:08:21 np0005464214 ceph-mgr[75103]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  1 09:08:21 np0005464214 ceph-mgr[75103]: mgr load Constructed class from module: progress
Oct  1 09:08:21 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] _maybe_adjust
Oct  1 09:08:21 np0005464214 ceph-mgr[75103]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  1 09:08:21 np0005464214 ceph-mgr[75103]: [progress INFO root] Loading...
Oct  1 09:08:21 np0005464214 ceph-mgr[75103]: [progress INFO root] No stored events to load
Oct  1 09:08:21 np0005464214 ceph-mgr[75103]: [progress INFO root] Loaded [] historic events
Oct  1 09:08:21 np0005464214 ceph-mgr[75103]: [progress INFO root] Loaded OSDMap, ready.
Oct  1 09:08:21 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] recovery thread starting
Oct  1 09:08:21 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] starting setup
Oct  1 09:08:21 np0005464214 ceph-mgr[75103]: mgr load Constructed class from module: rbd_support
Oct  1 09:08:21 np0005464214 ceph-mgr[75103]: [restful DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  1 09:08:21 np0005464214 ceph-mgr[75103]: mgr load Constructed class from module: restful
Oct  1 09:08:21 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.puxjpb/mirror_snapshot_schedule"} v 0) v1
Oct  1 09:08:21 np0005464214 ceph-mgr[75103]: [restful INFO root] server_addr: :: server_port: 8003
Oct  1 09:08:21 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/746737625' entity='mgr.compute-0.puxjpb' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.puxjpb/mirror_snapshot_schedule"}]: dispatch
Oct  1 09:08:21 np0005464214 ceph-mgr[75103]: [restful WARNING root] server not running: no certificate configured
Oct  1 09:08:21 np0005464214 ceph-mgr[75103]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  1 09:08:21 np0005464214 ceph-mgr[75103]: mgr load Constructed class from module: status
Oct  1 09:08:21 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  1 09:08:21 np0005464214 ceph-mgr[75103]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  1 09:08:21 np0005464214 ceph-mgr[75103]: mgr load Constructed class from module: telemetry
Oct  1 09:08:21 np0005464214 ceph-mgr[75103]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  1 09:08:21 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Oct  1 09:08:21 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/report_id}] v 0) v1
Oct  1 09:08:21 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] PerfHandler: starting
Oct  1 09:08:21 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] TaskHandler: starting
Oct  1 09:08:21 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.puxjpb/trash_purge_schedule"} v 0) v1
Oct  1 09:08:21 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/746737625' entity='mgr.compute-0.puxjpb' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.puxjpb/trash_purge_schedule"}]: dispatch
Oct  1 09:08:21 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/746737625' entity='mgr.compute-0.puxjpb' 
Oct  1 09:08:21 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  1 09:08:21 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Oct  1 09:08:21 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] setup complete
Oct  1 09:08:21 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/salt}] v 0) v1
Oct  1 09:08:21 np0005464214 ceph-mon[74802]: Activating manager daemon compute-0.puxjpb
Oct  1 09:08:21 np0005464214 ceph-mon[74802]: Manager daemon compute-0.puxjpb is now available
Oct  1 09:08:21 np0005464214 ceph-mon[74802]: from='mgr.14102 192.168.122.100:0/746737625' entity='mgr.compute-0.puxjpb' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.puxjpb/mirror_snapshot_schedule"}]: dispatch
Oct  1 09:08:21 np0005464214 ceph-mon[74802]: from='mgr.14102 192.168.122.100:0/746737625' entity='mgr.compute-0.puxjpb' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.puxjpb/trash_purge_schedule"}]: dispatch
Oct  1 09:08:21 np0005464214 ceph-mgr[75103]: mgr load Constructed class from module: volumes
Oct  1 09:08:21 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/746737625' entity='mgr.compute-0.puxjpb' 
Oct  1 09:08:21 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/collection}] v 0) v1
Oct  1 09:08:21 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/746737625' entity='mgr.compute-0.puxjpb' 
Oct  1 09:08:21 np0005464214 podman[75603]: 2025-10-01 13:08:21.889662411 +0000 UTC m=+0.045449223 container create 4131b627a350cd14528e07a5aa8e06e1f0d6fd3cadfd6b17262e850ade2f13cd (image=quay.io/ceph/ceph:v18, name=confident_babbage, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 09:08:21 np0005464214 systemd[1]: Started libpod-conmon-4131b627a350cd14528e07a5aa8e06e1f0d6fd3cadfd6b17262e850ade2f13cd.scope.
Oct  1 09:08:21 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:08:21 np0005464214 podman[75603]: 2025-10-01 13:08:21.866407903 +0000 UTC m=+0.022194705 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  1 09:08:21 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b9b78f061f9bb7ddfe6d9550a52555e6d861ae611584dae5574834cb79851b72/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 09:08:21 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b9b78f061f9bb7ddfe6d9550a52555e6d861ae611584dae5574834cb79851b72/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct  1 09:08:21 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b9b78f061f9bb7ddfe6d9550a52555e6d861ae611584dae5574834cb79851b72/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 09:08:21 np0005464214 podman[75603]: 2025-10-01 13:08:21.984872662 +0000 UTC m=+0.140659514 container init 4131b627a350cd14528e07a5aa8e06e1f0d6fd3cadfd6b17262e850ade2f13cd (image=quay.io/ceph/ceph:v18, name=confident_babbage, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Oct  1 09:08:21 np0005464214 podman[75603]: 2025-10-01 13:08:21.991184812 +0000 UTC m=+0.146971584 container start 4131b627a350cd14528e07a5aa8e06e1f0d6fd3cadfd6b17262e850ade2f13cd (image=quay.io/ceph/ceph:v18, name=confident_babbage, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 09:08:21 np0005464214 podman[75603]: 2025-10-01 13:08:21.994630751 +0000 UTC m=+0.150417553 container attach 4131b627a350cd14528e07a5aa8e06e1f0d6fd3cadfd6b17262e850ade2f13cd (image=quay.io/ceph/ceph:v18, name=confident_babbage, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct  1 09:08:22 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Oct  1 09:08:22 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4246215679' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Oct  1 09:08:22 np0005464214 confident_babbage[75618]: 
Oct  1 09:08:22 np0005464214 confident_babbage[75618]: {
Oct  1 09:08:22 np0005464214 confident_babbage[75618]:    "fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 09:08:22 np0005464214 confident_babbage[75618]:    "health": {
Oct  1 09:08:22 np0005464214 confident_babbage[75618]:        "status": "HEALTH_OK",
Oct  1 09:08:22 np0005464214 confident_babbage[75618]:        "checks": {},
Oct  1 09:08:22 np0005464214 confident_babbage[75618]:        "mutes": []
Oct  1 09:08:22 np0005464214 confident_babbage[75618]:    },
Oct  1 09:08:22 np0005464214 confident_babbage[75618]:    "election_epoch": 5,
Oct  1 09:08:22 np0005464214 confident_babbage[75618]:    "quorum": [
Oct  1 09:08:22 np0005464214 confident_babbage[75618]:        0
Oct  1 09:08:22 np0005464214 confident_babbage[75618]:    ],
Oct  1 09:08:22 np0005464214 confident_babbage[75618]:    "quorum_names": [
Oct  1 09:08:22 np0005464214 confident_babbage[75618]:        "compute-0"
Oct  1 09:08:22 np0005464214 confident_babbage[75618]:    ],
Oct  1 09:08:22 np0005464214 confident_babbage[75618]:    "quorum_age": 22,
Oct  1 09:08:22 np0005464214 confident_babbage[75618]:    "monmap": {
Oct  1 09:08:22 np0005464214 confident_babbage[75618]:        "epoch": 1,
Oct  1 09:08:22 np0005464214 confident_babbage[75618]:        "min_mon_release_name": "reef",
Oct  1 09:08:22 np0005464214 confident_babbage[75618]:        "num_mons": 1
Oct  1 09:08:22 np0005464214 confident_babbage[75618]:    },
Oct  1 09:08:22 np0005464214 confident_babbage[75618]:    "osdmap": {
Oct  1 09:08:22 np0005464214 confident_babbage[75618]:        "epoch": 1,
Oct  1 09:08:22 np0005464214 confident_babbage[75618]:        "num_osds": 0,
Oct  1 09:08:22 np0005464214 confident_babbage[75618]:        "num_up_osds": 0,
Oct  1 09:08:22 np0005464214 confident_babbage[75618]:        "osd_up_since": 0,
Oct  1 09:08:22 np0005464214 confident_babbage[75618]:        "num_in_osds": 0,
Oct  1 09:08:22 np0005464214 confident_babbage[75618]:        "osd_in_since": 0,
Oct  1 09:08:22 np0005464214 confident_babbage[75618]:        "num_remapped_pgs": 0
Oct  1 09:08:22 np0005464214 confident_babbage[75618]:    },
Oct  1 09:08:22 np0005464214 confident_babbage[75618]:    "pgmap": {
Oct  1 09:08:22 np0005464214 confident_babbage[75618]:        "pgs_by_state": [],
Oct  1 09:08:22 np0005464214 confident_babbage[75618]:        "num_pgs": 0,
Oct  1 09:08:22 np0005464214 confident_babbage[75618]:        "num_pools": 0,
Oct  1 09:08:22 np0005464214 confident_babbage[75618]:        "num_objects": 0,
Oct  1 09:08:22 np0005464214 confident_babbage[75618]:        "data_bytes": 0,
Oct  1 09:08:22 np0005464214 confident_babbage[75618]:        "bytes_used": 0,
Oct  1 09:08:22 np0005464214 confident_babbage[75618]:        "bytes_avail": 0,
Oct  1 09:08:22 np0005464214 confident_babbage[75618]:        "bytes_total": 0
Oct  1 09:08:22 np0005464214 confident_babbage[75618]:    },
Oct  1 09:08:22 np0005464214 confident_babbage[75618]:    "fsmap": {
Oct  1 09:08:22 np0005464214 confident_babbage[75618]:        "epoch": 1,
Oct  1 09:08:22 np0005464214 confident_babbage[75618]:        "by_rank": [],
Oct  1 09:08:22 np0005464214 confident_babbage[75618]:        "up:standby": 0
Oct  1 09:08:22 np0005464214 confident_babbage[75618]:    },
Oct  1 09:08:22 np0005464214 confident_babbage[75618]:    "mgrmap": {
Oct  1 09:08:22 np0005464214 confident_babbage[75618]:        "available": false,
Oct  1 09:08:22 np0005464214 confident_babbage[75618]:        "num_standbys": 0,
Oct  1 09:08:22 np0005464214 confident_babbage[75618]:        "modules": [
Oct  1 09:08:22 np0005464214 confident_babbage[75618]:            "iostat",
Oct  1 09:08:22 np0005464214 confident_babbage[75618]:            "nfs",
Oct  1 09:08:22 np0005464214 confident_babbage[75618]:            "restful"
Oct  1 09:08:22 np0005464214 confident_babbage[75618]:        ],
Oct  1 09:08:22 np0005464214 confident_babbage[75618]:        "services": {}
Oct  1 09:08:22 np0005464214 confident_babbage[75618]:    },
Oct  1 09:08:22 np0005464214 confident_babbage[75618]:    "servicemap": {
Oct  1 09:08:22 np0005464214 confident_babbage[75618]:        "epoch": 1,
Oct  1 09:08:22 np0005464214 confident_babbage[75618]:        "modified": "2025-10-01T13:07:57.318832+0000",
Oct  1 09:08:22 np0005464214 confident_babbage[75618]:        "services": {}
Oct  1 09:08:22 np0005464214 confident_babbage[75618]:    },
Oct  1 09:08:22 np0005464214 confident_babbage[75618]:    "progress_events": {}
Oct  1 09:08:22 np0005464214 confident_babbage[75618]: }
Oct  1 09:08:22 np0005464214 systemd[1]: libpod-4131b627a350cd14528e07a5aa8e06e1f0d6fd3cadfd6b17262e850ade2f13cd.scope: Deactivated successfully.
Oct  1 09:08:22 np0005464214 podman[75603]: 2025-10-01 13:08:22.374724752 +0000 UTC m=+0.530511524 container died 4131b627a350cd14528e07a5aa8e06e1f0d6fd3cadfd6b17262e850ade2f13cd (image=quay.io/ceph/ceph:v18, name=confident_babbage, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 09:08:22 np0005464214 systemd[1]: var-lib-containers-storage-overlay-b9b78f061f9bb7ddfe6d9550a52555e6d861ae611584dae5574834cb79851b72-merged.mount: Deactivated successfully.
Oct  1 09:08:22 np0005464214 podman[75603]: 2025-10-01 13:08:22.412083606 +0000 UTC m=+0.567870378 container remove 4131b627a350cd14528e07a5aa8e06e1f0d6fd3cadfd6b17262e850ade2f13cd (image=quay.io/ceph/ceph:v18, name=confident_babbage, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct  1 09:08:22 np0005464214 systemd[1]: libpod-conmon-4131b627a350cd14528e07a5aa8e06e1f0d6fd3cadfd6b17262e850ade2f13cd.scope: Deactivated successfully.
Oct  1 09:08:22 np0005464214 ceph-mon[74802]: log_channel(cluster) log [DBG] : mgrmap e3: compute-0.puxjpb(active, since 1.05556s)
Oct  1 09:08:22 np0005464214 ceph-mon[74802]: from='mgr.14102 192.168.122.100:0/746737625' entity='mgr.compute-0.puxjpb' 
Oct  1 09:08:22 np0005464214 ceph-mon[74802]: from='mgr.14102 192.168.122.100:0/746737625' entity='mgr.compute-0.puxjpb' 
Oct  1 09:08:22 np0005464214 ceph-mon[74802]: from='mgr.14102 192.168.122.100:0/746737625' entity='mgr.compute-0.puxjpb' 
Oct  1 09:08:23 np0005464214 ceph-mgr[75103]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Oct  1 09:08:23 np0005464214 ceph-mon[74802]: log_channel(cluster) log [DBG] : mgrmap e4: compute-0.puxjpb(active, since 2s)
Oct  1 09:08:24 np0005464214 podman[75656]: 2025-10-01 13:08:24.492023429 +0000 UTC m=+0.048542140 container create 2c6f9aad0692023906f433938745294e8da117dae6c74933d06787e1266f8ede (image=quay.io/ceph/ceph:v18, name=sleepy_torvalds, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 09:08:24 np0005464214 systemd[1]: Started libpod-conmon-2c6f9aad0692023906f433938745294e8da117dae6c74933d06787e1266f8ede.scope.
Oct  1 09:08:24 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:08:24 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d7749d16991bd5d18d68fe4e8b6ba518122152fbf86fdcc6cac12181be749be6/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct  1 09:08:24 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d7749d16991bd5d18d68fe4e8b6ba518122152fbf86fdcc6cac12181be749be6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 09:08:24 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d7749d16991bd5d18d68fe4e8b6ba518122152fbf86fdcc6cac12181be749be6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 09:08:24 np0005464214 podman[75656]: 2025-10-01 13:08:24.555407001 +0000 UTC m=+0.111925702 container init 2c6f9aad0692023906f433938745294e8da117dae6c74933d06787e1266f8ede (image=quay.io/ceph/ceph:v18, name=sleepy_torvalds, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Oct  1 09:08:24 np0005464214 podman[75656]: 2025-10-01 13:08:24.561112722 +0000 UTC m=+0.117631463 container start 2c6f9aad0692023906f433938745294e8da117dae6c74933d06787e1266f8ede (image=quay.io/ceph/ceph:v18, name=sleepy_torvalds, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 09:08:24 np0005464214 podman[75656]: 2025-10-01 13:08:24.565206561 +0000 UTC m=+0.121725272 container attach 2c6f9aad0692023906f433938745294e8da117dae6c74933d06787e1266f8ede (image=quay.io/ceph/ceph:v18, name=sleepy_torvalds, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Oct  1 09:08:24 np0005464214 podman[75656]: 2025-10-01 13:08:24.475947859 +0000 UTC m=+0.032466590 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  1 09:08:25 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Oct  1 09:08:25 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/325604898' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Oct  1 09:08:25 np0005464214 sleepy_torvalds[75673]: 
Oct  1 09:08:25 np0005464214 sleepy_torvalds[75673]: {
Oct  1 09:08:25 np0005464214 sleepy_torvalds[75673]:    "fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 09:08:25 np0005464214 sleepy_torvalds[75673]:    "health": {
Oct  1 09:08:25 np0005464214 sleepy_torvalds[75673]:        "status": "HEALTH_OK",
Oct  1 09:08:25 np0005464214 sleepy_torvalds[75673]:        "checks": {},
Oct  1 09:08:25 np0005464214 sleepy_torvalds[75673]:        "mutes": []
Oct  1 09:08:25 np0005464214 sleepy_torvalds[75673]:    },
Oct  1 09:08:25 np0005464214 sleepy_torvalds[75673]:    "election_epoch": 5,
Oct  1 09:08:25 np0005464214 sleepy_torvalds[75673]:    "quorum": [
Oct  1 09:08:25 np0005464214 sleepy_torvalds[75673]:        0
Oct  1 09:08:25 np0005464214 sleepy_torvalds[75673]:    ],
Oct  1 09:08:25 np0005464214 sleepy_torvalds[75673]:    "quorum_names": [
Oct  1 09:08:25 np0005464214 sleepy_torvalds[75673]:        "compute-0"
Oct  1 09:08:25 np0005464214 sleepy_torvalds[75673]:    ],
Oct  1 09:08:25 np0005464214 sleepy_torvalds[75673]:    "quorum_age": 24,
Oct  1 09:08:25 np0005464214 sleepy_torvalds[75673]:    "monmap": {
Oct  1 09:08:25 np0005464214 sleepy_torvalds[75673]:        "epoch": 1,
Oct  1 09:08:25 np0005464214 sleepy_torvalds[75673]:        "min_mon_release_name": "reef",
Oct  1 09:08:25 np0005464214 sleepy_torvalds[75673]:        "num_mons": 1
Oct  1 09:08:25 np0005464214 sleepy_torvalds[75673]:    },
Oct  1 09:08:25 np0005464214 sleepy_torvalds[75673]:    "osdmap": {
Oct  1 09:08:25 np0005464214 sleepy_torvalds[75673]:        "epoch": 1,
Oct  1 09:08:25 np0005464214 sleepy_torvalds[75673]:        "num_osds": 0,
Oct  1 09:08:25 np0005464214 sleepy_torvalds[75673]:        "num_up_osds": 0,
Oct  1 09:08:25 np0005464214 sleepy_torvalds[75673]:        "osd_up_since": 0,
Oct  1 09:08:25 np0005464214 sleepy_torvalds[75673]:        "num_in_osds": 0,
Oct  1 09:08:25 np0005464214 sleepy_torvalds[75673]:        "osd_in_since": 0,
Oct  1 09:08:25 np0005464214 sleepy_torvalds[75673]:        "num_remapped_pgs": 0
Oct  1 09:08:25 np0005464214 sleepy_torvalds[75673]:    },
Oct  1 09:08:25 np0005464214 sleepy_torvalds[75673]:    "pgmap": {
Oct  1 09:08:25 np0005464214 sleepy_torvalds[75673]:        "pgs_by_state": [],
Oct  1 09:08:25 np0005464214 sleepy_torvalds[75673]:        "num_pgs": 0,
Oct  1 09:08:25 np0005464214 sleepy_torvalds[75673]:        "num_pools": 0,
Oct  1 09:08:25 np0005464214 sleepy_torvalds[75673]:        "num_objects": 0,
Oct  1 09:08:25 np0005464214 sleepy_torvalds[75673]:        "data_bytes": 0,
Oct  1 09:08:25 np0005464214 sleepy_torvalds[75673]:        "bytes_used": 0,
Oct  1 09:08:25 np0005464214 sleepy_torvalds[75673]:        "bytes_avail": 0,
Oct  1 09:08:25 np0005464214 sleepy_torvalds[75673]:        "bytes_total": 0
Oct  1 09:08:25 np0005464214 sleepy_torvalds[75673]:    },
Oct  1 09:08:25 np0005464214 sleepy_torvalds[75673]:    "fsmap": {
Oct  1 09:08:25 np0005464214 sleepy_torvalds[75673]:        "epoch": 1,
Oct  1 09:08:25 np0005464214 sleepy_torvalds[75673]:        "by_rank": [],
Oct  1 09:08:25 np0005464214 sleepy_torvalds[75673]:        "up:standby": 0
Oct  1 09:08:25 np0005464214 sleepy_torvalds[75673]:    },
Oct  1 09:08:25 np0005464214 sleepy_torvalds[75673]:    "mgrmap": {
Oct  1 09:08:25 np0005464214 sleepy_torvalds[75673]:        "available": true,
Oct  1 09:08:25 np0005464214 sleepy_torvalds[75673]:        "num_standbys": 0,
Oct  1 09:08:25 np0005464214 sleepy_torvalds[75673]:        "modules": [
Oct  1 09:08:25 np0005464214 sleepy_torvalds[75673]:            "iostat",
Oct  1 09:08:25 np0005464214 sleepy_torvalds[75673]:            "nfs",
Oct  1 09:08:25 np0005464214 sleepy_torvalds[75673]:            "restful"
Oct  1 09:08:25 np0005464214 sleepy_torvalds[75673]:        ],
Oct  1 09:08:25 np0005464214 sleepy_torvalds[75673]:        "services": {}
Oct  1 09:08:25 np0005464214 sleepy_torvalds[75673]:    },
Oct  1 09:08:25 np0005464214 sleepy_torvalds[75673]:    "servicemap": {
Oct  1 09:08:25 np0005464214 sleepy_torvalds[75673]:        "epoch": 1,
Oct  1 09:08:25 np0005464214 sleepy_torvalds[75673]:        "modified": "2025-10-01T13:07:57.318832+0000",
Oct  1 09:08:25 np0005464214 sleepy_torvalds[75673]:        "services": {}
Oct  1 09:08:25 np0005464214 sleepy_torvalds[75673]:    },
Oct  1 09:08:25 np0005464214 sleepy_torvalds[75673]:    "progress_events": {}
Oct  1 09:08:25 np0005464214 sleepy_torvalds[75673]: }
Oct  1 09:08:25 np0005464214 systemd[1]: libpod-2c6f9aad0692023906f433938745294e8da117dae6c74933d06787e1266f8ede.scope: Deactivated successfully.
Oct  1 09:08:25 np0005464214 podman[75656]: 2025-10-01 13:08:25.144880223 +0000 UTC m=+0.701398964 container died 2c6f9aad0692023906f433938745294e8da117dae6c74933d06787e1266f8ede (image=quay.io/ceph/ceph:v18, name=sleepy_torvalds, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef)
Oct  1 09:08:25 np0005464214 systemd[1]: var-lib-containers-storage-overlay-d7749d16991bd5d18d68fe4e8b6ba518122152fbf86fdcc6cac12181be749be6-merged.mount: Deactivated successfully.
Oct  1 09:08:25 np0005464214 podman[75656]: 2025-10-01 13:08:25.186021089 +0000 UTC m=+0.742539790 container remove 2c6f9aad0692023906f433938745294e8da117dae6c74933d06787e1266f8ede (image=quay.io/ceph/ceph:v18, name=sleepy_torvalds, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 09:08:25 np0005464214 systemd[1]: libpod-conmon-2c6f9aad0692023906f433938745294e8da117dae6c74933d06787e1266f8ede.scope: Deactivated successfully.
Oct  1 09:08:25 np0005464214 podman[75712]: 2025-10-01 13:08:25.252843119 +0000 UTC m=+0.044434370 container create 5373d374b0310b6a23ea9c7dd5718b316ca3556d6e2baaefa0e0ce48700bfc97 (image=quay.io/ceph/ceph:v18, name=elated_wilson, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Oct  1 09:08:25 np0005464214 systemd[1]: Started libpod-conmon-5373d374b0310b6a23ea9c7dd5718b316ca3556d6e2baaefa0e0ce48700bfc97.scope.
Oct  1 09:08:25 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:08:25 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8020259db531343545acd9ff15970b37f7b0df206ccd19f1854348dc2554bf36/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct  1 09:08:25 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8020259db531343545acd9ff15970b37f7b0df206ccd19f1854348dc2554bf36/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 09:08:25 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8020259db531343545acd9ff15970b37f7b0df206ccd19f1854348dc2554bf36/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 09:08:25 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8020259db531343545acd9ff15970b37f7b0df206ccd19f1854348dc2554bf36/merged/var/lib/ceph/user.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 09:08:25 np0005464214 podman[75712]: 2025-10-01 13:08:25.330316797 +0000 UTC m=+0.121908068 container init 5373d374b0310b6a23ea9c7dd5718b316ca3556d6e2baaefa0e0ce48700bfc97 (image=quay.io/ceph/ceph:v18, name=elated_wilson, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 09:08:25 np0005464214 podman[75712]: 2025-10-01 13:08:25.237036587 +0000 UTC m=+0.028627858 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  1 09:08:25 np0005464214 podman[75712]: 2025-10-01 13:08:25.34019908 +0000 UTC m=+0.131790341 container start 5373d374b0310b6a23ea9c7dd5718b316ca3556d6e2baaefa0e0ce48700bfc97 (image=quay.io/ceph/ceph:v18, name=elated_wilson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Oct  1 09:08:25 np0005464214 podman[75712]: 2025-10-01 13:08:25.34397619 +0000 UTC m=+0.135567471 container attach 5373d374b0310b6a23ea9c7dd5718b316ca3556d6e2baaefa0e0ce48700bfc97 (image=quay.io/ceph/ceph:v18, name=elated_wilson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Oct  1 09:08:25 np0005464214 ceph-mgr[75103]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Oct  1 09:08:26 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config assimilate-conf"} v 0) v1
Oct  1 09:08:26 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3229926718' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Oct  1 09:08:26 np0005464214 systemd[1]: libpod-5373d374b0310b6a23ea9c7dd5718b316ca3556d6e2baaefa0e0ce48700bfc97.scope: Deactivated successfully.
Oct  1 09:08:26 np0005464214 podman[75756]: 2025-10-01 13:08:26.303829259 +0000 UTC m=+0.037123355 container died 5373d374b0310b6a23ea9c7dd5718b316ca3556d6e2baaefa0e0ce48700bfc97 (image=quay.io/ceph/ceph:v18, name=elated_wilson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct  1 09:08:26 np0005464214 systemd[1]: var-lib-containers-storage-overlay-8020259db531343545acd9ff15970b37f7b0df206ccd19f1854348dc2554bf36-merged.mount: Deactivated successfully.
Oct  1 09:08:26 np0005464214 podman[75756]: 2025-10-01 13:08:26.866237659 +0000 UTC m=+0.599531765 container remove 5373d374b0310b6a23ea9c7dd5718b316ca3556d6e2baaefa0e0ce48700bfc97 (image=quay.io/ceph/ceph:v18, name=elated_wilson, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 09:08:26 np0005464214 systemd[1]: libpod-conmon-5373d374b0310b6a23ea9c7dd5718b316ca3556d6e2baaefa0e0ce48700bfc97.scope: Deactivated successfully.
Oct  1 09:08:26 np0005464214 podman[75772]: 2025-10-01 13:08:26.960905606 +0000 UTC m=+0.062520250 container create 8a76b247cefabfd49724fdd6ce24ee8c2e8b44e1db9e93492ba9ad9980cec064 (image=quay.io/ceph/ceph:v18, name=trusting_pasteur, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Oct  1 09:08:27 np0005464214 systemd[1]: Started libpod-conmon-8a76b247cefabfd49724fdd6ce24ee8c2e8b44e1db9e93492ba9ad9980cec064.scope.
Oct  1 09:08:27 np0005464214 podman[75772]: 2025-10-01 13:08:26.92490874 +0000 UTC m=+0.026523414 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  1 09:08:27 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:08:27 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/15d10f4d31bec9b76fed882ceffa18a5b4bd2a167d0961e0733a0bcd7cbf8787/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 09:08:27 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/15d10f4d31bec9b76fed882ceffa18a5b4bd2a167d0961e0733a0bcd7cbf8787/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 09:08:27 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/15d10f4d31bec9b76fed882ceffa18a5b4bd2a167d0961e0733a0bcd7cbf8787/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct  1 09:08:27 np0005464214 podman[75772]: 2025-10-01 13:08:27.070358133 +0000 UTC m=+0.171972797 container init 8a76b247cefabfd49724fdd6ce24ee8c2e8b44e1db9e93492ba9ad9980cec064 (image=quay.io/ceph/ceph:v18, name=trusting_pasteur, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 09:08:27 np0005464214 podman[75772]: 2025-10-01 13:08:27.075848872 +0000 UTC m=+0.177463516 container start 8a76b247cefabfd49724fdd6ce24ee8c2e8b44e1db9e93492ba9ad9980cec064 (image=quay.io/ceph/ceph:v18, name=trusting_pasteur, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 09:08:27 np0005464214 podman[75772]: 2025-10-01 13:08:27.0882198 +0000 UTC m=+0.189834464 container attach 8a76b247cefabfd49724fdd6ce24ee8c2e8b44e1db9e93492ba9ad9980cec064 (image=quay.io/ceph/ceph:v18, name=trusting_pasteur, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 09:08:27 np0005464214 ceph-mon[74802]: from='client.? 192.168.122.100:0/3229926718' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Oct  1 09:08:27 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module enable", "module": "cephadm"} v 0) v1
Oct  1 09:08:27 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2735046685' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "cephadm"}]: dispatch
Oct  1 09:08:27 np0005464214 ceph-mgr[75103]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Oct  1 09:08:28 np0005464214 ceph-mon[74802]: from='client.? 192.168.122.100:0/2735046685' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "cephadm"}]: dispatch
Oct  1 09:08:28 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2735046685' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "cephadm"}]': finished
Oct  1 09:08:28 np0005464214 ceph-mon[74802]: log_channel(cluster) log [DBG] : mgrmap e5: compute-0.puxjpb(active, since 6s)
Oct  1 09:08:28 np0005464214 ceph-mgr[75103]: mgr handle_mgr_map respawning because set of enabled modules changed!
Oct  1 09:08:28 np0005464214 ceph-mgr[75103]: mgr respawn  e: '/usr/bin/ceph-mgr'
Oct  1 09:08:28 np0005464214 ceph-mgr[75103]: mgr respawn  0: '/usr/bin/ceph-mgr'
Oct  1 09:08:28 np0005464214 ceph-mgr[75103]: mgr respawn  1: '-n'
Oct  1 09:08:28 np0005464214 ceph-mgr[75103]: mgr respawn  2: 'mgr.compute-0.puxjpb'
Oct  1 09:08:28 np0005464214 ceph-mgr[75103]: mgr respawn  3: '-f'
Oct  1 09:08:28 np0005464214 ceph-mgr[75103]: mgr respawn  4: '--setuser'
Oct  1 09:08:28 np0005464214 ceph-mgr[75103]: mgr respawn  5: 'ceph'
Oct  1 09:08:28 np0005464214 ceph-mgr[75103]: mgr respawn  6: '--setgroup'
Oct  1 09:08:28 np0005464214 ceph-mgr[75103]: mgr respawn  7: 'ceph'
Oct  1 09:08:28 np0005464214 ceph-mgr[75103]: mgr respawn  8: '--default-log-to-file=false'
Oct  1 09:08:28 np0005464214 ceph-mgr[75103]: mgr respawn  9: '--default-log-to-journald=true'
Oct  1 09:08:28 np0005464214 ceph-mgr[75103]: mgr respawn  10: '--default-log-to-stderr=false'
Oct  1 09:08:28 np0005464214 ceph-mgr[75103]: mgr respawn respawning with exe /usr/bin/ceph-mgr
Oct  1 09:08:28 np0005464214 ceph-mgr[75103]: mgr respawn  exe_path /proc/self/exe
Oct  1 09:08:28 np0005464214 systemd[1]: libpod-8a76b247cefabfd49724fdd6ce24ee8c2e8b44e1db9e93492ba9ad9980cec064.scope: Deactivated successfully.
Oct  1 09:08:28 np0005464214 podman[75816]: 2025-10-01 13:08:28.510260143 +0000 UTC m=+0.026077075 container died 8a76b247cefabfd49724fdd6ce24ee8c2e8b44e1db9e93492ba9ad9980cec064 (image=quay.io/ceph/ceph:v18, name=trusting_pasteur, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct  1 09:08:28 np0005464214 ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-mgr-compute-0-puxjpb[75099]: ignoring --setuser ceph since I am not root
Oct  1 09:08:28 np0005464214 ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-mgr-compute-0-puxjpb[75099]: ignoring --setgroup ceph since I am not root
Oct  1 09:08:28 np0005464214 ceph-mgr[75103]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mgr, pid 2
Oct  1 09:08:28 np0005464214 ceph-mgr[75103]: pidfile_write: ignore empty --pid-file
Oct  1 09:08:28 np0005464214 systemd[1]: var-lib-containers-storage-overlay-15d10f4d31bec9b76fed882ceffa18a5b4bd2a167d0961e0733a0bcd7cbf8787-merged.mount: Deactivated successfully.
Oct  1 09:08:28 np0005464214 ceph-mgr[75103]: mgr[py] Loading python module 'alerts'
Oct  1 09:08:28 np0005464214 podman[75816]: 2025-10-01 13:08:28.684708717 +0000 UTC m=+0.200525559 container remove 8a76b247cefabfd49724fdd6ce24ee8c2e8b44e1db9e93492ba9ad9980cec064 (image=quay.io/ceph/ceph:v18, name=trusting_pasteur, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Oct  1 09:08:28 np0005464214 systemd[1]: libpod-conmon-8a76b247cefabfd49724fdd6ce24ee8c2e8b44e1db9e93492ba9ad9980cec064.scope: Deactivated successfully.
Oct  1 09:08:28 np0005464214 podman[75855]: 2025-10-01 13:08:28.772042414 +0000 UTC m=+0.056049938 container create fc7022937851d63d3c5d091f3b747943b898aafbe45b7385915a22267cc97e37 (image=quay.io/ceph/ceph:v18, name=ecstatic_gould, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef)
Oct  1 09:08:28 np0005464214 systemd[1]: Started libpod-conmon-fc7022937851d63d3c5d091f3b747943b898aafbe45b7385915a22267cc97e37.scope.
Oct  1 09:08:28 np0005464214 podman[75855]: 2025-10-01 13:08:28.737189218 +0000 UTC m=+0.021196762 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  1 09:08:28 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:08:28 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9a5c5139002c06e035b4a5ab61a62dad4ca242da96b44b6bec98de1fe2266ea8/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct  1 09:08:28 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9a5c5139002c06e035b4a5ab61a62dad4ca242da96b44b6bec98de1fe2266ea8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 09:08:28 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9a5c5139002c06e035b4a5ab61a62dad4ca242da96b44b6bec98de1fe2266ea8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 09:08:28 np0005464214 podman[75855]: 2025-10-01 13:08:28.870595888 +0000 UTC m=+0.154603442 container init fc7022937851d63d3c5d091f3b747943b898aafbe45b7385915a22267cc97e37 (image=quay.io/ceph/ceph:v18, name=ecstatic_gould, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 09:08:28 np0005464214 podman[75855]: 2025-10-01 13:08:28.877700727 +0000 UTC m=+0.161708291 container start fc7022937851d63d3c5d091f3b747943b898aafbe45b7385915a22267cc97e37 (image=quay.io/ceph/ceph:v18, name=ecstatic_gould, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 09:08:28 np0005464214 podman[75855]: 2025-10-01 13:08:28.900119262 +0000 UTC m=+0.184126796 container attach fc7022937851d63d3c5d091f3b747943b898aafbe45b7385915a22267cc97e37 (image=quay.io/ceph/ceph:v18, name=ecstatic_gould, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 09:08:28 np0005464214 ceph-mgr[75103]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Oct  1 09:08:28 np0005464214 ceph-mgr[75103]: mgr[py] Loading python module 'balancer'
Oct  1 09:08:28 np0005464214 ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-mgr-compute-0-puxjpb[75099]: 2025-10-01T13:08:28.962+0000 7f14179b4140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Oct  1 09:08:29 np0005464214 ceph-mgr[75103]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Oct  1 09:08:29 np0005464214 ceph-mgr[75103]: mgr[py] Loading python module 'cephadm'
Oct  1 09:08:29 np0005464214 ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-mgr-compute-0-puxjpb[75099]: 2025-10-01T13:08:29.217+0000 7f14179b4140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Oct  1 09:08:29 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr stat"} v 0) v1
Oct  1 09:08:29 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4169176868' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Oct  1 09:08:29 np0005464214 ecstatic_gould[75872]: {
Oct  1 09:08:29 np0005464214 ecstatic_gould[75872]:    "epoch": 5,
Oct  1 09:08:29 np0005464214 ecstatic_gould[75872]:    "available": true,
Oct  1 09:08:29 np0005464214 ecstatic_gould[75872]:    "active_name": "compute-0.puxjpb",
Oct  1 09:08:29 np0005464214 ecstatic_gould[75872]:    "num_standby": 0
Oct  1 09:08:29 np0005464214 ecstatic_gould[75872]: }
Oct  1 09:08:29 np0005464214 systemd[1]: libpod-fc7022937851d63d3c5d091f3b747943b898aafbe45b7385915a22267cc97e37.scope: Deactivated successfully.
Oct  1 09:08:29 np0005464214 podman[75855]: 2025-10-01 13:08:29.460792857 +0000 UTC m=+0.744800391 container died fc7022937851d63d3c5d091f3b747943b898aafbe45b7385915a22267cc97e37 (image=quay.io/ceph/ceph:v18, name=ecstatic_gould, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 09:08:29 np0005464214 ceph-mon[74802]: from='client.? 192.168.122.100:0/2735046685' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "cephadm"}]': finished
Oct  1 09:08:29 np0005464214 systemd[1]: var-lib-containers-storage-overlay-9a5c5139002c06e035b4a5ab61a62dad4ca242da96b44b6bec98de1fe2266ea8-merged.mount: Deactivated successfully.
Oct  1 09:08:29 np0005464214 podman[75855]: 2025-10-01 13:08:29.595865319 +0000 UTC m=+0.879872843 container remove fc7022937851d63d3c5d091f3b747943b898aafbe45b7385915a22267cc97e37 (image=quay.io/ceph/ceph:v18, name=ecstatic_gould, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Oct  1 09:08:29 np0005464214 systemd[1]: libpod-conmon-fc7022937851d63d3c5d091f3b747943b898aafbe45b7385915a22267cc97e37.scope: Deactivated successfully.
Oct  1 09:08:29 np0005464214 podman[75911]: 2025-10-01 13:08:29.735337243 +0000 UTC m=+0.110793148 container create 6f2c020f2ef37f9ffdf89881292e09e4dab2f09745d9e62bd154c66f1c759d1f (image=quay.io/ceph/ceph:v18, name=magical_bouman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 09:08:29 np0005464214 podman[75911]: 2025-10-01 13:08:29.659408212 +0000 UTC m=+0.034864217 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  1 09:08:29 np0005464214 systemd[1]: Started libpod-conmon-6f2c020f2ef37f9ffdf89881292e09e4dab2f09745d9e62bd154c66f1c759d1f.scope.
Oct  1 09:08:29 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:08:29 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/de8062cd5766aab92ae220aba4338148b2917823d7d6efcc07e4ee8161ceb205/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct  1 09:08:29 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/de8062cd5766aab92ae220aba4338148b2917823d7d6efcc07e4ee8161ceb205/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 09:08:29 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/de8062cd5766aab92ae220aba4338148b2917823d7d6efcc07e4ee8161ceb205/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 09:08:29 np0005464214 podman[75911]: 2025-10-01 13:08:29.835615863 +0000 UTC m=+0.211071798 container init 6f2c020f2ef37f9ffdf89881292e09e4dab2f09745d9e62bd154c66f1c759d1f (image=quay.io/ceph/ceph:v18, name=magical_bouman, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 09:08:29 np0005464214 podman[75911]: 2025-10-01 13:08:29.842947801 +0000 UTC m=+0.218403716 container start 6f2c020f2ef37f9ffdf89881292e09e4dab2f09745d9e62bd154c66f1c759d1f (image=quay.io/ceph/ceph:v18, name=magical_bouman, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 09:08:29 np0005464214 podman[75911]: 2025-10-01 13:08:29.897395019 +0000 UTC m=+0.272850954 container attach 6f2c020f2ef37f9ffdf89881292e09e4dab2f09745d9e62bd154c66f1c759d1f (image=quay.io/ceph/ceph:v18, name=magical_bouman, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2)
Oct  1 09:08:31 np0005464214 ceph-mgr[75103]: mgr[py] Loading python module 'crash'
Oct  1 09:08:31 np0005464214 ceph-mgr[75103]: mgr[py] Module crash has missing NOTIFY_TYPES member
Oct  1 09:08:31 np0005464214 ceph-mgr[75103]: mgr[py] Loading python module 'dashboard'
Oct  1 09:08:31 np0005464214 ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-mgr-compute-0-puxjpb[75099]: 2025-10-01T13:08:31.423+0000 7f14179b4140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Oct  1 09:08:32 np0005464214 ceph-mgr[75103]: mgr[py] Loading python module 'devicehealth'
Oct  1 09:08:33 np0005464214 ceph-mgr[75103]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Oct  1 09:08:33 np0005464214 ceph-mgr[75103]: mgr[py] Loading python module 'diskprediction_local'
Oct  1 09:08:33 np0005464214 ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-mgr-compute-0-puxjpb[75099]: 2025-10-01T13:08:33.199+0000 7f14179b4140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Oct  1 09:08:33 np0005464214 ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-mgr-compute-0-puxjpb[75099]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Oct  1 09:08:33 np0005464214 ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-mgr-compute-0-puxjpb[75099]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Oct  1 09:08:33 np0005464214 ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-mgr-compute-0-puxjpb[75099]:  from numpy import show_config as show_numpy_config
Oct  1 09:08:33 np0005464214 ceph-mgr[75103]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Oct  1 09:08:33 np0005464214 ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-mgr-compute-0-puxjpb[75099]: 2025-10-01T13:08:33.709+0000 7f14179b4140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Oct  1 09:08:33 np0005464214 ceph-mgr[75103]: mgr[py] Loading python module 'influx'
Oct  1 09:08:33 np0005464214 ceph-mgr[75103]: mgr[py] Module influx has missing NOTIFY_TYPES member
Oct  1 09:08:33 np0005464214 ceph-mgr[75103]: mgr[py] Loading python module 'insights'
Oct  1 09:08:33 np0005464214 ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-mgr-compute-0-puxjpb[75099]: 2025-10-01T13:08:33.960+0000 7f14179b4140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Oct  1 09:08:34 np0005464214 ceph-mgr[75103]: mgr[py] Loading python module 'iostat'
Oct  1 09:08:34 np0005464214 ceph-mgr[75103]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Oct  1 09:08:34 np0005464214 ceph-mgr[75103]: mgr[py] Loading python module 'k8sevents'
Oct  1 09:08:34 np0005464214 ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-mgr-compute-0-puxjpb[75099]: 2025-10-01T13:08:34.425+0000 7f14179b4140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Oct  1 09:08:36 np0005464214 ceph-mgr[75103]: mgr[py] Loading python module 'localpool'
Oct  1 09:08:36 np0005464214 ceph-mgr[75103]: mgr[py] Loading python module 'mds_autoscaler'
Oct  1 09:08:37 np0005464214 ceph-mgr[75103]: mgr[py] Loading python module 'mirroring'
Oct  1 09:08:37 np0005464214 ceph-mgr[75103]: mgr[py] Loading python module 'nfs'
Oct  1 09:08:37 np0005464214 ceph-mgr[75103]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Oct  1 09:08:37 np0005464214 ceph-mgr[75103]: mgr[py] Loading python module 'orchestrator'
Oct  1 09:08:37 np0005464214 ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-mgr-compute-0-puxjpb[75099]: 2025-10-01T13:08:37.905+0000 7f14179b4140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Oct  1 09:08:38 np0005464214 ceph-mgr[75103]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Oct  1 09:08:38 np0005464214 ceph-mgr[75103]: mgr[py] Loading python module 'osd_perf_query'
Oct  1 09:08:38 np0005464214 ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-mgr-compute-0-puxjpb[75099]: 2025-10-01T13:08:38.554+0000 7f14179b4140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Oct  1 09:08:38 np0005464214 ceph-mgr[75103]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Oct  1 09:08:38 np0005464214 ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-mgr-compute-0-puxjpb[75099]: 2025-10-01T13:08:38.824+0000 7f14179b4140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Oct  1 09:08:38 np0005464214 ceph-mgr[75103]: mgr[py] Loading python module 'osd_support'
Oct  1 09:08:39 np0005464214 ceph-mgr[75103]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Oct  1 09:08:39 np0005464214 ceph-mgr[75103]: mgr[py] Loading python module 'pg_autoscaler'
Oct  1 09:08:39 np0005464214 ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-mgr-compute-0-puxjpb[75099]: 2025-10-01T13:08:39.044+0000 7f14179b4140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Oct  1 09:08:39 np0005464214 ceph-mgr[75103]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Oct  1 09:08:39 np0005464214 ceph-mgr[75103]: mgr[py] Loading python module 'progress'
Oct  1 09:08:39 np0005464214 ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-mgr-compute-0-puxjpb[75099]: 2025-10-01T13:08:39.294+0000 7f14179b4140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Oct  1 09:08:39 np0005464214 ceph-mgr[75103]: mgr[py] Module progress has missing NOTIFY_TYPES member
Oct  1 09:08:39 np0005464214 ceph-mgr[75103]: mgr[py] Loading python module 'prometheus'
Oct  1 09:08:39 np0005464214 ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-mgr-compute-0-puxjpb[75099]: 2025-10-01T13:08:39.520+0000 7f14179b4140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Oct  1 09:08:40 np0005464214 ceph-mgr[75103]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Oct  1 09:08:40 np0005464214 ceph-mgr[75103]: mgr[py] Loading python module 'rbd_support'
Oct  1 09:08:40 np0005464214 ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-mgr-compute-0-puxjpb[75099]: 2025-10-01T13:08:40.507+0000 7f14179b4140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Oct  1 09:08:40 np0005464214 ceph-mgr[75103]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Oct  1 09:08:40 np0005464214 ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-mgr-compute-0-puxjpb[75099]: 2025-10-01T13:08:40.799+0000 7f14179b4140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Oct  1 09:08:40 np0005464214 ceph-mgr[75103]: mgr[py] Loading python module 'restful'
Oct  1 09:08:41 np0005464214 ceph-mgr[75103]: mgr[py] Loading python module 'rgw'
Oct  1 09:08:42 np0005464214 ceph-mgr[75103]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Oct  1 09:08:42 np0005464214 ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-mgr-compute-0-puxjpb[75099]: 2025-10-01T13:08:42.213+0000 7f14179b4140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Oct  1 09:08:42 np0005464214 ceph-mgr[75103]: mgr[py] Loading python module 'rook'
Oct  1 09:08:44 np0005464214 ceph-mgr[75103]: mgr[py] Module rook has missing NOTIFY_TYPES member
Oct  1 09:08:44 np0005464214 ceph-mgr[75103]: mgr[py] Loading python module 'selftest'
Oct  1 09:08:44 np0005464214 ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-mgr-compute-0-puxjpb[75099]: 2025-10-01T13:08:44.223+0000 7f14179b4140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Oct  1 09:08:44 np0005464214 ceph-mgr[75103]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Oct  1 09:08:44 np0005464214 ceph-mgr[75103]: mgr[py] Loading python module 'snap_schedule'
Oct  1 09:08:44 np0005464214 ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-mgr-compute-0-puxjpb[75099]: 2025-10-01T13:08:44.461+0000 7f14179b4140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Oct  1 09:08:44 np0005464214 ceph-mgr[75103]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Oct  1 09:08:44 np0005464214 ceph-mgr[75103]: mgr[py] Loading python module 'stats'
Oct  1 09:08:44 np0005464214 ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-mgr-compute-0-puxjpb[75099]: 2025-10-01T13:08:44.720+0000 7f14179b4140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Oct  1 09:08:44 np0005464214 ceph-mgr[75103]: mgr[py] Loading python module 'status'
Oct  1 09:08:45 np0005464214 ceph-mgr[75103]: mgr[py] Module status has missing NOTIFY_TYPES member
Oct  1 09:08:45 np0005464214 ceph-mgr[75103]: mgr[py] Loading python module 'telegraf'
Oct  1 09:08:45 np0005464214 ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-mgr-compute-0-puxjpb[75099]: 2025-10-01T13:08:45.218+0000 7f14179b4140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Oct  1 09:08:45 np0005464214 ceph-mgr[75103]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Oct  1 09:08:45 np0005464214 ceph-mgr[75103]: mgr[py] Loading python module 'telemetry'
Oct  1 09:08:45 np0005464214 ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-mgr-compute-0-puxjpb[75099]: 2025-10-01T13:08:45.453+0000 7f14179b4140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Oct  1 09:08:46 np0005464214 ceph-mgr[75103]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Oct  1 09:08:46 np0005464214 ceph-mgr[75103]: mgr[py] Loading python module 'test_orchestrator'
Oct  1 09:08:46 np0005464214 ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-mgr-compute-0-puxjpb[75099]: 2025-10-01T13:08:46.082+0000 7f14179b4140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Oct  1 09:08:46 np0005464214 ceph-mgr[75103]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Oct  1 09:08:46 np0005464214 ceph-mgr[75103]: mgr[py] Loading python module 'volumes'
Oct  1 09:08:46 np0005464214 ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-mgr-compute-0-puxjpb[75099]: 2025-10-01T13:08:46.747+0000 7f14179b4140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Oct  1 09:08:47 np0005464214 ceph-mgr[75103]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Oct  1 09:08:47 np0005464214 ceph-mgr[75103]: mgr[py] Loading python module 'zabbix'
Oct  1 09:08:47 np0005464214 ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-mgr-compute-0-puxjpb[75099]: 2025-10-01T13:08:47.448+0000 7f14179b4140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Oct  1 09:08:47 np0005464214 ceph-mgr[75103]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Oct  1 09:08:47 np0005464214 ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-mgr-compute-0-puxjpb[75099]: 2025-10-01T13:08:47.683+0000 7f14179b4140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Oct  1 09:08:47 np0005464214 ceph-mon[74802]: log_channel(cluster) log [INF] : Active manager daemon compute-0.puxjpb restarted
Oct  1 09:08:47 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e1 do_prune osdmap full prune enabled
Oct  1 09:08:47 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e1 encode_pending skipping prime_pg_temp; mapping job did not start
Oct  1 09:08:47 np0005464214 ceph-mon[74802]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.puxjpb
Oct  1 09:08:47 np0005464214 ceph-mgr[75103]: ms_deliver_dispatch: unhandled message 0x557d512dd1e0 mon_map magic: 0 v1 from mon.0 v2:192.168.122.100:3300/0
Oct  1 09:08:47 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e1 _set_cache_ratios kv ratio 0.25 inc ratio 0.375 full ratio 0.375
Oct  1 09:08:47 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e1 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1
Oct  1 09:08:47 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e2 e2: 0 total, 0 up, 0 in
Oct  1 09:08:47 np0005464214 ceph-mgr[75103]: mgr handle_mgr_map Activating!
Oct  1 09:08:47 np0005464214 ceph-mgr[75103]: mgr handle_mgr_map I am now activating
Oct  1 09:08:47 np0005464214 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e2: 0 total, 0 up, 0 in
Oct  1 09:08:47 np0005464214 ceph-mon[74802]: log_channel(cluster) log [DBG] : mgrmap e6: compute-0.puxjpb(active, starting, since 0.0140545s)
Oct  1 09:08:47 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0) v1
Oct  1 09:08:47 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Oct  1 09:08:47 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.puxjpb", "id": "compute-0.puxjpb"} v 0) v1
Oct  1 09:08:47 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "mgr metadata", "who": "compute-0.puxjpb", "id": "compute-0.puxjpb"}]: dispatch
Oct  1 09:08:47 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds metadata"} v 0) v1
Oct  1 09:08:47 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "mds metadata"}]: dispatch
Oct  1 09:08:47 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).mds e1 all = 1
Oct  1 09:08:47 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata"} v 0) v1
Oct  1 09:08:47 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd metadata"}]: dispatch
Oct  1 09:08:47 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata"} v 0) v1
Oct  1 09:08:47 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "mon metadata"}]: dispatch
Oct  1 09:08:47 np0005464214 ceph-mgr[75103]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  1 09:08:47 np0005464214 ceph-mgr[75103]: mgr load Constructed class from module: balancer
Oct  1 09:08:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] Starting
Oct  1 09:08:47 np0005464214 ceph-mon[74802]: log_channel(cluster) log [INF] : Manager daemon compute-0.puxjpb is now available
Oct  1 09:08:47 np0005464214 ceph-mgr[75103]: [cephadm DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  1 09:08:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] Optimize plan auto_2025-10-01_13:08:47
Oct  1 09:08:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  1 09:08:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] do_upmap
Oct  1 09:08:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] No pools available
Oct  1 09:08:47 np0005464214 ceph-mgr[75103]: [cephadm INFO cephadm.migrations] Found migration_current of "None". Setting to last migration.
Oct  1 09:08:47 np0005464214 ceph-mgr[75103]: log_channel(cephadm) log [INF] : Found migration_current of "None". Setting to last migration.
Oct  1 09:08:47 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/migration_current}] v 0) v1
Oct  1 09:08:47 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:08:47 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/config_checks}] v 0) v1
Oct  1 09:08:47 np0005464214 ceph-mon[74802]: Active manager daemon compute-0.puxjpb restarted
Oct  1 09:08:47 np0005464214 ceph-mon[74802]: Activating manager daemon compute-0.puxjpb
Oct  1 09:08:47 np0005464214 ceph-mon[74802]: Manager daemon compute-0.puxjpb is now available
Oct  1 09:08:47 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:08:47 np0005464214 ceph-mgr[75103]: mgr load Constructed class from module: cephadm
Oct  1 09:08:47 np0005464214 ceph-mgr[75103]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  1 09:08:47 np0005464214 ceph-mgr[75103]: mgr load Constructed class from module: crash
Oct  1 09:08:47 np0005464214 ceph-mgr[75103]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  1 09:08:47 np0005464214 ceph-mgr[75103]: mgr load Constructed class from module: devicehealth
Oct  1 09:08:47 np0005464214 ceph-mgr[75103]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  1 09:08:47 np0005464214 ceph-mgr[75103]: mgr load Constructed class from module: iostat
Oct  1 09:08:47 np0005464214 ceph-mgr[75103]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  1 09:08:47 np0005464214 ceph-mgr[75103]: mgr load Constructed class from module: nfs
Oct  1 09:08:47 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Oct  1 09:08:47 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Oct  1 09:08:47 np0005464214 ceph-mgr[75103]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  1 09:08:47 np0005464214 ceph-mgr[75103]: mgr load Constructed class from module: orchestrator
Oct  1 09:08:47 np0005464214 ceph-mgr[75103]: [devicehealth INFO root] Starting
Oct  1 09:08:47 np0005464214 ceph-mgr[75103]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  1 09:08:47 np0005464214 ceph-mgr[75103]: mgr load Constructed class from module: pg_autoscaler
Oct  1 09:08:47 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] _maybe_adjust
Oct  1 09:08:47 np0005464214 ceph-mgr[75103]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  1 09:08:47 np0005464214 ceph-mgr[75103]: mgr load Constructed class from module: progress
Oct  1 09:08:47 np0005464214 ceph-mgr[75103]: [progress INFO root] Loading...
Oct  1 09:08:47 np0005464214 ceph-mgr[75103]: [progress INFO root] No stored events to load
Oct  1 09:08:47 np0005464214 ceph-mgr[75103]: [progress INFO root] Loaded [] historic events
Oct  1 09:08:47 np0005464214 ceph-mgr[75103]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  1 09:08:47 np0005464214 ceph-mgr[75103]: [progress INFO root] Loaded OSDMap, ready.
Oct  1 09:08:47 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Oct  1 09:08:47 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Oct  1 09:08:47 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] recovery thread starting
Oct  1 09:08:47 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] starting setup
Oct  1 09:08:47 np0005464214 ceph-mgr[75103]: mgr load Constructed class from module: rbd_support
Oct  1 09:08:47 np0005464214 ceph-mgr[75103]: [restful DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  1 09:08:47 np0005464214 ceph-mgr[75103]: mgr load Constructed class from module: restful
Oct  1 09:08:47 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.puxjpb/mirror_snapshot_schedule"} v 0) v1
Oct  1 09:08:47 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.puxjpb/mirror_snapshot_schedule"}]: dispatch
Oct  1 09:08:47 np0005464214 ceph-mgr[75103]: [restful INFO root] server_addr: :: server_port: 8003
Oct  1 09:08:47 np0005464214 ceph-mgr[75103]: [restful WARNING root] server not running: no certificate configured
Oct  1 09:08:47 np0005464214 ceph-mgr[75103]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  1 09:08:47 np0005464214 ceph-mgr[75103]: mgr load Constructed class from module: status
Oct  1 09:08:47 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  1 09:08:47 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Oct  1 09:08:47 np0005464214 ceph-mgr[75103]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  1 09:08:47 np0005464214 ceph-mgr[75103]: mgr load Constructed class from module: telemetry
Oct  1 09:08:47 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] PerfHandler: starting
Oct  1 09:08:47 np0005464214 ceph-mgr[75103]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  1 09:08:47 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] TaskHandler: starting
Oct  1 09:08:47 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.puxjpb/trash_purge_schedule"} v 0) v1
Oct  1 09:08:47 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.puxjpb/trash_purge_schedule"}]: dispatch
Oct  1 09:08:47 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  1 09:08:47 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Oct  1 09:08:47 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] setup complete
Oct  1 09:08:47 np0005464214 ceph-mgr[75103]: mgr load Constructed class from module: volumes
Oct  1 09:08:48 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/cephadm_agent/root/cert}] v 0) v1
Oct  1 09:08:48 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:08:48 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/cephadm_agent/root/key}] v 0) v1
Oct  1 09:08:48 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:08:48 np0005464214 ceph-mgr[75103]: log_channel(audit) log [DBG] : from='client.14136 -' entity='client.admin' cmd=[{"prefix": "get_command_descriptions"}]: dispatch
Oct  1 09:08:48 np0005464214 ceph-mon[74802]: log_channel(cluster) log [DBG] : mgrmap e7: compute-0.puxjpb(active, since 1.01964s)
Oct  1 09:08:48 np0005464214 ceph-mgr[75103]: log_channel(audit) log [DBG] : from='client.14136 -' entity='client.admin' cmd=[{"prefix": "mgr_status"}]: dispatch
Oct  1 09:08:48 np0005464214 magical_bouman[75927]: {
Oct  1 09:08:48 np0005464214 magical_bouman[75927]:    "mgrmap_epoch": 7,
Oct  1 09:08:48 np0005464214 magical_bouman[75927]:    "initialized": true
Oct  1 09:08:48 np0005464214 magical_bouman[75927]: }
Oct  1 09:08:48 np0005464214 systemd[1]: libpod-6f2c020f2ef37f9ffdf89881292e09e4dab2f09745d9e62bd154c66f1c759d1f.scope: Deactivated successfully.
Oct  1 09:08:48 np0005464214 podman[75911]: 2025-10-01 13:08:48.725980615 +0000 UTC m=+19.101436520 container died 6f2c020f2ef37f9ffdf89881292e09e4dab2f09745d9e62bd154c66f1c759d1f (image=quay.io/ceph/ceph:v18, name=magical_bouman, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 09:08:48 np0005464214 systemd[1]: var-lib-containers-storage-overlay-de8062cd5766aab92ae220aba4338148b2917823d7d6efcc07e4ee8161ceb205-merged.mount: Deactivated successfully.
Oct  1 09:08:48 np0005464214 ceph-mon[74802]: Found migration_current of "None". Setting to last migration.
Oct  1 09:08:48 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:08:48 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:08:48 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.puxjpb/mirror_snapshot_schedule"}]: dispatch
Oct  1 09:08:48 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.puxjpb/trash_purge_schedule"}]: dispatch
Oct  1 09:08:48 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:08:48 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:08:48 np0005464214 podman[75911]: 2025-10-01 13:08:48.781227306 +0000 UTC m=+19.156683211 container remove 6f2c020f2ef37f9ffdf89881292e09e4dab2f09745d9e62bd154c66f1c759d1f (image=quay.io/ceph/ceph:v18, name=magical_bouman, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 09:08:48 np0005464214 systemd[1]: libpod-conmon-6f2c020f2ef37f9ffdf89881292e09e4dab2f09745d9e62bd154c66f1c759d1f.scope: Deactivated successfully.
Oct  1 09:08:48 np0005464214 podman[76090]: 2025-10-01 13:08:48.851503572 +0000 UTC m=+0.047976126 container create 1202f19991b5b6ee1c3797a896db52eaa4e0901e7a7b7e50fd10e2608ee7c003 (image=quay.io/ceph/ceph:v18, name=dreamy_curie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Oct  1 09:08:48 np0005464214 systemd[1]: Started libpod-conmon-1202f19991b5b6ee1c3797a896db52eaa4e0901e7a7b7e50fd10e2608ee7c003.scope.
Oct  1 09:08:48 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:08:48 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/849d4908f2a099e71e6f87d0ba58708acb63e6a006cc52e7f2422d0a3501e8a6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 09:08:48 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/849d4908f2a099e71e6f87d0ba58708acb63e6a006cc52e7f2422d0a3501e8a6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 09:08:48 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/849d4908f2a099e71e6f87d0ba58708acb63e6a006cc52e7f2422d0a3501e8a6/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct  1 09:08:48 np0005464214 podman[76090]: 2025-10-01 13:08:48.828584896 +0000 UTC m=+0.025057470 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  1 09:08:48 np0005464214 podman[76090]: 2025-10-01 13:08:48.935591417 +0000 UTC m=+0.132063991 container init 1202f19991b5b6ee1c3797a896db52eaa4e0901e7a7b7e50fd10e2608ee7c003 (image=quay.io/ceph/ceph:v18, name=dreamy_curie, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct  1 09:08:48 np0005464214 podman[76090]: 2025-10-01 13:08:48.942149283 +0000 UTC m=+0.138621837 container start 1202f19991b5b6ee1c3797a896db52eaa4e0901e7a7b7e50fd10e2608ee7c003 (image=quay.io/ceph/ceph:v18, name=dreamy_curie, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 09:08:48 np0005464214 podman[76090]: 2025-10-01 13:08:48.94530685 +0000 UTC m=+0.141779424 container attach 1202f19991b5b6ee1c3797a896db52eaa4e0901e7a7b7e50fd10e2608ee7c003 (image=quay.io/ceph/ceph:v18, name=dreamy_curie, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct  1 09:08:49 np0005464214 ceph-mgr[75103]: log_channel(audit) log [DBG] : from='client.14144 -' entity='client.admin' cmd=[{"prefix": "orch set backend", "module_name": "cephadm", "target": ["mon-mgr", ""]}]: dispatch
Oct  1 09:08:49 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/orchestrator/orchestrator}] v 0) v1
Oct  1 09:08:49 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:08:49 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Oct  1 09:08:49 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Oct  1 09:08:49 np0005464214 systemd[1]: libpod-1202f19991b5b6ee1c3797a896db52eaa4e0901e7a7b7e50fd10e2608ee7c003.scope: Deactivated successfully.
Oct  1 09:08:49 np0005464214 podman[76090]: 2025-10-01 13:08:49.465296566 +0000 UTC m=+0.661769130 container died 1202f19991b5b6ee1c3797a896db52eaa4e0901e7a7b7e50fd10e2608ee7c003 (image=quay.io/ceph/ceph:v18, name=dreamy_curie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct  1 09:08:49 np0005464214 systemd[1]: var-lib-containers-storage-overlay-849d4908f2a099e71e6f87d0ba58708acb63e6a006cc52e7f2422d0a3501e8a6-merged.mount: Deactivated successfully.
Oct  1 09:08:49 np0005464214 podman[76090]: 2025-10-01 13:08:49.504237469 +0000 UTC m=+0.700710023 container remove 1202f19991b5b6ee1c3797a896db52eaa4e0901e7a7b7e50fd10e2608ee7c003 (image=quay.io/ceph/ceph:v18, name=dreamy_curie, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Oct  1 09:08:49 np0005464214 systemd[1]: libpod-conmon-1202f19991b5b6ee1c3797a896db52eaa4e0901e7a7b7e50fd10e2608ee7c003.scope: Deactivated successfully.
Oct  1 09:08:49 np0005464214 podman[76147]: 2025-10-01 13:08:49.556100254 +0000 UTC m=+0.035757636 container create f4d332e14b78db858a69ac533541dbbcbaecd036dd878e1e715ed36dc2018b98 (image=quay.io/ceph/ceph:v18, name=amazing_payne, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct  1 09:08:49 np0005464214 systemd[1]: Started libpod-conmon-f4d332e14b78db858a69ac533541dbbcbaecd036dd878e1e715ed36dc2018b98.scope.
Oct  1 09:08:49 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:08:49 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c281d33a8538a453d752ce44d6f13a766e1f50449b01b00840c05151ad95158/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 09:08:49 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c281d33a8538a453d752ce44d6f13a766e1f50449b01b00840c05151ad95158/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 09:08:49 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c281d33a8538a453d752ce44d6f13a766e1f50449b01b00840c05151ad95158/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct  1 09:08:49 np0005464214 podman[76147]: 2025-10-01 13:08:49.539480352 +0000 UTC m=+0.019137764 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  1 09:08:49 np0005464214 podman[76147]: 2025-10-01 13:08:49.638212864 +0000 UTC m=+0.117870296 container init f4d332e14b78db858a69ac533541dbbcbaecd036dd878e1e715ed36dc2018b98 (image=quay.io/ceph/ceph:v18, name=amazing_payne, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  1 09:08:49 np0005464214 podman[76147]: 2025-10-01 13:08:49.643754925 +0000 UTC m=+0.123412317 container start f4d332e14b78db858a69ac533541dbbcbaecd036dd878e1e715ed36dc2018b98 (image=quay.io/ceph/ceph:v18, name=amazing_payne, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct  1 09:08:49 np0005464214 podman[76147]: 2025-10-01 13:08:49.646611039 +0000 UTC m=+0.126268461 container attach f4d332e14b78db858a69ac533541dbbcbaecd036dd878e1e715ed36dc2018b98 (image=quay.io/ceph/ceph:v18, name=amazing_payne, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct  1 09:08:49 np0005464214 ceph-mgr[75103]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Oct  1 09:08:49 np0005464214 ceph-mgr[75103]: [cephadm INFO cherrypy.error] [01/Oct/2025:13:08:49] ENGINE Bus STARTING
Oct  1 09:08:49 np0005464214 ceph-mgr[75103]: log_channel(cephadm) log [INF] : [01/Oct/2025:13:08:49] ENGINE Bus STARTING
Oct  1 09:08:49 np0005464214 ceph-mgr[75103]: [cephadm INFO cherrypy.error] [01/Oct/2025:13:08:49] ENGINE Serving on https://192.168.122.100:7150
Oct  1 09:08:49 np0005464214 ceph-mgr[75103]: log_channel(cephadm) log [INF] : [01/Oct/2025:13:08:49] ENGINE Serving on https://192.168.122.100:7150
Oct  1 09:08:49 np0005464214 ceph-mgr[75103]: [cephadm INFO cherrypy.error] [01/Oct/2025:13:08:49] ENGINE Client ('192.168.122.100', 38674) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Oct  1 09:08:49 np0005464214 ceph-mgr[75103]: log_channel(cephadm) log [INF] : [01/Oct/2025:13:08:49] ENGINE Client ('192.168.122.100', 38674) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Oct  1 09:08:50 np0005464214 ceph-mgr[75103]: [cephadm INFO cherrypy.error] [01/Oct/2025:13:08:50] ENGINE Serving on http://192.168.122.100:8765
Oct  1 09:08:50 np0005464214 ceph-mgr[75103]: log_channel(cephadm) log [INF] : [01/Oct/2025:13:08:50] ENGINE Serving on http://192.168.122.100:8765
Oct  1 09:08:50 np0005464214 ceph-mgr[75103]: [cephadm INFO cherrypy.error] [01/Oct/2025:13:08:50] ENGINE Bus STARTED
Oct  1 09:08:50 np0005464214 ceph-mgr[75103]: log_channel(cephadm) log [INF] : [01/Oct/2025:13:08:50] ENGINE Bus STARTED
Oct  1 09:08:50 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Oct  1 09:08:50 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Oct  1 09:08:50 np0005464214 ceph-mgr[75103]: log_channel(audit) log [DBG] : from='client.14146 -' entity='client.admin' cmd=[{"prefix": "cephadm set-user", "user": "ceph-admin", "target": ["mon-mgr", ""]}]: dispatch
Oct  1 09:08:50 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_user}] v 0) v1
Oct  1 09:08:50 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:08:50 np0005464214 ceph-mgr[75103]: [cephadm INFO root] Set ssh ssh_user
Oct  1 09:08:50 np0005464214 ceph-mgr[75103]: log_channel(cephadm) log [INF] : Set ssh ssh_user
Oct  1 09:08:50 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_config}] v 0) v1
Oct  1 09:08:50 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:08:50 np0005464214 ceph-mgr[75103]: [cephadm INFO root] Set ssh ssh_config
Oct  1 09:08:50 np0005464214 ceph-mgr[75103]: log_channel(cephadm) log [INF] : Set ssh ssh_config
Oct  1 09:08:50 np0005464214 ceph-mgr[75103]: [cephadm INFO root] ssh user set to ceph-admin. sudo will be used
Oct  1 09:08:50 np0005464214 ceph-mgr[75103]: log_channel(cephadm) log [INF] : ssh user set to ceph-admin. sudo will be used
Oct  1 09:08:50 np0005464214 amazing_payne[76164]: ssh user set to ceph-admin. sudo will be used
Oct  1 09:08:50 np0005464214 systemd[1]: libpod-f4d332e14b78db858a69ac533541dbbcbaecd036dd878e1e715ed36dc2018b98.scope: Deactivated successfully.
Oct  1 09:08:50 np0005464214 podman[76213]: 2025-10-01 13:08:50.20466048 +0000 UTC m=+0.021331358 container died f4d332e14b78db858a69ac533541dbbcbaecd036dd878e1e715ed36dc2018b98 (image=quay.io/ceph/ceph:v18, name=amazing_payne, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct  1 09:08:50 np0005464214 systemd[1]: var-lib-containers-storage-overlay-8c281d33a8538a453d752ce44d6f13a766e1f50449b01b00840c05151ad95158-merged.mount: Deactivated successfully.
Oct  1 09:08:50 np0005464214 podman[76213]: 2025-10-01 13:08:50.239710115 +0000 UTC m=+0.056380983 container remove f4d332e14b78db858a69ac533541dbbcbaecd036dd878e1e715ed36dc2018b98 (image=quay.io/ceph/ceph:v18, name=amazing_payne, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Oct  1 09:08:50 np0005464214 systemd[1]: libpod-conmon-f4d332e14b78db858a69ac533541dbbcbaecd036dd878e1e715ed36dc2018b98.scope: Deactivated successfully.
Oct  1 09:08:50 np0005464214 podman[76228]: 2025-10-01 13:08:50.313354495 +0000 UTC m=+0.046790214 container create dffdb05c80d19a001d13d301afa7459f0b0908ba13fd5cea88cd5d79cc22fd80 (image=quay.io/ceph/ceph:v18, name=focused_sammet, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 09:08:50 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1019922317 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:08:50 np0005464214 systemd[1]: Started libpod-conmon-dffdb05c80d19a001d13d301afa7459f0b0908ba13fd5cea88cd5d79cc22fd80.scope.
Oct  1 09:08:50 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:08:50 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d2369194926e74c3475595bc0298d2804a911040697237d86ebb90ccc271376d/merged/tmp/cephadm-ssh-key supports timestamps until 2038 (0x7fffffff)
Oct  1 09:08:50 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d2369194926e74c3475595bc0298d2804a911040697237d86ebb90ccc271376d/merged/tmp/cephadm-ssh-key.pub supports timestamps until 2038 (0x7fffffff)
Oct  1 09:08:50 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d2369194926e74c3475595bc0298d2804a911040697237d86ebb90ccc271376d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 09:08:50 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d2369194926e74c3475595bc0298d2804a911040697237d86ebb90ccc271376d/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct  1 09:08:50 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d2369194926e74c3475595bc0298d2804a911040697237d86ebb90ccc271376d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 09:08:50 np0005464214 podman[76228]: 2025-10-01 13:08:50.388120197 +0000 UTC m=+0.121555926 container init dffdb05c80d19a001d13d301afa7459f0b0908ba13fd5cea88cd5d79cc22fd80 (image=quay.io/ceph/ceph:v18, name=focused_sammet, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Oct  1 09:08:50 np0005464214 podman[76228]: 2025-10-01 13:08:50.295676268 +0000 UTC m=+0.029111987 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  1 09:08:50 np0005464214 podman[76228]: 2025-10-01 13:08:50.397855109 +0000 UTC m=+0.131290818 container start dffdb05c80d19a001d13d301afa7459f0b0908ba13fd5cea88cd5d79cc22fd80 (image=quay.io/ceph/ceph:v18, name=focused_sammet, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 09:08:50 np0005464214 podman[76228]: 2025-10-01 13:08:50.401468617 +0000 UTC m=+0.134904336 container attach dffdb05c80d19a001d13d301afa7459f0b0908ba13fd5cea88cd5d79cc22fd80 (image=quay.io/ceph/ceph:v18, name=focused_sammet, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 09:08:50 np0005464214 ceph-mon[74802]: log_channel(cluster) log [DBG] : mgrmap e8: compute-0.puxjpb(active, since 2s)
Oct  1 09:08:50 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:08:50 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:08:50 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:08:50 np0005464214 ceph-mgr[75103]: log_channel(audit) log [DBG] : from='client.14148 -' entity='client.admin' cmd=[{"prefix": "cephadm set-priv-key", "target": ["mon-mgr", ""]}]: dispatch
Oct  1 09:08:50 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_identity_key}] v 0) v1
Oct  1 09:08:50 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:08:50 np0005464214 ceph-mgr[75103]: [cephadm INFO root] Set ssh ssh_identity_key
Oct  1 09:08:50 np0005464214 ceph-mgr[75103]: log_channel(cephadm) log [INF] : Set ssh ssh_identity_key
Oct  1 09:08:50 np0005464214 ceph-mgr[75103]: [cephadm INFO root] Set ssh private key
Oct  1 09:08:50 np0005464214 ceph-mgr[75103]: log_channel(cephadm) log [INF] : Set ssh private key
Oct  1 09:08:50 np0005464214 systemd[1]: libpod-dffdb05c80d19a001d13d301afa7459f0b0908ba13fd5cea88cd5d79cc22fd80.scope: Deactivated successfully.
Oct  1 09:08:50 np0005464214 conmon[76245]: conmon dffdb05c80d19a001d13 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-dffdb05c80d19a001d13d301afa7459f0b0908ba13fd5cea88cd5d79cc22fd80.scope/container/memory.events
Oct  1 09:08:50 np0005464214 podman[76228]: 2025-10-01 13:08:50.964051345 +0000 UTC m=+0.697487054 container died dffdb05c80d19a001d13d301afa7459f0b0908ba13fd5cea88cd5d79cc22fd80 (image=quay.io/ceph/ceph:v18, name=focused_sammet, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 09:08:50 np0005464214 systemd[1]: var-lib-containers-storage-overlay-d2369194926e74c3475595bc0298d2804a911040697237d86ebb90ccc271376d-merged.mount: Deactivated successfully.
Oct  1 09:08:51 np0005464214 podman[76228]: 2025-10-01 13:08:51.015662468 +0000 UTC m=+0.749098197 container remove dffdb05c80d19a001d13d301afa7459f0b0908ba13fd5cea88cd5d79cc22fd80 (image=quay.io/ceph/ceph:v18, name=focused_sammet, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 09:08:51 np0005464214 systemd[1]: libpod-conmon-dffdb05c80d19a001d13d301afa7459f0b0908ba13fd5cea88cd5d79cc22fd80.scope: Deactivated successfully.
Oct  1 09:08:51 np0005464214 podman[76284]: 2025-10-01 13:08:51.085055316 +0000 UTC m=+0.041899583 container create ea62c362130f3932e6d6a0e77fa242e1b08bb6fdd28523bc6037e314bcba0ef4 (image=quay.io/ceph/ceph:v18, name=amazing_antonelli, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Oct  1 09:08:51 np0005464214 systemd[1]: Started libpod-conmon-ea62c362130f3932e6d6a0e77fa242e1b08bb6fdd28523bc6037e314bcba0ef4.scope.
Oct  1 09:08:51 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:08:51 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6028ad1a73da64ca5ee202e300398e08d81a84871c332925d3e27307f5e7cf7/merged/tmp/cephadm-ssh-key supports timestamps until 2038 (0x7fffffff)
Oct  1 09:08:51 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6028ad1a73da64ca5ee202e300398e08d81a84871c332925d3e27307f5e7cf7/merged/tmp/cephadm-ssh-key.pub supports timestamps until 2038 (0x7fffffff)
Oct  1 09:08:51 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6028ad1a73da64ca5ee202e300398e08d81a84871c332925d3e27307f5e7cf7/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct  1 09:08:51 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6028ad1a73da64ca5ee202e300398e08d81a84871c332925d3e27307f5e7cf7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 09:08:51 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6028ad1a73da64ca5ee202e300398e08d81a84871c332925d3e27307f5e7cf7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 09:08:51 np0005464214 podman[76284]: 2025-10-01 13:08:51.161172344 +0000 UTC m=+0.118016691 container init ea62c362130f3932e6d6a0e77fa242e1b08bb6fdd28523bc6037e314bcba0ef4 (image=quay.io/ceph/ceph:v18, name=amazing_antonelli, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 09:08:51 np0005464214 podman[76284]: 2025-10-01 13:08:51.068538277 +0000 UTC m=+0.025382564 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  1 09:08:51 np0005464214 podman[76284]: 2025-10-01 13:08:51.170620935 +0000 UTC m=+0.127465202 container start ea62c362130f3932e6d6a0e77fa242e1b08bb6fdd28523bc6037e314bcba0ef4 (image=quay.io/ceph/ceph:v18, name=amazing_antonelli, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 09:08:51 np0005464214 podman[76284]: 2025-10-01 13:08:51.174607309 +0000 UTC m=+0.131451716 container attach ea62c362130f3932e6d6a0e77fa242e1b08bb6fdd28523bc6037e314bcba0ef4 (image=quay.io/ceph/ceph:v18, name=amazing_antonelli, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  1 09:08:51 np0005464214 ceph-mon[74802]: [01/Oct/2025:13:08:49] ENGINE Bus STARTING
Oct  1 09:08:51 np0005464214 ceph-mon[74802]: [01/Oct/2025:13:08:49] ENGINE Serving on https://192.168.122.100:7150
Oct  1 09:08:51 np0005464214 ceph-mon[74802]: [01/Oct/2025:13:08:49] ENGINE Client ('192.168.122.100', 38674) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Oct  1 09:08:51 np0005464214 ceph-mon[74802]: [01/Oct/2025:13:08:50] ENGINE Serving on http://192.168.122.100:8765
Oct  1 09:08:51 np0005464214 ceph-mon[74802]: [01/Oct/2025:13:08:50] ENGINE Bus STARTED
Oct  1 09:08:51 np0005464214 ceph-mon[74802]: Set ssh ssh_user
Oct  1 09:08:51 np0005464214 ceph-mon[74802]: Set ssh ssh_config
Oct  1 09:08:51 np0005464214 ceph-mon[74802]: ssh user set to ceph-admin. sudo will be used
Oct  1 09:08:51 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:08:51 np0005464214 ceph-mgr[75103]: log_channel(audit) log [DBG] : from='client.14150 -' entity='client.admin' cmd=[{"prefix": "cephadm set-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Oct  1 09:08:51 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_identity_pub}] v 0) v1
Oct  1 09:08:51 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:08:51 np0005464214 ceph-mgr[75103]: [cephadm INFO root] Set ssh ssh_identity_pub
Oct  1 09:08:51 np0005464214 ceph-mgr[75103]: log_channel(cephadm) log [INF] : Set ssh ssh_identity_pub
Oct  1 09:08:51 np0005464214 systemd[1]: libpod-ea62c362130f3932e6d6a0e77fa242e1b08bb6fdd28523bc6037e314bcba0ef4.scope: Deactivated successfully.
Oct  1 09:08:51 np0005464214 podman[76284]: 2025-10-01 13:08:51.704319357 +0000 UTC m=+0.661163654 container died ea62c362130f3932e6d6a0e77fa242e1b08bb6fdd28523bc6037e314bcba0ef4 (image=quay.io/ceph/ceph:v18, name=amazing_antonelli, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct  1 09:08:51 np0005464214 ceph-mgr[75103]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Oct  1 09:08:51 np0005464214 systemd[1]: var-lib-containers-storage-overlay-c6028ad1a73da64ca5ee202e300398e08d81a84871c332925d3e27307f5e7cf7-merged.mount: Deactivated successfully.
Oct  1 09:08:51 np0005464214 podman[76284]: 2025-10-01 13:08:51.753568819 +0000 UTC m=+0.710413116 container remove ea62c362130f3932e6d6a0e77fa242e1b08bb6fdd28523bc6037e314bcba0ef4 (image=quay.io/ceph/ceph:v18, name=amazing_antonelli, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 09:08:51 np0005464214 systemd[1]: libpod-conmon-ea62c362130f3932e6d6a0e77fa242e1b08bb6fdd28523bc6037e314bcba0ef4.scope: Deactivated successfully.
Oct  1 09:08:51 np0005464214 podman[76336]: 2025-10-01 13:08:51.832845235 +0000 UTC m=+0.050396062 container create 4f66ac01b926a35dc8cc1982cc47c7017437e187e04bf3012eba66b1c1c66219 (image=quay.io/ceph/ceph:v18, name=festive_antonelli, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 09:08:51 np0005464214 systemd[1]: Started libpod-conmon-4f66ac01b926a35dc8cc1982cc47c7017437e187e04bf3012eba66b1c1c66219.scope.
Oct  1 09:08:51 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:08:51 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b17b54bdb30e9978e4ba372aceac8246e3e7beb5ad39cacda6d339994eaf2e1f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 09:08:51 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b17b54bdb30e9978e4ba372aceac8246e3e7beb5ad39cacda6d339994eaf2e1f/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct  1 09:08:51 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b17b54bdb30e9978e4ba372aceac8246e3e7beb5ad39cacda6d339994eaf2e1f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 09:08:51 np0005464214 podman[76336]: 2025-10-01 13:08:51.817611313 +0000 UTC m=+0.035162160 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  1 09:08:51 np0005464214 podman[76336]: 2025-10-01 13:08:51.919927471 +0000 UTC m=+0.137478378 container init 4f66ac01b926a35dc8cc1982cc47c7017437e187e04bf3012eba66b1c1c66219 (image=quay.io/ceph/ceph:v18, name=festive_antonelli, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Oct  1 09:08:51 np0005464214 podman[76336]: 2025-10-01 13:08:51.928991025 +0000 UTC m=+0.146541892 container start 4f66ac01b926a35dc8cc1982cc47c7017437e187e04bf3012eba66b1c1c66219 (image=quay.io/ceph/ceph:v18, name=festive_antonelli, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 09:08:51 np0005464214 podman[76336]: 2025-10-01 13:08:51.932852473 +0000 UTC m=+0.150403400 container attach 4f66ac01b926a35dc8cc1982cc47c7017437e187e04bf3012eba66b1c1c66219 (image=quay.io/ceph/ceph:v18, name=festive_antonelli, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct  1 09:08:52 np0005464214 ceph-mgr[75103]: log_channel(audit) log [DBG] : from='client.14152 -' entity='client.admin' cmd=[{"prefix": "cephadm get-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Oct  1 09:08:52 np0005464214 festive_antonelli[76352]: ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDI5xAJDkPgCIf6A0Wug1Am7fHXcOL9nUBYSVUBsn0QymGjzCb9x6M/orVCsS+sJX+rxY/wCTMF1ePsKtpvq56LE06MolWp3oieKJ9YLlvpa8DalQkzqEz7+O2HVSYRxm+qX0UaZ5TjLo3ShwHMVsALpy+Mp5QPCNCdXek22hRRix4tQQ1bSRzcONPNWVkm7cok4Oxkwg6QcPdQjwKPN0VDZn0gZb8OUjQNVaZJSIfmh3K7cGcOro6TCObnWcWwkiCs4TWUIHxB4vBHvFwRxUcV7QvAuyY52/T2cmx5XIU8RLi7enL7ADTB7WShmeglRBntpw1QYZZ6ZN/i62wO1ElM9WKUCiGJ5BMIkcJm/w/ufqyEyAPjPROX84iUoWmtYw+c6gIdg5YuRFFxBpRlOEXcC3DbSWZpQ07adU2f2HZ8jjVgSfSEe2aVAceeIsuPJFNOrFr/20LhvHNk226Ji3eM+zIGl/3mSGO7qXzNkmT1EK7NJSQTnqub98vp/1BVQI8= zuul@controller
Oct  1 09:08:52 np0005464214 systemd[1]: libpod-4f66ac01b926a35dc8cc1982cc47c7017437e187e04bf3012eba66b1c1c66219.scope: Deactivated successfully.
Oct  1 09:08:52 np0005464214 podman[76336]: 2025-10-01 13:08:52.453163583 +0000 UTC m=+0.670714400 container died 4f66ac01b926a35dc8cc1982cc47c7017437e187e04bf3012eba66b1c1c66219 (image=quay.io/ceph/ceph:v18, name=festive_antonelli, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 09:08:52 np0005464214 systemd[1]: var-lib-containers-storage-overlay-b17b54bdb30e9978e4ba372aceac8246e3e7beb5ad39cacda6d339994eaf2e1f-merged.mount: Deactivated successfully.
Oct  1 09:08:52 np0005464214 ceph-mon[74802]: Set ssh ssh_identity_key
Oct  1 09:08:52 np0005464214 ceph-mon[74802]: Set ssh private key
Oct  1 09:08:52 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:08:52 np0005464214 podman[76336]: 2025-10-01 13:08:52.613341827 +0000 UTC m=+0.830892654 container remove 4f66ac01b926a35dc8cc1982cc47c7017437e187e04bf3012eba66b1c1c66219 (image=quay.io/ceph/ceph:v18, name=festive_antonelli, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct  1 09:08:52 np0005464214 systemd[1]: libpod-conmon-4f66ac01b926a35dc8cc1982cc47c7017437e187e04bf3012eba66b1c1c66219.scope: Deactivated successfully.
Oct  1 09:08:52 np0005464214 podman[76390]: 2025-10-01 13:08:52.722330186 +0000 UTC m=+0.082146913 container create f686caeac5c258f06e21b0d92f8af04be1ef89fbec930d436fd1a92d70e20882 (image=quay.io/ceph/ceph:v18, name=optimistic_merkle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Oct  1 09:08:52 np0005464214 podman[76390]: 2025-10-01 13:08:52.667449129 +0000 UTC m=+0.027265876 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  1 09:08:52 np0005464214 systemd[1]: Started libpod-conmon-f686caeac5c258f06e21b0d92f8af04be1ef89fbec930d436fd1a92d70e20882.scope.
Oct  1 09:08:52 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:08:52 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b384bd758db9dc9b8f3819e8e483aca11c0bbd5e320025ddcaabf03ea029827/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 09:08:52 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b384bd758db9dc9b8f3819e8e483aca11c0bbd5e320025ddcaabf03ea029827/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct  1 09:08:52 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b384bd758db9dc9b8f3819e8e483aca11c0bbd5e320025ddcaabf03ea029827/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 09:08:52 np0005464214 podman[76390]: 2025-10-01 13:08:52.837649389 +0000 UTC m=+0.197466126 container init f686caeac5c258f06e21b0d92f8af04be1ef89fbec930d436fd1a92d70e20882 (image=quay.io/ceph/ceph:v18, name=optimistic_merkle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True)
Oct  1 09:08:52 np0005464214 podman[76390]: 2025-10-01 13:08:52.848542503 +0000 UTC m=+0.208359220 container start f686caeac5c258f06e21b0d92f8af04be1ef89fbec930d436fd1a92d70e20882 (image=quay.io/ceph/ceph:v18, name=optimistic_merkle, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct  1 09:08:52 np0005464214 podman[76390]: 2025-10-01 13:08:52.864613811 +0000 UTC m=+0.224430548 container attach f686caeac5c258f06e21b0d92f8af04be1ef89fbec930d436fd1a92d70e20882 (image=quay.io/ceph/ceph:v18, name=optimistic_merkle, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  1 09:08:53 np0005464214 ceph-mgr[75103]: log_channel(audit) log [DBG] : from='client.14154 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "compute-0", "addr": "192.168.122.100", "target": ["mon-mgr", ""]}]: dispatch
Oct  1 09:08:53 np0005464214 ceph-mon[74802]: Set ssh ssh_identity_pub
Oct  1 09:08:53 np0005464214 systemd[1]: Created slice User Slice of UID 42477.
Oct  1 09:08:53 np0005464214 systemd[1]: Starting User Runtime Directory /run/user/42477...
Oct  1 09:08:53 np0005464214 systemd-logind[818]: New session 21 of user ceph-admin.
Oct  1 09:08:53 np0005464214 systemd[1]: Finished User Runtime Directory /run/user/42477.
Oct  1 09:08:53 np0005464214 systemd[1]: Starting User Manager for UID 42477...
Oct  1 09:08:53 np0005464214 ceph-mgr[75103]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Oct  1 09:08:53 np0005464214 systemd[76436]: Queued start job for default target Main User Target.
Oct  1 09:08:53 np0005464214 systemd[76436]: Created slice User Application Slice.
Oct  1 09:08:53 np0005464214 systemd[76436]: Started Mark boot as successful after the user session has run 2 minutes.
Oct  1 09:08:53 np0005464214 systemd[76436]: Started Daily Cleanup of User's Temporary Directories.
Oct  1 09:08:53 np0005464214 systemd[76436]: Reached target Paths.
Oct  1 09:08:53 np0005464214 systemd[76436]: Reached target Timers.
Oct  1 09:08:53 np0005464214 systemd[76436]: Starting D-Bus User Message Bus Socket...
Oct  1 09:08:53 np0005464214 systemd[76436]: Starting Create User's Volatile Files and Directories...
Oct  1 09:08:53 np0005464214 systemd-logind[818]: New session 23 of user ceph-admin.
Oct  1 09:08:53 np0005464214 systemd[76436]: Listening on D-Bus User Message Bus Socket.
Oct  1 09:08:53 np0005464214 systemd[76436]: Reached target Sockets.
Oct  1 09:08:53 np0005464214 systemd[76436]: Finished Create User's Volatile Files and Directories.
Oct  1 09:08:53 np0005464214 systemd[76436]: Reached target Basic System.
Oct  1 09:08:53 np0005464214 systemd[76436]: Reached target Main User Target.
Oct  1 09:08:53 np0005464214 systemd[76436]: Startup finished in 125ms.
Oct  1 09:08:53 np0005464214 systemd[1]: Started User Manager for UID 42477.
Oct  1 09:08:53 np0005464214 systemd[1]: Started Session 21 of User ceph-admin.
Oct  1 09:08:53 np0005464214 systemd[1]: Started Session 23 of User ceph-admin.
Oct  1 09:08:54 np0005464214 systemd-logind[818]: New session 24 of user ceph-admin.
Oct  1 09:08:54 np0005464214 systemd[1]: Started Session 24 of User ceph-admin.
Oct  1 09:08:54 np0005464214 systemd-logind[818]: New session 25 of user ceph-admin.
Oct  1 09:08:54 np0005464214 systemd[1]: Started Session 25 of User ceph-admin.
Oct  1 09:08:54 np0005464214 ceph-mgr[75103]: [cephadm INFO cephadm.serve] Deploying cephadm binary to compute-0
Oct  1 09:08:54 np0005464214 ceph-mgr[75103]: log_channel(cephadm) log [INF] : Deploying cephadm binary to compute-0
Oct  1 09:08:55 np0005464214 systemd-logind[818]: New session 26 of user ceph-admin.
Oct  1 09:08:55 np0005464214 systemd[1]: Started Session 26 of User ceph-admin.
Oct  1 09:08:55 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020053030 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:08:55 np0005464214 systemd-logind[818]: New session 27 of user ceph-admin.
Oct  1 09:08:55 np0005464214 systemd[1]: Started Session 27 of User ceph-admin.
Oct  1 09:08:55 np0005464214 ceph-mgr[75103]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Oct  1 09:08:55 np0005464214 ceph-mon[74802]: Deploying cephadm binary to compute-0
Oct  1 09:08:55 np0005464214 systemd-logind[818]: New session 28 of user ceph-admin.
Oct  1 09:08:55 np0005464214 systemd[1]: Started Session 28 of User ceph-admin.
Oct  1 09:08:56 np0005464214 systemd-logind[818]: New session 29 of user ceph-admin.
Oct  1 09:08:56 np0005464214 systemd[1]: Started Session 29 of User ceph-admin.
Oct  1 09:08:56 np0005464214 systemd-logind[818]: New session 30 of user ceph-admin.
Oct  1 09:08:56 np0005464214 systemd[1]: Started Session 30 of User ceph-admin.
Oct  1 09:08:57 np0005464214 systemd-logind[818]: New session 31 of user ceph-admin.
Oct  1 09:08:57 np0005464214 systemd[1]: Started Session 31 of User ceph-admin.
Oct  1 09:08:57 np0005464214 systemd-logind[818]: New session 32 of user ceph-admin.
Oct  1 09:08:57 np0005464214 systemd[1]: Started Session 32 of User ceph-admin.
Oct  1 09:08:57 np0005464214 ceph-mgr[75103]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Oct  1 09:08:58 np0005464214 systemd-logind[818]: New session 33 of user ceph-admin.
Oct  1 09:08:58 np0005464214 systemd[1]: Started Session 33 of User ceph-admin.
Oct  1 09:08:58 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Oct  1 09:08:59 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:08:59 np0005464214 ceph-mgr[75103]: [cephadm INFO root] Added host compute-0
Oct  1 09:08:59 np0005464214 ceph-mgr[75103]: log_channel(cephadm) log [INF] : Added host compute-0
Oct  1 09:08:59 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Oct  1 09:08:59 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Oct  1 09:08:59 np0005464214 optimistic_merkle[76406]: Added host 'compute-0' with addr '192.168.122.100'
Oct  1 09:08:59 np0005464214 systemd[1]: libpod-f686caeac5c258f06e21b0d92f8af04be1ef89fbec930d436fd1a92d70e20882.scope: Deactivated successfully.
Oct  1 09:08:59 np0005464214 podman[77056]: 2025-10-01 13:08:59.32192559 +0000 UTC m=+0.049473171 container died f686caeac5c258f06e21b0d92f8af04be1ef89fbec930d436fd1a92d70e20882 (image=quay.io/ceph/ceph:v18, name=optimistic_merkle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 09:08:59 np0005464214 ceph-mgr[75103]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Oct  1 09:09:00 np0005464214 systemd[1]: var-lib-containers-storage-overlay-4b384bd758db9dc9b8f3819e8e483aca11c0bbd5e320025ddcaabf03ea029827-merged.mount: Deactivated successfully.
Oct  1 09:09:00 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054710 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:09:00 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:09:00 np0005464214 ceph-mon[74802]: Added host compute-0
Oct  1 09:09:01 np0005464214 podman[77056]: 2025-10-01 13:09:01.141092109 +0000 UTC m=+1.868639650 container remove f686caeac5c258f06e21b0d92f8af04be1ef89fbec930d436fd1a92d70e20882 (image=quay.io/ceph/ceph:v18, name=optimistic_merkle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 09:09:01 np0005464214 systemd[1]: libpod-conmon-f686caeac5c258f06e21b0d92f8af04be1ef89fbec930d436fd1a92d70e20882.scope: Deactivated successfully.
Oct  1 09:09:01 np0005464214 podman[77189]: 2025-10-01 13:09:01.218152739 +0000 UTC m=+0.026731733 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  1 09:09:01 np0005464214 podman[77186]: 2025-10-01 13:09:01.224449233 +0000 UTC m=+0.037913730 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  1 09:09:01 np0005464214 podman[77189]: 2025-10-01 13:09:01.567214463 +0000 UTC m=+0.375793387 container create 3c67069d5d20e24885d0498a4800fb2693407eb6b11f3f1f33f6181f684e430d (image=quay.io/ceph/ceph:v18, name=sharp_chaplygin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Oct  1 09:09:01 np0005464214 ceph-mgr[75103]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Oct  1 09:09:01 np0005464214 systemd[1]: Started libpod-conmon-3c67069d5d20e24885d0498a4800fb2693407eb6b11f3f1f33f6181f684e430d.scope.
Oct  1 09:09:01 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:09:02 np0005464214 podman[77186]: 2025-10-01 13:09:02.111258575 +0000 UTC m=+0.924722972 container create 6f188b5a4623131aae3b0c2a3d6241b17dacfb64830a28d055f9c5d5a86eac74 (image=quay.io/ceph/ceph:v18, name=blissful_bhaskara, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 09:09:02 np0005464214 systemd[1]: Started libpod-conmon-6f188b5a4623131aae3b0c2a3d6241b17dacfb64830a28d055f9c5d5a86eac74.scope.
Oct  1 09:09:02 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:09:02 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/748dda00494bd032039e6d1250e7f29fe7836888a270c97529edfece02e8a35d/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct  1 09:09:02 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/748dda00494bd032039e6d1250e7f29fe7836888a270c97529edfece02e8a35d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 09:09:02 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/748dda00494bd032039e6d1250e7f29fe7836888a270c97529edfece02e8a35d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 09:09:02 np0005464214 podman[77189]: 2025-10-01 13:09:02.803270851 +0000 UTC m=+1.611849825 container init 3c67069d5d20e24885d0498a4800fb2693407eb6b11f3f1f33f6181f684e430d (image=quay.io/ceph/ceph:v18, name=sharp_chaplygin, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 09:09:02 np0005464214 podman[77189]: 2025-10-01 13:09:02.815346097 +0000 UTC m=+1.623925001 container start 3c67069d5d20e24885d0498a4800fb2693407eb6b11f3f1f33f6181f684e430d (image=quay.io/ceph/ceph:v18, name=sharp_chaplygin, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 09:09:03 np0005464214 podman[77189]: 2025-10-01 13:09:03.061368952 +0000 UTC m=+1.869947866 container attach 3c67069d5d20e24885d0498a4800fb2693407eb6b11f3f1f33f6181f684e430d (image=quay.io/ceph/ceph:v18, name=sharp_chaplygin, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct  1 09:09:03 np0005464214 sharp_chaplygin[77219]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)
Oct  1 09:09:03 np0005464214 systemd[1]: libpod-3c67069d5d20e24885d0498a4800fb2693407eb6b11f3f1f33f6181f684e430d.scope: Deactivated successfully.
Oct  1 09:09:03 np0005464214 podman[77189]: 2025-10-01 13:09:03.12090448 +0000 UTC m=+1.929483374 container died 3c67069d5d20e24885d0498a4800fb2693407eb6b11f3f1f33f6181f684e430d (image=quay.io/ceph/ceph:v18, name=sharp_chaplygin, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct  1 09:09:03 np0005464214 ceph-mgr[75103]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Oct  1 09:09:03 np0005464214 systemd[1]: var-lib-containers-storage-overlay-459ffc7c18b85ba1c5c32fe0bc9ec1899c78781e2aeaafad1d495b45c7d15330-merged.mount: Deactivated successfully.
Oct  1 09:09:04 np0005464214 podman[77186]: 2025-10-01 13:09:04.130648689 +0000 UTC m=+2.944113126 container init 6f188b5a4623131aae3b0c2a3d6241b17dacfb64830a28d055f9c5d5a86eac74 (image=quay.io/ceph/ceph:v18, name=blissful_bhaskara, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 09:09:04 np0005464214 podman[77186]: 2025-10-01 13:09:04.135697499 +0000 UTC m=+2.949161916 container start 6f188b5a4623131aae3b0c2a3d6241b17dacfb64830a28d055f9c5d5a86eac74 (image=quay.io/ceph/ceph:v18, name=blissful_bhaskara, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct  1 09:09:04 np0005464214 podman[77186]: 2025-10-01 13:09:04.613640357 +0000 UTC m=+3.427104844 container attach 6f188b5a4623131aae3b0c2a3d6241b17dacfb64830a28d055f9c5d5a86eac74 (image=quay.io/ceph/ceph:v18, name=blissful_bhaskara, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 09:09:04 np0005464214 ceph-mgr[75103]: log_channel(audit) log [DBG] : from='client.14156 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "target": ["mon-mgr", ""]}]: dispatch
Oct  1 09:09:04 np0005464214 ceph-mgr[75103]: [cephadm INFO root] Saving service mon spec with placement count:5
Oct  1 09:09:04 np0005464214 ceph-mgr[75103]: log_channel(cephadm) log [INF] : Saving service mon spec with placement count:5
Oct  1 09:09:04 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0) v1
Oct  1 09:09:05 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:09:05 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:09:05 np0005464214 blissful_bhaskara[77224]: Scheduled mon update...
Oct  1 09:09:05 np0005464214 systemd[1]: libpod-6f188b5a4623131aae3b0c2a3d6241b17dacfb64830a28d055f9c5d5a86eac74.scope: Deactivated successfully.
Oct  1 09:09:05 np0005464214 ceph-mgr[75103]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Oct  1 09:09:06 np0005464214 podman[77189]: 2025-10-01 13:09:06.113086435 +0000 UTC m=+4.921665349 container remove 3c67069d5d20e24885d0498a4800fb2693407eb6b11f3f1f33f6181f684e430d (image=quay.io/ceph/ceph:v18, name=sharp_chaplygin, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 09:09:06 np0005464214 podman[77186]: 2025-10-01 13:09:06.144707789 +0000 UTC m=+4.958172186 container died 6f188b5a4623131aae3b0c2a3d6241b17dacfb64830a28d055f9c5d5a86eac74 (image=quay.io/ceph/ceph:v18, name=blissful_bhaskara, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 09:09:06 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=container_image}] v 0) v1
Oct  1 09:09:06 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:09:06 np0005464214 ceph-mon[74802]: Saving service mon spec with placement count:5
Oct  1 09:09:06 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:09:06 np0005464214 systemd[1]: var-lib-containers-storage-overlay-748dda00494bd032039e6d1250e7f29fe7836888a270c97529edfece02e8a35d-merged.mount: Deactivated successfully.
Oct  1 09:09:07 np0005464214 podman[77186]: 2025-10-01 13:09:07.429537767 +0000 UTC m=+6.243002164 container remove 6f188b5a4623131aae3b0c2a3d6241b17dacfb64830a28d055f9c5d5a86eac74 (image=quay.io/ceph/ceph:v18, name=blissful_bhaskara, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 09:09:07 np0005464214 podman[77389]: 2025-10-01 13:09:07.477828907 +0000 UTC m=+0.026593957 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  1 09:09:07 np0005464214 podman[77389]: 2025-10-01 13:09:07.641699092 +0000 UTC m=+0.190464102 container create 7ff959660e0fee235ec946116dbca5e8353377827666850e510b9666d41f6e76 (image=quay.io/ceph/ceph:v18, name=hardcore_varahamihira, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 09:09:07 np0005464214 ceph-mgr[75103]: mgr.server send_report Giving up on OSDs that haven't reported yet, sending potentially incomplete PG state to mon
Oct  1 09:09:07 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v3: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct  1 09:09:07 np0005464214 ceph-mon[74802]: log_channel(cluster) log [WRN] : Health check failed: OSD count 0 < osd_pool_default_size 1 (TOO_FEW_OSDS)
Oct  1 09:09:07 np0005464214 systemd[1]: libpod-conmon-6f188b5a4623131aae3b0c2a3d6241b17dacfb64830a28d055f9c5d5a86eac74.scope: Deactivated successfully.
Oct  1 09:09:07 np0005464214 systemd[1]: Started libpod-conmon-7ff959660e0fee235ec946116dbca5e8353377827666850e510b9666d41f6e76.scope.
Oct  1 09:09:07 np0005464214 systemd[1]: libpod-conmon-3c67069d5d20e24885d0498a4800fb2693407eb6b11f3f1f33f6181f684e430d.scope: Deactivated successfully.
Oct  1 09:09:07 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:09:07 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/efb7cae3cdf549c62844fb69938ebbd4daf1df791535d4774e51ffe90b82fb94/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 09:09:07 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/efb7cae3cdf549c62844fb69938ebbd4daf1df791535d4774e51ffe90b82fb94/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct  1 09:09:07 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/efb7cae3cdf549c62844fb69938ebbd4daf1df791535d4774e51ffe90b82fb94/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 09:09:07 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  1 09:09:08 np0005464214 podman[77389]: 2025-10-01 13:09:08.042237676 +0000 UTC m=+0.591002686 container init 7ff959660e0fee235ec946116dbca5e8353377827666850e510b9666d41f6e76 (image=quay.io/ceph/ceph:v18, name=hardcore_varahamihira, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Oct  1 09:09:08 np0005464214 podman[77389]: 2025-10-01 13:09:08.052787594 +0000 UTC m=+0.601552604 container start 7ff959660e0fee235ec946116dbca5e8353377827666850e510b9666d41f6e76 (image=quay.io/ceph/ceph:v18, name=hardcore_varahamihira, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 09:09:08 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:09:08 np0005464214 ceph-mon[74802]: Health check failed: OSD count 0 < osd_pool_default_size 1 (TOO_FEW_OSDS)
Oct  1 09:09:08 np0005464214 podman[77389]: 2025-10-01 13:09:08.25075715 +0000 UTC m=+0.799522170 container attach 7ff959660e0fee235ec946116dbca5e8353377827666850e510b9666d41f6e76 (image=quay.io/ceph/ceph:v18, name=hardcore_varahamihira, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Oct  1 09:09:08 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:09:08 np0005464214 ceph-mgr[75103]: log_channel(audit) log [DBG] : from='client.14158 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "target": ["mon-mgr", ""]}]: dispatch
Oct  1 09:09:08 np0005464214 ceph-mgr[75103]: [cephadm INFO root] Saving service mgr spec with placement count:2
Oct  1 09:09:08 np0005464214 ceph-mgr[75103]: log_channel(cephadm) log [INF] : Saving service mgr spec with placement count:2
Oct  1 09:09:08 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Oct  1 09:09:08 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:09:08 np0005464214 hardcore_varahamihira[77407]: Scheduled mgr update...
Oct  1 09:09:08 np0005464214 systemd[1]: libpod-7ff959660e0fee235ec946116dbca5e8353377827666850e510b9666d41f6e76.scope: Deactivated successfully.
Oct  1 09:09:08 np0005464214 podman[77389]: 2025-10-01 13:09:08.747385201 +0000 UTC m=+1.296150251 container died 7ff959660e0fee235ec946116dbca5e8353377827666850e510b9666d41f6e76 (image=quay.io/ceph/ceph:v18, name=hardcore_varahamihira, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 09:09:09 np0005464214 systemd[1]: var-lib-containers-storage-overlay-efb7cae3cdf549c62844fb69938ebbd4daf1df791535d4774e51ffe90b82fb94-merged.mount: Deactivated successfully.
Oct  1 09:09:09 np0005464214 podman[77389]: 2025-10-01 13:09:09.421435115 +0000 UTC m=+1.970200125 container remove 7ff959660e0fee235ec946116dbca5e8353377827666850e510b9666d41f6e76 (image=quay.io/ceph/ceph:v18, name=hardcore_varahamihira, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 09:09:09 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:09:09 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:09:09 np0005464214 podman[77564]: 2025-10-01 13:09:09.519813133 +0000 UTC m=+0.080125325 container create caa7a85cc2032ef442f2fb5c8d705f48911f5e1c56c9ad1bb493bfb3c743d2a2 (image=quay.io/ceph/ceph:v18, name=heuristic_jennings, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Oct  1 09:09:09 np0005464214 podman[77564]: 2025-10-01 13:09:09.460059014 +0000 UTC m=+0.020371186 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  1 09:09:09 np0005464214 systemd[1]: Started libpod-conmon-caa7a85cc2032ef442f2fb5c8d705f48911f5e1c56c9ad1bb493bfb3c743d2a2.scope.
Oct  1 09:09:09 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:09:09 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df20ca8eb03adf7a64f9a8c0307967c308a81878b828e1c3076634607a9cde43/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 09:09:09 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df20ca8eb03adf7a64f9a8c0307967c308a81878b828e1c3076634607a9cde43/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 09:09:09 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df20ca8eb03adf7a64f9a8c0307967c308a81878b828e1c3076634607a9cde43/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct  1 09:09:09 np0005464214 podman[77564]: 2025-10-01 13:09:09.627150498 +0000 UTC m=+0.187462730 container init caa7a85cc2032ef442f2fb5c8d705f48911f5e1c56c9ad1bb493bfb3c743d2a2 (image=quay.io/ceph/ceph:v18, name=heuristic_jennings, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3)
Oct  1 09:09:09 np0005464214 podman[77564]: 2025-10-01 13:09:09.632613316 +0000 UTC m=+0.192925508 container start caa7a85cc2032ef442f2fb5c8d705f48911f5e1c56c9ad1bb493bfb3c743d2a2 (image=quay.io/ceph/ceph:v18, name=heuristic_jennings, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 09:09:09 np0005464214 podman[77564]: 2025-10-01 13:09:09.687326825 +0000 UTC m=+0.247639027 container attach caa7a85cc2032ef442f2fb5c8d705f48911f5e1c56c9ad1bb493bfb3c743d2a2 (image=quay.io/ceph/ceph:v18, name=heuristic_jennings, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Oct  1 09:09:09 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v4: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct  1 09:09:09 np0005464214 systemd[1]: libpod-conmon-7ff959660e0fee235ec946116dbca5e8353377827666850e510b9666d41f6e76.scope: Deactivated successfully.
Oct  1 09:09:10 np0005464214 podman[77647]: 2025-10-01 13:09:10.06100228 +0000 UTC m=+0.149748731 container exec dfadbb96d7d51fa7c2ed721145616ced603621ae12bae5125feaf16532ef7320 (image=quay.io/ceph/ceph:v18, name=ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-mon-compute-0, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 09:09:10 np0005464214 ceph-mgr[75103]: log_channel(audit) log [DBG] : from='client.14160 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "crash", "target": ["mon-mgr", ""]}]: dispatch
Oct  1 09:09:10 np0005464214 ceph-mgr[75103]: [cephadm INFO root] Saving service crash spec with placement *
Oct  1 09:09:10 np0005464214 ceph-mgr[75103]: log_channel(cephadm) log [INF] : Saving service crash spec with placement *
Oct  1 09:09:10 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0) v1
Oct  1 09:09:10 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:09:10 np0005464214 heuristic_jennings[77594]: Scheduled crash update...
Oct  1 09:09:10 np0005464214 systemd[1]: libpod-caa7a85cc2032ef442f2fb5c8d705f48911f5e1c56c9ad1bb493bfb3c743d2a2.scope: Deactivated successfully.
Oct  1 09:09:10 np0005464214 podman[77564]: 2025-10-01 13:09:10.24499968 +0000 UTC m=+0.805311822 container died caa7a85cc2032ef442f2fb5c8d705f48911f5e1c56c9ad1bb493bfb3c743d2a2 (image=quay.io/ceph/ceph:v18, name=heuristic_jennings, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct  1 09:09:10 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:09:10 np0005464214 systemd[1]: var-lib-containers-storage-overlay-df20ca8eb03adf7a64f9a8c0307967c308a81878b828e1c3076634607a9cde43-merged.mount: Deactivated successfully.
Oct  1 09:09:10 np0005464214 ceph-mon[74802]: Saving service mgr spec with placement count:2
Oct  1 09:09:10 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:09:10 np0005464214 podman[77564]: 2025-10-01 13:09:10.908915384 +0000 UTC m=+1.469227576 container remove caa7a85cc2032ef442f2fb5c8d705f48911f5e1c56c9ad1bb493bfb3c743d2a2 (image=quay.io/ceph/ceph:v18, name=heuristic_jennings, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct  1 09:09:10 np0005464214 systemd[1]: libpod-conmon-caa7a85cc2032ef442f2fb5c8d705f48911f5e1c56c9ad1bb493bfb3c743d2a2.scope: Deactivated successfully.
Oct  1 09:09:11 np0005464214 podman[77707]: 2025-10-01 13:09:11.020647811 +0000 UTC m=+0.084988996 container create 0f3eabf0cbe9910e0e5fc811c5accbe3f70d0623e38b623af8517cb15f781aec (image=quay.io/ceph/ceph:v18, name=nervous_dijkstra, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Oct  1 09:09:11 np0005464214 podman[77707]: 2025-10-01 13:09:10.962611657 +0000 UTC m=+0.026952862 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  1 09:09:11 np0005464214 podman[77647]: 2025-10-01 13:09:11.108355674 +0000 UTC m=+1.197102115 container exec_died dfadbb96d7d51fa7c2ed721145616ced603621ae12bae5125feaf16532ef7320 (image=quay.io/ceph/ceph:v18, name=ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-mon-compute-0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 09:09:11 np0005464214 systemd[1]: Started libpod-conmon-0f3eabf0cbe9910e0e5fc811c5accbe3f70d0623e38b623af8517cb15f781aec.scope.
Oct  1 09:09:11 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:09:11 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49dae1c83a964508a4b041f2b691e9a9986cdb30e96e3a3ea9bfa45af78059e2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 09:09:11 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49dae1c83a964508a4b041f2b691e9a9986cdb30e96e3a3ea9bfa45af78059e2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 09:09:11 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49dae1c83a964508a4b041f2b691e9a9986cdb30e96e3a3ea9bfa45af78059e2/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct  1 09:09:11 np0005464214 podman[77707]: 2025-10-01 13:09:11.453692267 +0000 UTC m=+0.518033532 container init 0f3eabf0cbe9910e0e5fc811c5accbe3f70d0623e38b623af8517cb15f781aec (image=quay.io/ceph/ceph:v18, name=nervous_dijkstra, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 09:09:11 np0005464214 podman[77707]: 2025-10-01 13:09:11.459941079 +0000 UTC m=+0.524282264 container start 0f3eabf0cbe9910e0e5fc811c5accbe3f70d0623e38b623af8517cb15f781aec (image=quay.io/ceph/ceph:v18, name=nervous_dijkstra, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 09:09:11 np0005464214 podman[77707]: 2025-10-01 13:09:11.540706311 +0000 UTC m=+0.605047496 container attach 0f3eabf0cbe9910e0e5fc811c5accbe3f70d0623e38b623af8517cb15f781aec (image=quay.io/ceph/ceph:v18, name=nervous_dijkstra, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 09:09:11 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  1 09:09:11 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:09:11 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v5: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct  1 09:09:11 np0005464214 ceph-mon[74802]: Saving service crash spec with placement *
Oct  1 09:09:11 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:09:11 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/container_init}] v 0) v1
Oct  1 09:09:12 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2848333521' entity='client.admin' 
Oct  1 09:09:12 np0005464214 systemd[1]: libpod-0f3eabf0cbe9910e0e5fc811c5accbe3f70d0623e38b623af8517cb15f781aec.scope: Deactivated successfully.
Oct  1 09:09:12 np0005464214 podman[77707]: 2025-10-01 13:09:12.041763614 +0000 UTC m=+1.106104809 container died 0f3eabf0cbe9910e0e5fc811c5accbe3f70d0623e38b623af8517cb15f781aec (image=quay.io/ceph/ceph:v18, name=nervous_dijkstra, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct  1 09:09:12 np0005464214 systemd[1]: proc-sys-fs-binfmt_misc.automount: Got automount request for /proc/sys/fs/binfmt_misc, triggered by 77896 (sysctl)
Oct  1 09:09:12 np0005464214 systemd[1]: Mounting Arbitrary Executable File Formats File System...
Oct  1 09:09:12 np0005464214 systemd[1]: Mounted Arbitrary Executable File Formats File System.
Oct  1 09:09:12 np0005464214 systemd[1]: var-lib-containers-storage-overlay-49dae1c83a964508a4b041f2b691e9a9986cdb30e96e3a3ea9bfa45af78059e2-merged.mount: Deactivated successfully.
Oct  1 09:09:12 np0005464214 podman[77707]: 2025-10-01 13:09:12.359053608 +0000 UTC m=+1.423394833 container remove 0f3eabf0cbe9910e0e5fc811c5accbe3f70d0623e38b623af8517cb15f781aec (image=quay.io/ceph/ceph:v18, name=nervous_dijkstra, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 09:09:12 np0005464214 systemd[1]: libpod-conmon-0f3eabf0cbe9910e0e5fc811c5accbe3f70d0623e38b623af8517cb15f781aec.scope: Deactivated successfully.
Oct  1 09:09:12 np0005464214 podman[77906]: 2025-10-01 13:09:12.409648438 +0000 UTC m=+0.025388905 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  1 09:09:12 np0005464214 podman[77906]: 2025-10-01 13:09:12.538229528 +0000 UTC m=+0.153969945 container create fe3693578b4a450845fd44011d42f94549078af01a1bf732741b8ce631e52e94 (image=quay.io/ceph/ceph:v18, name=kind_driscoll, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 09:09:12 np0005464214 systemd[1]: Started libpod-conmon-fe3693578b4a450845fd44011d42f94549078af01a1bf732741b8ce631e52e94.scope.
Oct  1 09:09:12 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:09:12 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/79d0fd50d22869f99d25d986093a42ebe90efe3597a67290ad2db8f570621a9b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 09:09:12 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/79d0fd50d22869f99d25d986093a42ebe90efe3597a67290ad2db8f570621a9b/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct  1 09:09:12 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/79d0fd50d22869f99d25d986093a42ebe90efe3597a67290ad2db8f570621a9b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 09:09:12 np0005464214 podman[77906]: 2025-10-01 13:09:12.716468937 +0000 UTC m=+0.332209364 container init fe3693578b4a450845fd44011d42f94549078af01a1bf732741b8ce631e52e94 (image=quay.io/ceph/ceph:v18, name=kind_driscoll, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 09:09:12 np0005464214 podman[77906]: 2025-10-01 13:09:12.722833484 +0000 UTC m=+0.338573901 container start fe3693578b4a450845fd44011d42f94549078af01a1bf732741b8ce631e52e94 (image=quay.io/ceph/ceph:v18, name=kind_driscoll, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct  1 09:09:12 np0005464214 podman[77906]: 2025-10-01 13:09:12.7915499 +0000 UTC m=+0.407290357 container attach fe3693578b4a450845fd44011d42f94549078af01a1bf732741b8ce631e52e94 (image=quay.io/ceph/ceph:v18, name=kind_driscoll, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 09:09:13 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  1 09:09:13 np0005464214 ceph-mon[74802]: from='client.? 192.168.122.100:0/2848333521' entity='client.admin' 
Oct  1 09:09:13 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:09:13 np0005464214 ceph-mgr[75103]: log_channel(audit) log [DBG] : from='client.14164 -' entity='client.admin' cmd=[{"prefix": "orch client-keyring set", "entity": "client.admin", "placement": "label:_admin", "target": ["mon-mgr", ""]}]: dispatch
Oct  1 09:09:13 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/client_keyrings}] v 0) v1
Oct  1 09:09:13 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:09:13 np0005464214 systemd[1]: libpod-fe3693578b4a450845fd44011d42f94549078af01a1bf732741b8ce631e52e94.scope: Deactivated successfully.
Oct  1 09:09:13 np0005464214 podman[77906]: 2025-10-01 13:09:13.47721804 +0000 UTC m=+1.092958457 container died fe3693578b4a450845fd44011d42f94549078af01a1bf732741b8ce631e52e94 (image=quay.io/ceph/ceph:v18, name=kind_driscoll, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct  1 09:09:13 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v6: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct  1 09:09:13 np0005464214 systemd[1]: var-lib-containers-storage-overlay-79d0fd50d22869f99d25d986093a42ebe90efe3597a67290ad2db8f570621a9b-merged.mount: Deactivated successfully.
Oct  1 09:09:14 np0005464214 podman[77906]: 2025-10-01 13:09:14.037154333 +0000 UTC m=+1.652894790 container remove fe3693578b4a450845fd44011d42f94549078af01a1bf732741b8ce631e52e94 (image=quay.io/ceph/ceph:v18, name=kind_driscoll, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 09:09:14 np0005464214 systemd[1]: libpod-conmon-fe3693578b4a450845fd44011d42f94549078af01a1bf732741b8ce631e52e94.scope: Deactivated successfully.
Oct  1 09:09:14 np0005464214 podman[78203]: 2025-10-01 13:09:14.117414213 +0000 UTC m=+0.044366151 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  1 09:09:14 np0005464214 podman[78203]: 2025-10-01 13:09:14.265008579 +0000 UTC m=+0.191960457 container create ba45e649901c4c560cde524e4295c6154bd52a52d2f3d77d898781ae2df99657 (image=quay.io/ceph/ceph:v18, name=festive_edison, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 09:09:14 np0005464214 systemd[1]: Started libpod-conmon-ba45e649901c4c560cde524e4295c6154bd52a52d2f3d77d898781ae2df99657.scope.
Oct  1 09:09:14 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:09:14 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:09:14 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:09:14 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5be8f2ca689e834664f25f6bd12a356a347fd66b880be5333b9dca340eeec2cc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 09:09:14 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5be8f2ca689e834664f25f6bd12a356a347fd66b880be5333b9dca340eeec2cc/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct  1 09:09:14 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5be8f2ca689e834664f25f6bd12a356a347fd66b880be5333b9dca340eeec2cc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 09:09:14 np0005464214 podman[78203]: 2025-10-01 13:09:14.661290247 +0000 UTC m=+0.588242115 container init ba45e649901c4c560cde524e4295c6154bd52a52d2f3d77d898781ae2df99657 (image=quay.io/ceph/ceph:v18, name=festive_edison, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Oct  1 09:09:14 np0005464214 podman[78203]: 2025-10-01 13:09:14.671254821 +0000 UTC m=+0.598206699 container start ba45e649901c4c560cde524e4295c6154bd52a52d2f3d77d898781ae2df99657 (image=quay.io/ceph/ceph:v18, name=festive_edison, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True)
Oct  1 09:09:14 np0005464214 podman[78203]: 2025-10-01 13:09:14.798466881 +0000 UTC m=+0.725418749 container attach ba45e649901c4c560cde524e4295c6154bd52a52d2f3d77d898781ae2df99657 (image=quay.io/ceph/ceph:v18, name=festive_edison, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 09:09:14 np0005464214 podman[78250]: 2025-10-01 13:09:14.852253689 +0000 UTC m=+0.026038973 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 09:09:14 np0005464214 podman[78250]: 2025-10-01 13:09:14.983901913 +0000 UTC m=+0.157687157 container create 36b46a96440cd069b27f689e550f511b299d3c180fe07d2cd6fbfd04bf30a0b3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_chandrasekhar, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct  1 09:09:15 np0005464214 systemd[1]: Started libpod-conmon-36b46a96440cd069b27f689e550f511b299d3c180fe07d2cd6fbfd04bf30a0b3.scope.
Oct  1 09:09:15 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:09:15 np0005464214 podman[78250]: 2025-10-01 13:09:15.2876791 +0000 UTC m=+0.461464394 container init 36b46a96440cd069b27f689e550f511b299d3c180fe07d2cd6fbfd04bf30a0b3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_chandrasekhar, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Oct  1 09:09:15 np0005464214 ceph-mgr[75103]: log_channel(audit) log [DBG] : from='client.14166 -' entity='client.admin' cmd=[{"prefix": "orch host label add", "hostname": "compute-0", "label": "_admin", "target": ["mon-mgr", ""]}]: dispatch
Oct  1 09:09:15 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Oct  1 09:09:15 np0005464214 podman[78250]: 2025-10-01 13:09:15.298347314 +0000 UTC m=+0.472132568 container start 36b46a96440cd069b27f689e550f511b299d3c180fe07d2cd6fbfd04bf30a0b3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_chandrasekhar, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 09:09:15 np0005464214 peaceful_chandrasekhar[78285]: 167 167
Oct  1 09:09:15 np0005464214 systemd[1]: libpod-36b46a96440cd069b27f689e550f511b299d3c180fe07d2cd6fbfd04bf30a0b3.scope: Deactivated successfully.
Oct  1 09:09:15 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:09:15 np0005464214 podman[78250]: 2025-10-01 13:09:15.423755965 +0000 UTC m=+0.597541249 container attach 36b46a96440cd069b27f689e550f511b299d3c180fe07d2cd6fbfd04bf30a0b3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_chandrasekhar, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Oct  1 09:09:15 np0005464214 podman[78250]: 2025-10-01 13:09:15.424215865 +0000 UTC m=+0.598001109 container died 36b46a96440cd069b27f689e550f511b299d3c180fe07d2cd6fbfd04bf30a0b3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_chandrasekhar, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Oct  1 09:09:15 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:09:15 np0005464214 ceph-mgr[75103]: [cephadm INFO root] Added label _admin to host compute-0
Oct  1 09:09:15 np0005464214 ceph-mgr[75103]: log_channel(cephadm) log [INF] : Added label _admin to host compute-0
Oct  1 09:09:15 np0005464214 festive_edison[78245]: Added label _admin to host compute-0
Oct  1 09:09:15 np0005464214 systemd[1]: libpod-ba45e649901c4c560cde524e4295c6154bd52a52d2f3d77d898781ae2df99657.scope: Deactivated successfully.
Oct  1 09:09:15 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v7: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct  1 09:09:16 np0005464214 systemd[1]: var-lib-containers-storage-overlay-6e6a32a66fb274c8b91170120b8411a32db8c0f36be6e257df8c4f88da4b1dee-merged.mount: Deactivated successfully.
Oct  1 09:09:16 np0005464214 podman[78250]: 2025-10-01 13:09:16.628551044 +0000 UTC m=+1.802336308 container remove 36b46a96440cd069b27f689e550f511b299d3c180fe07d2cd6fbfd04bf30a0b3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_chandrasekhar, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct  1 09:09:16 np0005464214 podman[78203]: 2025-10-01 13:09:16.635441393 +0000 UTC m=+2.562393261 container died ba45e649901c4c560cde524e4295c6154bd52a52d2f3d77d898781ae2df99657 (image=quay.io/ceph/ceph:v18, name=festive_edison, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 09:09:16 np0005464214 systemd[1]: var-lib-containers-storage-overlay-5be8f2ca689e834664f25f6bd12a356a347fd66b880be5333b9dca340eeec2cc-merged.mount: Deactivated successfully.
Oct  1 09:09:17 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:09:17 np0005464214 ceph-mon[74802]: Added label _admin to host compute-0
Oct  1 09:09:17 np0005464214 podman[78203]: 2025-10-01 13:09:17.307506223 +0000 UTC m=+3.234458071 container remove ba45e649901c4c560cde524e4295c6154bd52a52d2f3d77d898781ae2df99657 (image=quay.io/ceph/ceph:v18, name=festive_edison, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 09:09:17 np0005464214 systemd[1]: libpod-conmon-36b46a96440cd069b27f689e550f511b299d3c180fe07d2cd6fbfd04bf30a0b3.scope: Deactivated successfully.
Oct  1 09:09:17 np0005464214 systemd[1]: libpod-conmon-ba45e649901c4c560cde524e4295c6154bd52a52d2f3d77d898781ae2df99657.scope: Deactivated successfully.
Oct  1 09:09:17 np0005464214 podman[78318]: 2025-10-01 13:09:17.387357887 +0000 UTC m=+0.041368415 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  1 09:09:17 np0005464214 podman[78318]: 2025-10-01 13:09:17.527996208 +0000 UTC m=+0.182006736 container create 5ccef933fd86814ba645de57851195a327ad2360d761c796b3601661cc5b790a (image=quay.io/ceph/ceph:v18, name=eager_gagarin, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 09:09:17 np0005464214 systemd[1]: Started libpod-conmon-5ccef933fd86814ba645de57851195a327ad2360d761c796b3601661cc5b790a.scope.
Oct  1 09:09:17 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:09:17 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/42a10f85c1003d7a0147a4b79e976eaec3aa2c041d451249628a00ab9ca46486/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 09:09:17 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/42a10f85c1003d7a0147a4b79e976eaec3aa2c041d451249628a00ab9ca46486/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct  1 09:09:17 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/42a10f85c1003d7a0147a4b79e976eaec3aa2c041d451249628a00ab9ca46486/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 09:09:17 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v8: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct  1 09:09:17 np0005464214 podman[78318]: 2025-10-01 13:09:17.731359741 +0000 UTC m=+0.385370269 container init 5ccef933fd86814ba645de57851195a327ad2360d761c796b3601661cc5b790a (image=quay.io/ceph/ceph:v18, name=eager_gagarin, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 09:09:17 np0005464214 podman[78318]: 2025-10-01 13:09:17.741908576 +0000 UTC m=+0.395919064 container start 5ccef933fd86814ba645de57851195a327ad2360d761c796b3601661cc5b790a (image=quay.io/ceph/ceph:v18, name=eager_gagarin, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 09:09:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 09:09:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 09:09:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 09:09:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 09:09:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 09:09:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 09:09:17 np0005464214 podman[78318]: 2025-10-01 13:09:17.844312954 +0000 UTC m=+0.498323442 container attach 5ccef933fd86814ba645de57851195a327ad2360d761c796b3601661cc5b790a (image=quay.io/ceph/ceph:v18, name=eager_gagarin, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Oct  1 09:09:18 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=osd_memory_target_autotune}] v 0) v1
Oct  1 09:09:18 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3755830514' entity='client.admin' 
Oct  1 09:09:18 np0005464214 systemd[1]: libpod-5ccef933fd86814ba645de57851195a327ad2360d761c796b3601661cc5b790a.scope: Deactivated successfully.
Oct  1 09:09:18 np0005464214 podman[78318]: 2025-10-01 13:09:18.364969014 +0000 UTC m=+1.018979562 container died 5ccef933fd86814ba645de57851195a327ad2360d761c796b3601661cc5b790a (image=quay.io/ceph/ceph:v18, name=eager_gagarin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 09:09:18 np0005464214 systemd[1]: var-lib-containers-storage-overlay-42a10f85c1003d7a0147a4b79e976eaec3aa2c041d451249628a00ab9ca46486-merged.mount: Deactivated successfully.
Oct  1 09:09:18 np0005464214 podman[78318]: 2025-10-01 13:09:18.563226295 +0000 UTC m=+1.217236793 container remove 5ccef933fd86814ba645de57851195a327ad2360d761c796b3601661cc5b790a (image=quay.io/ceph/ceph:v18, name=eager_gagarin, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct  1 09:09:18 np0005464214 systemd[1]: libpod-conmon-5ccef933fd86814ba645de57851195a327ad2360d761c796b3601661cc5b790a.scope: Deactivated successfully.
Oct  1 09:09:18 np0005464214 podman[78374]: 2025-10-01 13:09:18.659424286 +0000 UTC m=+0.068840305 container create 56771fbf481273f796e9bc6f9828f0d78ceca53ce73356ac21e7b13341b28063 (image=quay.io/ceph/ceph:v18, name=relaxed_pasteur, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2)
Oct  1 09:09:18 np0005464214 systemd[1]: Started libpod-conmon-56771fbf481273f796e9bc6f9828f0d78ceca53ce73356ac21e7b13341b28063.scope.
Oct  1 09:09:18 np0005464214 podman[78374]: 2025-10-01 13:09:18.619465458 +0000 UTC m=+0.028881547 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  1 09:09:18 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:09:18 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/57516a8beee4386c62188d7b8fa32e388ec055c5d15327cbb28d539507a3d2a8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 09:09:18 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/57516a8beee4386c62188d7b8fa32e388ec055c5d15327cbb28d539507a3d2a8/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct  1 09:09:18 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/57516a8beee4386c62188d7b8fa32e388ec055c5d15327cbb28d539507a3d2a8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 09:09:18 np0005464214 podman[78374]: 2025-10-01 13:09:18.755373341 +0000 UTC m=+0.164789350 container init 56771fbf481273f796e9bc6f9828f0d78ceca53ce73356ac21e7b13341b28063 (image=quay.io/ceph/ceph:v18, name=relaxed_pasteur, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef)
Oct  1 09:09:18 np0005464214 podman[78374]: 2025-10-01 13:09:18.765561044 +0000 UTC m=+0.174977093 container start 56771fbf481273f796e9bc6f9828f0d78ceca53ce73356ac21e7b13341b28063 (image=quay.io/ceph/ceph:v18, name=relaxed_pasteur, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Oct  1 09:09:18 np0005464214 podman[78374]: 2025-10-01 13:09:18.786023273 +0000 UTC m=+0.195439382 container attach 56771fbf481273f796e9bc6f9828f0d78ceca53ce73356ac21e7b13341b28063 (image=quay.io/ceph/ceph:v18, name=relaxed_pasteur, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct  1 09:09:19 np0005464214 ceph-mon[74802]: from='client.? 192.168.122.100:0/3755830514' entity='client.admin' 
Oct  1 09:09:19 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/dashboard/cluster/status}] v 0) v1
Oct  1 09:09:19 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1421745930' entity='client.admin' 
Oct  1 09:09:19 np0005464214 relaxed_pasteur[78390]: set mgr/dashboard/cluster/status
Oct  1 09:09:19 np0005464214 systemd[1]: libpod-56771fbf481273f796e9bc6f9828f0d78ceca53ce73356ac21e7b13341b28063.scope: Deactivated successfully.
Oct  1 09:09:19 np0005464214 podman[78416]: 2025-10-01 13:09:19.517925495 +0000 UTC m=+0.037153070 container died 56771fbf481273f796e9bc6f9828f0d78ceca53ce73356ac21e7b13341b28063 (image=quay.io/ceph/ceph:v18, name=relaxed_pasteur, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Oct  1 09:09:19 np0005464214 systemd[1]: var-lib-containers-storage-overlay-57516a8beee4386c62188d7b8fa32e388ec055c5d15327cbb28d539507a3d2a8-merged.mount: Deactivated successfully.
Oct  1 09:09:19 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v9: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct  1 09:09:19 np0005464214 podman[78416]: 2025-10-01 13:09:19.719048667 +0000 UTC m=+0.238276262 container remove 56771fbf481273f796e9bc6f9828f0d78ceca53ce73356ac21e7b13341b28063 (image=quay.io/ceph/ceph:v18, name=relaxed_pasteur, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct  1 09:09:19 np0005464214 systemd[1]: libpod-conmon-56771fbf481273f796e9bc6f9828f0d78ceca53ce73356ac21e7b13341b28063.scope: Deactivated successfully.
Oct  1 09:09:20 np0005464214 podman[78438]: 2025-10-01 13:09:20.001949773 +0000 UTC m=+0.084055458 container create b9ddd06d2b820716ef3c6e048f8fdb25d2df6a0f542a21d5d6b56484613df831 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_carson, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef)
Oct  1 09:09:20 np0005464214 podman[78438]: 2025-10-01 13:09:19.945091678 +0000 UTC m=+0.027197443 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 09:09:20 np0005464214 systemd[1]: Started libpod-conmon-b9ddd06d2b820716ef3c6e048f8fdb25d2df6a0f542a21d5d6b56484613df831.scope.
Oct  1 09:09:20 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:09:20 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/37eb9d3cf27ff8fabec7d6ab31f78aa1386ba3e39790bc60afc10175cdebc633/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 09:09:20 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/37eb9d3cf27ff8fabec7d6ab31f78aa1386ba3e39790bc60afc10175cdebc633/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 09:09:20 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/37eb9d3cf27ff8fabec7d6ab31f78aa1386ba3e39790bc60afc10175cdebc633/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 09:09:20 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/37eb9d3cf27ff8fabec7d6ab31f78aa1386ba3e39790bc60afc10175cdebc633/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 09:09:20 np0005464214 podman[78438]: 2025-10-01 13:09:20.121875658 +0000 UTC m=+0.203981393 container init b9ddd06d2b820716ef3c6e048f8fdb25d2df6a0f542a21d5d6b56484613df831 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_carson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 09:09:20 np0005464214 podman[78438]: 2025-10-01 13:09:20.129903032 +0000 UTC m=+0.212008727 container start b9ddd06d2b820716ef3c6e048f8fdb25d2df6a0f542a21d5d6b56484613df831 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_carson, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Oct  1 09:09:20 np0005464214 podman[78438]: 2025-10-01 13:09:20.225053051 +0000 UTC m=+0.307158776 container attach b9ddd06d2b820716ef3c6e048f8fdb25d2df6a0f542a21d5d6b56484613df831 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_carson, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef)
Oct  1 09:09:20 np0005464214 python3[78485]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid eb4b6ead-01d1-53b3-a52a-47dcc600555f -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set mgr mgr/cephadm/use_repo_digest false#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  1 09:09:20 np0005464214 podman[78486]: 2025-10-01 13:09:20.398170604 +0000 UTC m=+0.067478072 container create 066e47cc6896f6c9941afbdeb7d899a6b6ab92a413d17256fc18b5c130c3e81b (image=quay.io/ceph/ceph:v18, name=hopeful_chebyshev, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct  1 09:09:20 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:09:20 np0005464214 systemd[1]: Started libpod-conmon-066e47cc6896f6c9941afbdeb7d899a6b6ab92a413d17256fc18b5c130c3e81b.scope.
Oct  1 09:09:20 np0005464214 podman[78486]: 2025-10-01 13:09:20.363850904 +0000 UTC m=+0.033158402 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  1 09:09:20 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:09:20 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/286e1f62c187c916da046910a564c678b23a9913325fd8b771497893d3f42faf/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 09:09:20 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/286e1f62c187c916da046910a564c678b23a9913325fd8b771497893d3f42faf/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 09:09:20 np0005464214 ceph-mon[74802]: from='client.? 192.168.122.100:0/1421745930' entity='client.admin' 
Oct  1 09:09:20 np0005464214 podman[78486]: 2025-10-01 13:09:20.569043165 +0000 UTC m=+0.238350713 container init 066e47cc6896f6c9941afbdeb7d899a6b6ab92a413d17256fc18b5c130c3e81b (image=quay.io/ceph/ceph:v18, name=hopeful_chebyshev, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0)
Oct  1 09:09:20 np0005464214 podman[78486]: 2025-10-01 13:09:20.580466568 +0000 UTC m=+0.249774026 container start 066e47cc6896f6c9941afbdeb7d899a6b6ab92a413d17256fc18b5c130c3e81b (image=quay.io/ceph/ceph:v18, name=hopeful_chebyshev, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct  1 09:09:20 np0005464214 podman[78486]: 2025-10-01 13:09:20.596908869 +0000 UTC m=+0.266216417 container attach 066e47cc6896f6c9941afbdeb7d899a6b6ab92a413d17256fc18b5c130c3e81b (image=quay.io/ceph/ceph:v18, name=hopeful_chebyshev, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct  1 09:09:21 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/use_repo_digest}] v 0) v1
Oct  1 09:09:21 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/653286515' entity='client.admin' 
Oct  1 09:09:21 np0005464214 systemd[1]: libpod-066e47cc6896f6c9941afbdeb7d899a6b6ab92a413d17256fc18b5c130c3e81b.scope: Deactivated successfully.
Oct  1 09:09:21 np0005464214 podman[78486]: 2025-10-01 13:09:21.176019833 +0000 UTC m=+0.845327331 container died 066e47cc6896f6c9941afbdeb7d899a6b6ab92a413d17256fc18b5c130c3e81b (image=quay.io/ceph/ceph:v18, name=hopeful_chebyshev, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct  1 09:09:21 np0005464214 systemd[1]: var-lib-containers-storage-overlay-286e1f62c187c916da046910a564c678b23a9913325fd8b771497893d3f42faf-merged.mount: Deactivated successfully.
Oct  1 09:09:21 np0005464214 great_carson[78454]: [
Oct  1 09:09:21 np0005464214 great_carson[78454]:    {
Oct  1 09:09:21 np0005464214 great_carson[78454]:        "available": false,
Oct  1 09:09:21 np0005464214 great_carson[78454]:        "ceph_device": false,
Oct  1 09:09:21 np0005464214 great_carson[78454]:        "device_id": "QEMU_DVD-ROM_QM00001",
Oct  1 09:09:21 np0005464214 great_carson[78454]:        "lsm_data": {},
Oct  1 09:09:21 np0005464214 great_carson[78454]:        "lvs": [],
Oct  1 09:09:21 np0005464214 great_carson[78454]:        "path": "/dev/sr0",
Oct  1 09:09:21 np0005464214 great_carson[78454]:        "rejected_reasons": [
Oct  1 09:09:21 np0005464214 great_carson[78454]:            "Has a FileSystem",
Oct  1 09:09:21 np0005464214 great_carson[78454]:            "Insufficient space (<5GB)"
Oct  1 09:09:21 np0005464214 great_carson[78454]:        ],
Oct  1 09:09:21 np0005464214 great_carson[78454]:        "sys_api": {
Oct  1 09:09:21 np0005464214 great_carson[78454]:            "actuators": null,
Oct  1 09:09:21 np0005464214 great_carson[78454]:            "device_nodes": "sr0",
Oct  1 09:09:21 np0005464214 great_carson[78454]:            "devname": "sr0",
Oct  1 09:09:21 np0005464214 great_carson[78454]:            "human_readable_size": "482.00 KB",
Oct  1 09:09:21 np0005464214 great_carson[78454]:            "id_bus": "ata",
Oct  1 09:09:21 np0005464214 great_carson[78454]:            "model": "QEMU DVD-ROM",
Oct  1 09:09:21 np0005464214 great_carson[78454]:            "nr_requests": "2",
Oct  1 09:09:21 np0005464214 great_carson[78454]:            "parent": "/dev/sr0",
Oct  1 09:09:21 np0005464214 great_carson[78454]:            "partitions": {},
Oct  1 09:09:21 np0005464214 great_carson[78454]:            "path": "/dev/sr0",
Oct  1 09:09:21 np0005464214 great_carson[78454]:            "removable": "1",
Oct  1 09:09:21 np0005464214 great_carson[78454]:            "rev": "2.5+",
Oct  1 09:09:21 np0005464214 great_carson[78454]:            "ro": "0",
Oct  1 09:09:21 np0005464214 great_carson[78454]:            "rotational": "0",
Oct  1 09:09:21 np0005464214 great_carson[78454]:            "sas_address": "",
Oct  1 09:09:21 np0005464214 great_carson[78454]:            "sas_device_handle": "",
Oct  1 09:09:21 np0005464214 great_carson[78454]:            "scheduler_mode": "mq-deadline",
Oct  1 09:09:21 np0005464214 great_carson[78454]:            "sectors": 0,
Oct  1 09:09:21 np0005464214 great_carson[78454]:            "sectorsize": "2048",
Oct  1 09:09:21 np0005464214 great_carson[78454]:            "size": 493568.0,
Oct  1 09:09:21 np0005464214 great_carson[78454]:            "support_discard": "2048",
Oct  1 09:09:21 np0005464214 great_carson[78454]:            "type": "disk",
Oct  1 09:09:21 np0005464214 great_carson[78454]:            "vendor": "QEMU"
Oct  1 09:09:21 np0005464214 great_carson[78454]:        }
Oct  1 09:09:21 np0005464214 great_carson[78454]:    }
Oct  1 09:09:21 np0005464214 great_carson[78454]: ]
Oct  1 09:09:21 np0005464214 systemd[1]: libpod-b9ddd06d2b820716ef3c6e048f8fdb25d2df6a0f542a21d5d6b56484613df831.scope: Deactivated successfully.
Oct  1 09:09:21 np0005464214 systemd[1]: libpod-b9ddd06d2b820716ef3c6e048f8fdb25d2df6a0f542a21d5d6b56484613df831.scope: Consumed 1.388s CPU time.
Oct  1 09:09:21 np0005464214 podman[78486]: 2025-10-01 13:09:21.532464643 +0000 UTC m=+1.201772101 container remove 066e47cc6896f6c9941afbdeb7d899a6b6ab92a413d17256fc18b5c130c3e81b (image=quay.io/ceph/ceph:v18, name=hopeful_chebyshev, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  1 09:09:21 np0005464214 systemd[1]: libpod-conmon-066e47cc6896f6c9941afbdeb7d899a6b6ab92a413d17256fc18b5c130c3e81b.scope: Deactivated successfully.
Oct  1 09:09:21 np0005464214 ceph-mon[74802]: from='client.? 192.168.122.100:0/653286515' entity='client.admin' 
Oct  1 09:09:21 np0005464214 podman[78438]: 2025-10-01 13:09:21.548395858 +0000 UTC m=+1.630501573 container died b9ddd06d2b820716ef3c6e048f8fdb25d2df6a0f542a21d5d6b56484613df831 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_carson, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct  1 09:09:21 np0005464214 systemd[1]: var-lib-containers-storage-overlay-37eb9d3cf27ff8fabec7d6ab31f78aa1386ba3e39790bc60afc10175cdebc633-merged.mount: Deactivated successfully.
Oct  1 09:09:21 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v10: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct  1 09:09:21 np0005464214 podman[80082]: 2025-10-01 13:09:21.85323766 +0000 UTC m=+0.345547304 container remove b9ddd06d2b820716ef3c6e048f8fdb25d2df6a0f542a21d5d6b56484613df831 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_carson, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct  1 09:09:21 np0005464214 systemd[1]: libpod-conmon-b9ddd06d2b820716ef3c6e048f8fdb25d2df6a0f542a21d5d6b56484613df831.scope: Deactivated successfully.
Oct  1 09:09:21 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  1 09:09:21 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:09:21 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  1 09:09:21 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:09:21 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  1 09:09:21 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:09:21 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  1 09:09:22 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:09:22 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Oct  1 09:09:22 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct  1 09:09:22 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  1 09:09:22 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  1 09:09:22 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  1 09:09:22 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  1 09:09:22 np0005464214 ceph-mgr[75103]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.conf
Oct  1 09:09:22 np0005464214 ceph-mgr[75103]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.conf
Oct  1 09:09:22 np0005464214 ansible-async_wrapper.py[80418]: Invoked with j18096652907 30 /home/zuul/.ansible/tmp/ansible-tmp-1759324161.9630404-33747-74233129533190/AnsiballZ_command.py _
Oct  1 09:09:22 np0005464214 ansible-async_wrapper.py[80474]: Starting module and watcher
Oct  1 09:09:22 np0005464214 ansible-async_wrapper.py[80474]: Start watching 80475 (30)
Oct  1 09:09:22 np0005464214 ansible-async_wrapper.py[80475]: Start module (80475)
Oct  1 09:09:22 np0005464214 ansible-async_wrapper.py[80418]: Return async_wrapper task started.
Oct  1 09:09:22 np0005464214 python3[80477]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid eb4b6ead-01d1-53b3-a52a-47dcc600555f -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  1 09:09:22 np0005464214 podman[80554]: 2025-10-01 13:09:22.958582111 +0000 UTC m=+0.098062202 container create c10b396bbc6b8b9d90ad7b9284dd4ff44051e2701f28347fdcf268e960373cd4 (image=quay.io/ceph/ceph:v18, name=tender_mirzakhani, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct  1 09:09:22 np0005464214 podman[80554]: 2025-10-01 13:09:22.881097772 +0000 UTC m=+0.020577903 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  1 09:09:23 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:09:23 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:09:23 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:09:23 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:09:23 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct  1 09:09:23 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  1 09:09:23 np0005464214 ceph-mon[74802]: Updating compute-0:/etc/ceph/ceph.conf
Oct  1 09:09:23 np0005464214 systemd[1]: Started libpod-conmon-c10b396bbc6b8b9d90ad7b9284dd4ff44051e2701f28347fdcf268e960373cd4.scope.
Oct  1 09:09:23 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:09:23 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bb42b8706fcf0851bfa1a1259c674872c31d794584243019a480605bed6b2994/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 09:09:23 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bb42b8706fcf0851bfa1a1259c674872c31d794584243019a480605bed6b2994/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 09:09:23 np0005464214 podman[80554]: 2025-10-01 13:09:23.10007228 +0000 UTC m=+0.239552401 container init c10b396bbc6b8b9d90ad7b9284dd4ff44051e2701f28347fdcf268e960373cd4 (image=quay.io/ceph/ceph:v18, name=tender_mirzakhani, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 09:09:23 np0005464214 podman[80554]: 2025-10-01 13:09:23.106643949 +0000 UTC m=+0.246124050 container start c10b396bbc6b8b9d90ad7b9284dd4ff44051e2701f28347fdcf268e960373cd4 (image=quay.io/ceph/ceph:v18, name=tender_mirzakhani, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 09:09:23 np0005464214 podman[80554]: 2025-10-01 13:09:23.114807068 +0000 UTC m=+0.254287169 container attach c10b396bbc6b8b9d90ad7b9284dd4ff44051e2701f28347fdcf268e960373cd4 (image=quay.io/ceph/ceph:v18, name=tender_mirzakhani, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507)
Oct  1 09:09:23 np0005464214 ceph-mgr[75103]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/config/ceph.conf
Oct  1 09:09:23 np0005464214 ceph-mgr[75103]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/config/ceph.conf
Oct  1 09:09:23 np0005464214 ceph-mgr[75103]: log_channel(audit) log [DBG] : from='client.14174 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Oct  1 09:09:23 np0005464214 tender_mirzakhani[80664]: 
Oct  1 09:09:23 np0005464214 tender_mirzakhani[80664]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Oct  1 09:09:23 np0005464214 systemd[1]: libpod-c10b396bbc6b8b9d90ad7b9284dd4ff44051e2701f28347fdcf268e960373cd4.scope: Deactivated successfully.
Oct  1 09:09:23 np0005464214 podman[80554]: 2025-10-01 13:09:23.644588736 +0000 UTC m=+0.784068847 container died c10b396bbc6b8b9d90ad7b9284dd4ff44051e2701f28347fdcf268e960373cd4 (image=quay.io/ceph/ceph:v18, name=tender_mirzakhani, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 09:09:23 np0005464214 systemd[1]: var-lib-containers-storage-overlay-bb42b8706fcf0851bfa1a1259c674872c31d794584243019a480605bed6b2994-merged.mount: Deactivated successfully.
Oct  1 09:09:23 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v11: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct  1 09:09:23 np0005464214 podman[80554]: 2025-10-01 13:09:23.836235767 +0000 UTC m=+0.975715868 container remove c10b396bbc6b8b9d90ad7b9284dd4ff44051e2701f28347fdcf268e960373cd4 (image=quay.io/ceph/ceph:v18, name=tender_mirzakhani, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Oct  1 09:09:23 np0005464214 ansible-async_wrapper.py[80475]: Module complete (80475)
Oct  1 09:09:23 np0005464214 systemd[1]: libpod-conmon-c10b396bbc6b8b9d90ad7b9284dd4ff44051e2701f28347fdcf268e960373cd4.scope: Deactivated successfully.
Oct  1 09:09:24 np0005464214 ceph-mon[74802]: Updating compute-0:/var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/config/ceph.conf
Oct  1 09:09:24 np0005464214 ceph-mgr[75103]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Oct  1 09:09:24 np0005464214 ceph-mgr[75103]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Oct  1 09:09:24 np0005464214 python3[81170]: ansible-ansible.legacy.async_status Invoked with jid=j18096652907.80418 mode=status _async_dir=/root/.ansible_async
Oct  1 09:09:24 np0005464214 python3[81347]: ansible-ansible.legacy.async_status Invoked with jid=j18096652907.80418 mode=cleanup _async_dir=/root/.ansible_async
Oct  1 09:09:25 np0005464214 python3[81593]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/specs/ceph_spec.yaml follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  1 09:09:25 np0005464214 ceph-mon[74802]: Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Oct  1 09:09:25 np0005464214 ceph-mgr[75103]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/config/ceph.client.admin.keyring
Oct  1 09:09:25 np0005464214 ceph-mgr[75103]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/config/ceph.client.admin.keyring
Oct  1 09:09:25 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:09:25 np0005464214 python3[81831]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid eb4b6ead-01d1-53b3-a52a-47dcc600555f -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  1 09:09:25 np0005464214 podman[81905]: 2025-10-01 13:09:25.698965969 +0000 UTC m=+0.099937543 container create 48f1db6ef89968be087023005ecf8c4f393921406ad3507da6fa67429b7b4b7c (image=quay.io/ceph/ceph:v18, name=nice_dijkstra, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 09:09:25 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v12: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct  1 09:09:25 np0005464214 podman[81905]: 2025-10-01 13:09:25.63567104 +0000 UTC m=+0.036642634 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  1 09:09:25 np0005464214 systemd[1]: Started libpod-conmon-48f1db6ef89968be087023005ecf8c4f393921406ad3507da6fa67429b7b4b7c.scope.
Oct  1 09:09:25 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:09:25 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0135342bfa440cacd91b0997364837fc7e2077c2cafd0623b3f38c123badf5da/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct  1 09:09:25 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0135342bfa440cacd91b0997364837fc7e2077c2cafd0623b3f38c123badf5da/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 09:09:25 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0135342bfa440cacd91b0997364837fc7e2077c2cafd0623b3f38c123badf5da/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 09:09:25 np0005464214 podman[81905]: 2025-10-01 13:09:25.826108482 +0000 UTC m=+0.227080076 container init 48f1db6ef89968be087023005ecf8c4f393921406ad3507da6fa67429b7b4b7c (image=quay.io/ceph/ceph:v18, name=nice_dijkstra, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 09:09:25 np0005464214 podman[81905]: 2025-10-01 13:09:25.836892024 +0000 UTC m=+0.237863608 container start 48f1db6ef89968be087023005ecf8c4f393921406ad3507da6fa67429b7b4b7c (image=quay.io/ceph/ceph:v18, name=nice_dijkstra, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 09:09:25 np0005464214 podman[81905]: 2025-10-01 13:09:25.854205374 +0000 UTC m=+0.255176978 container attach 48f1db6ef89968be087023005ecf8c4f393921406ad3507da6fa67429b7b4b7c (image=quay.io/ceph/ceph:v18, name=nice_dijkstra, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Oct  1 09:09:26 np0005464214 ceph-mon[74802]: Updating compute-0:/var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/config/ceph.client.admin.keyring
Oct  1 09:09:26 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  1 09:09:26 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:09:26 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  1 09:09:26 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:09:26 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  1 09:09:26 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:09:26 np0005464214 ceph-mgr[75103]: [progress INFO root] update: starting ev b537f5e1-c19e-4fd8-ab75-e750d5a49393 (Updating crash deployment (+1 -> 1))
Oct  1 09:09:26 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0) v1
Oct  1 09:09:26 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Oct  1 09:09:26 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Oct  1 09:09:26 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  1 09:09:26 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  1 09:09:26 np0005464214 ceph-mgr[75103]: [cephadm INFO cephadm.serve] Deploying daemon crash.compute-0 on compute-0
Oct  1 09:09:26 np0005464214 ceph-mgr[75103]: log_channel(cephadm) log [INF] : Deploying daemon crash.compute-0 on compute-0
Oct  1 09:09:26 np0005464214 ceph-mgr[75103]: log_channel(audit) log [DBG] : from='client.14176 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Oct  1 09:09:26 np0005464214 nice_dijkstra[81987]: 
Oct  1 09:09:26 np0005464214 nice_dijkstra[81987]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Oct  1 09:09:26 np0005464214 systemd[1]: libpod-48f1db6ef89968be087023005ecf8c4f393921406ad3507da6fa67429b7b4b7c.scope: Deactivated successfully.
Oct  1 09:09:26 np0005464214 podman[81905]: 2025-10-01 13:09:26.377702803 +0000 UTC m=+0.778674427 container died 48f1db6ef89968be087023005ecf8c4f393921406ad3507da6fa67429b7b4b7c (image=quay.io/ceph/ceph:v18, name=nice_dijkstra, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3)
Oct  1 09:09:26 np0005464214 systemd[1]: var-lib-containers-storage-overlay-0135342bfa440cacd91b0997364837fc7e2077c2cafd0623b3f38c123badf5da-merged.mount: Deactivated successfully.
Oct  1 09:09:26 np0005464214 podman[81905]: 2025-10-01 13:09:26.441924851 +0000 UTC m=+0.842896435 container remove 48f1db6ef89968be087023005ecf8c4f393921406ad3507da6fa67429b7b4b7c (image=quay.io/ceph/ceph:v18, name=nice_dijkstra, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 09:09:26 np0005464214 systemd[1]: libpod-conmon-48f1db6ef89968be087023005ecf8c4f393921406ad3507da6fa67429b7b4b7c.scope: Deactivated successfully.
Oct  1 09:09:26 np0005464214 python3[82342]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid eb4b6ead-01d1-53b3-a52a-47dcc600555f -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set global log_to_file true _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  1 09:09:26 np0005464214 podman[82369]: 2025-10-01 13:09:26.989269797 +0000 UTC m=+0.044657268 container create 79da8d932ac619d4ba612141b29ff648e9eead1161679c51b55a1d8bd9c321d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_williams, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct  1 09:09:27 np0005464214 systemd[1]: Started libpod-conmon-79da8d932ac619d4ba612141b29ff648e9eead1161679c51b55a1d8bd9c321d2.scope.
Oct  1 09:09:27 np0005464214 podman[82383]: 2025-10-01 13:09:27.037547809 +0000 UTC m=+0.046454345 container create 7fd345efc2b004c4bbf5c4a5a4cccbddaf76124575e950475e2e084ca750df62 (image=quay.io/ceph/ceph:v18, name=condescending_noyce, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Oct  1 09:09:27 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:09:27 np0005464214 podman[82369]: 2025-10-01 13:09:27.055442106 +0000 UTC m=+0.110829597 container init 79da8d932ac619d4ba612141b29ff648e9eead1161679c51b55a1d8bd9c321d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_williams, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  1 09:09:27 np0005464214 systemd[1]: Started libpod-conmon-7fd345efc2b004c4bbf5c4a5a4cccbddaf76124575e950475e2e084ca750df62.scope.
Oct  1 09:09:27 np0005464214 podman[82369]: 2025-10-01 13:09:27.061914042 +0000 UTC m=+0.117301513 container start 79da8d932ac619d4ba612141b29ff648e9eead1161679c51b55a1d8bd9c321d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_williams, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Oct  1 09:09:27 np0005464214 infallible_williams[82398]: 167 167
Oct  1 09:09:27 np0005464214 podman[82369]: 2025-10-01 13:09:27.064399921 +0000 UTC m=+0.119787392 container attach 79da8d932ac619d4ba612141b29ff648e9eead1161679c51b55a1d8bd9c321d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_williams, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct  1 09:09:27 np0005464214 podman[82369]: 2025-10-01 13:09:26.970187852 +0000 UTC m=+0.025575353 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 09:09:27 np0005464214 podman[82369]: 2025-10-01 13:09:27.06561947 +0000 UTC m=+0.121006941 container died 79da8d932ac619d4ba612141b29ff648e9eead1161679c51b55a1d8bd9c321d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_williams, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 09:09:27 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:09:27 np0005464214 systemd[1]: libpod-79da8d932ac619d4ba612141b29ff648e9eead1161679c51b55a1d8bd9c321d2.scope: Deactivated successfully.
Oct  1 09:09:27 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/40c534a2a3892954c2e12b5b230d8a88dc0e509f0e6432b3a629a6f50442416e/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct  1 09:09:27 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/40c534a2a3892954c2e12b5b230d8a88dc0e509f0e6432b3a629a6f50442416e/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 09:09:27 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/40c534a2a3892954c2e12b5b230d8a88dc0e509f0e6432b3a629a6f50442416e/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 09:09:27 np0005464214 podman[82383]: 2025-10-01 13:09:27.08390594 +0000 UTC m=+0.092812496 container init 7fd345efc2b004c4bbf5c4a5a4cccbddaf76124575e950475e2e084ca750df62 (image=quay.io/ceph/ceph:v18, name=condescending_noyce, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 09:09:27 np0005464214 systemd[1]: var-lib-containers-storage-overlay-d66ada92d76aee509fa6ed157ad1a8177b1ed36deb14923fdb6e851a9145a4e4-merged.mount: Deactivated successfully.
Oct  1 09:09:27 np0005464214 podman[82383]: 2025-10-01 13:09:27.091056796 +0000 UTC m=+0.099963332 container start 7fd345efc2b004c4bbf5c4a5a4cccbddaf76124575e950475e2e084ca750df62 (image=quay.io/ceph/ceph:v18, name=condescending_noyce, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Oct  1 09:09:27 np0005464214 podman[82369]: 2025-10-01 13:09:27.101752526 +0000 UTC m=+0.157139997 container remove 79da8d932ac619d4ba612141b29ff648e9eead1161679c51b55a1d8bd9c321d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_williams, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 09:09:27 np0005464214 podman[82383]: 2025-10-01 13:09:27.013604359 +0000 UTC m=+0.022510915 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  1 09:09:27 np0005464214 podman[82383]: 2025-10-01 13:09:27.114268763 +0000 UTC m=+0.123175299 container attach 7fd345efc2b004c4bbf5c4a5a4cccbddaf76124575e950475e2e084ca750df62 (image=quay.io/ceph/ceph:v18, name=condescending_noyce, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 09:09:27 np0005464214 systemd[1]: libpod-conmon-79da8d932ac619d4ba612141b29ff648e9eead1161679c51b55a1d8bd9c321d2.scope: Deactivated successfully.
Oct  1 09:09:27 np0005464214 systemd[1]: Reloading.
Oct  1 09:09:27 np0005464214 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  1 09:09:27 np0005464214 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  1 09:09:27 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:09:27 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:09:27 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:09:27 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Oct  1 09:09:27 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Oct  1 09:09:27 np0005464214 ceph-mon[74802]: Deploying daemon crash.compute-0 on compute-0
Oct  1 09:09:27 np0005464214 systemd[1]: Reloading.
Oct  1 09:09:27 np0005464214 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  1 09:09:27 np0005464214 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  1 09:09:27 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=log_to_file}] v 0) v1
Oct  1 09:09:27 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/197864987' entity='client.admin' 
Oct  1 09:09:27 np0005464214 podman[82383]: 2025-10-01 13:09:27.637355399 +0000 UTC m=+0.646261985 container died 7fd345efc2b004c4bbf5c4a5a4cccbddaf76124575e950475e2e084ca750df62 (image=quay.io/ceph/ceph:v18, name=condescending_noyce, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 09:09:27 np0005464214 ansible-async_wrapper.py[80474]: Done in kid B.
Oct  1 09:09:27 np0005464214 systemd[1]: libpod-7fd345efc2b004c4bbf5c4a5a4cccbddaf76124575e950475e2e084ca750df62.scope: Deactivated successfully.
Oct  1 09:09:27 np0005464214 systemd[1]: var-lib-containers-storage-overlay-40c534a2a3892954c2e12b5b230d8a88dc0e509f0e6432b3a629a6f50442416e-merged.mount: Deactivated successfully.
Oct  1 09:09:27 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v13: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct  1 09:09:27 np0005464214 systemd[1]: Starting Ceph crash.compute-0 for eb4b6ead-01d1-53b3-a52a-47dcc600555f...
Oct  1 09:09:27 np0005464214 podman[82383]: 2025-10-01 13:09:27.733746648 +0000 UTC m=+0.742653184 container remove 7fd345efc2b004c4bbf5c4a5a4cccbddaf76124575e950475e2e084ca750df62 (image=quay.io/ceph/ceph:v18, name=condescending_noyce, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True)
Oct  1 09:09:27 np0005464214 systemd[1]: libpod-conmon-7fd345efc2b004c4bbf5c4a5a4cccbddaf76124575e950475e2e084ca750df62.scope: Deactivated successfully.
Oct  1 09:09:28 np0005464214 podman[82602]: 2025-10-01 13:09:28.000697177 +0000 UTC m=+0.057530976 container create 0abeef01559daebfdceaeda5aaeac65b95dae3bdfefab887df54718451fda229 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-crash-compute-0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Oct  1 09:09:28 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0a1c7d5717c682ede20a896c95a4dfa8369d903589dee8ccb33f41d34a51d91/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 09:09:28 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0a1c7d5717c682ede20a896c95a4dfa8369d903589dee8ccb33f41d34a51d91/merged/etc/ceph/ceph.client.crash.compute-0.keyring supports timestamps until 2038 (0x7fffffff)
Oct  1 09:09:28 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0a1c7d5717c682ede20a896c95a4dfa8369d903589dee8ccb33f41d34a51d91/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 09:09:28 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0a1c7d5717c682ede20a896c95a4dfa8369d903589dee8ccb33f41d34a51d91/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 09:09:28 np0005464214 podman[82602]: 2025-10-01 13:09:27.984632388 +0000 UTC m=+0.041466187 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 09:09:28 np0005464214 podman[82602]: 2025-10-01 13:09:28.080209041 +0000 UTC m=+0.137042860 container init 0abeef01559daebfdceaeda5aaeac65b95dae3bdfefab887df54718451fda229 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-crash-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 09:09:28 np0005464214 podman[82602]: 2025-10-01 13:09:28.086068227 +0000 UTC m=+0.142902026 container start 0abeef01559daebfdceaeda5aaeac65b95dae3bdfefab887df54718451fda229 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-crash-compute-0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct  1 09:09:28 np0005464214 bash[82602]: 0abeef01559daebfdceaeda5aaeac65b95dae3bdfefab887df54718451fda229
Oct  1 09:09:28 np0005464214 systemd[1]: Started Ceph crash.compute-0 for eb4b6ead-01d1-53b3-a52a-47dcc600555f.
Oct  1 09:09:28 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  1 09:09:28 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:09:28 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  1 09:09:28 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:09:28 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0) v1
Oct  1 09:09:28 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:09:28 np0005464214 ceph-mgr[75103]: [progress INFO root] complete: finished ev b537f5e1-c19e-4fd8-ab75-e750d5a49393 (Updating crash deployment (+1 -> 1))
Oct  1 09:09:28 np0005464214 ceph-mgr[75103]: [progress INFO root] Completed event b537f5e1-c19e-4fd8-ab75-e750d5a49393 (Updating crash deployment (+1 -> 1)) in 2 seconds
Oct  1 09:09:28 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0) v1
Oct  1 09:09:28 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:09:28 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev a73487e5-39de-4df2-b136-c7a6912a3a4b does not exist
Oct  1 09:09:28 np0005464214 python3[82610]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid eb4b6ead-01d1-53b3-a52a-47dcc600555f -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set global mon_cluster_log_to_file true _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  1 09:09:28 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0) v1
Oct  1 09:09:28 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:09:28 np0005464214 ceph-mgr[75103]: [progress INFO root] update: starting ev 1f0e416d-5877-454d-9b58-832a4a0a9061 (Updating mgr deployment (+1 -> 2))
Oct  1 09:09:28 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-0.hktmnz", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0) v1
Oct  1 09:09:28 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.hktmnz", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Oct  1 09:09:28 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.hktmnz", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Oct  1 09:09:28 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Oct  1 09:09:28 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "mgr services"}]: dispatch
Oct  1 09:09:28 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  1 09:09:28 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  1 09:09:28 np0005464214 ceph-mgr[75103]: [cephadm INFO cephadm.serve] Deploying daemon mgr.compute-0.hktmnz on compute-0
Oct  1 09:09:28 np0005464214 ceph-mgr[75103]: log_channel(cephadm) log [INF] : Deploying daemon mgr.compute-0.hktmnz on compute-0
Oct  1 09:09:28 np0005464214 podman[82624]: 2025-10-01 13:09:28.223008241 +0000 UTC m=+0.041937711 container create 3a6c5033a5d7ec791cc2b54dae309a517e71741e7d20ce323ccae0bdd146c9f5 (image=quay.io/ceph/ceph:v18, name=relaxed_elgamal, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct  1 09:09:28 np0005464214 systemd[1]: Started libpod-conmon-3a6c5033a5d7ec791cc2b54dae309a517e71741e7d20ce323ccae0bdd146c9f5.scope.
Oct  1 09:09:28 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:09:28 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f549c2d72000b356cbe21a24713332eeb27b836252797f0ca5141db864c8d42/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 09:09:28 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f549c2d72000b356cbe21a24713332eeb27b836252797f0ca5141db864c8d42/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 09:09:28 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f549c2d72000b356cbe21a24713332eeb27b836252797f0ca5141db864c8d42/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct  1 09:09:28 np0005464214 podman[82624]: 2025-10-01 13:09:28.30112292 +0000 UTC m=+0.120052410 container init 3a6c5033a5d7ec791cc2b54dae309a517e71741e7d20ce323ccae0bdd146c9f5 (image=quay.io/ceph/ceph:v18, name=relaxed_elgamal, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 09:09:28 np0005464214 podman[82624]: 2025-10-01 13:09:28.208429669 +0000 UTC m=+0.027359159 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  1 09:09:28 np0005464214 ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-crash-compute-0[82619]: INFO:ceph-crash:pinging cluster to exercise our key
Oct  1 09:09:28 np0005464214 podman[82624]: 2025-10-01 13:09:28.309803695 +0000 UTC m=+0.128733165 container start 3a6c5033a5d7ec791cc2b54dae309a517e71741e7d20ce323ccae0bdd146c9f5 (image=quay.io/ceph/ceph:v18, name=relaxed_elgamal, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct  1 09:09:28 np0005464214 podman[82624]: 2025-10-01 13:09:28.313373889 +0000 UTC m=+0.132303379 container attach 3a6c5033a5d7ec791cc2b54dae309a517e71741e7d20ce323ccae0bdd146c9f5 (image=quay.io/ceph/ceph:v18, name=relaxed_elgamal, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Oct  1 09:09:28 np0005464214 ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-crash-compute-0[82619]: 2025-10-01T13:09:28.474+0000 7f10d4f50640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Oct  1 09:09:28 np0005464214 ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-crash-compute-0[82619]: 2025-10-01T13:09:28.474+0000 7f10d4f50640 -1 AuthRegistry(0x7f10d0066fe0) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Oct  1 09:09:28 np0005464214 ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-crash-compute-0[82619]: 2025-10-01T13:09:28.476+0000 7f10d4f50640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Oct  1 09:09:28 np0005464214 ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-crash-compute-0[82619]: 2025-10-01T13:09:28.476+0000 7f10d4f50640 -1 AuthRegistry(0x7f10d4f4f000) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Oct  1 09:09:28 np0005464214 ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-crash-compute-0[82619]: 2025-10-01T13:09:28.477+0000 7f10ce575640 -1 monclient(hunting): handle_auth_bad_method server allowed_methods [2] but i only support [1]
Oct  1 09:09:28 np0005464214 ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-crash-compute-0[82619]: 2025-10-01T13:09:28.477+0000 7f10d4f50640 -1 monclient: authenticate NOTE: no keyring found; disabled cephx authentication
Oct  1 09:09:28 np0005464214 ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-crash-compute-0[82619]: [errno 13] RADOS permission denied (error connecting to the cluster)
Oct  1 09:09:28 np0005464214 ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-crash-compute-0[82619]: INFO:ceph-crash:monitoring path /var/lib/ceph/crash, delay 600s
Oct  1 09:09:28 np0005464214 ceph-mon[74802]: from='client.? 192.168.122.100:0/197864987' entity='client.admin' 
Oct  1 09:09:28 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:09:28 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:09:28 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:09:28 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:09:28 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:09:28 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.hktmnz", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Oct  1 09:09:28 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.hktmnz", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Oct  1 09:09:28 np0005464214 ceph-mon[74802]: Deploying daemon mgr.compute-0.hktmnz on compute-0
Oct  1 09:09:28 np0005464214 podman[82814]: 2025-10-01 13:09:28.719773973 +0000 UTC m=+0.035415435 container create 472a2c2ff36b238ab2db7bfb182e97c9de45991fd3f00fb6a6f837bfe150392c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_chatterjee, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Oct  1 09:09:28 np0005464214 systemd[1]: Started libpod-conmon-472a2c2ff36b238ab2db7bfb182e97c9de45991fd3f00fb6a6f837bfe150392c.scope.
Oct  1 09:09:28 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:09:28 np0005464214 podman[82814]: 2025-10-01 13:09:28.784429595 +0000 UTC m=+0.100071087 container init 472a2c2ff36b238ab2db7bfb182e97c9de45991fd3f00fb6a6f837bfe150392c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_chatterjee, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 09:09:28 np0005464214 podman[82814]: 2025-10-01 13:09:28.794480583 +0000 UTC m=+0.110122035 container start 472a2c2ff36b238ab2db7bfb182e97c9de45991fd3f00fb6a6f837bfe150392c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_chatterjee, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 09:09:28 np0005464214 angry_chatterjee[82831]: 167 167
Oct  1 09:09:28 np0005464214 systemd[1]: libpod-472a2c2ff36b238ab2db7bfb182e97c9de45991fd3f00fb6a6f837bfe150392c.scope: Deactivated successfully.
Oct  1 09:09:28 np0005464214 podman[82814]: 2025-10-01 13:09:28.797957324 +0000 UTC m=+0.113598816 container attach 472a2c2ff36b238ab2db7bfb182e97c9de45991fd3f00fb6a6f837bfe150392c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_chatterjee, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Oct  1 09:09:28 np0005464214 conmon[82831]: conmon 472a2c2ff36b238ab2db <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-472a2c2ff36b238ab2db7bfb182e97c9de45991fd3f00fb6a6f837bfe150392c.scope/container/memory.events
Oct  1 09:09:28 np0005464214 podman[82814]: 2025-10-01 13:09:28.799478102 +0000 UTC m=+0.115119604 container died 472a2c2ff36b238ab2db7bfb182e97c9de45991fd3f00fb6a6f837bfe150392c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_chatterjee, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Oct  1 09:09:28 np0005464214 podman[82814]: 2025-10-01 13:09:28.704085825 +0000 UTC m=+0.019727307 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 09:09:28 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mon_cluster_log_to_file}] v 0) v1
Oct  1 09:09:28 np0005464214 systemd[1]: var-lib-containers-storage-overlay-e5f9348c047441dcefd72c59677954f52992c61bee3293388ccba0c8b726b579-merged.mount: Deactivated successfully.
Oct  1 09:09:28 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4066853649' entity='client.admin' 
Oct  1 09:09:28 np0005464214 podman[82814]: 2025-10-01 13:09:28.844948155 +0000 UTC m=+0.160589617 container remove 472a2c2ff36b238ab2db7bfb182e97c9de45991fd3f00fb6a6f837bfe150392c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_chatterjee, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 09:09:28 np0005464214 systemd[1]: libpod-3a6c5033a5d7ec791cc2b54dae309a517e71741e7d20ce323ccae0bdd146c9f5.scope: Deactivated successfully.
Oct  1 09:09:28 np0005464214 podman[82624]: 2025-10-01 13:09:28.853576509 +0000 UTC m=+0.672506379 container died 3a6c5033a5d7ec791cc2b54dae309a517e71741e7d20ce323ccae0bdd146c9f5 (image=quay.io/ceph/ceph:v18, name=relaxed_elgamal, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 09:09:28 np0005464214 systemd[1]: libpod-conmon-472a2c2ff36b238ab2db7bfb182e97c9de45991fd3f00fb6a6f837bfe150392c.scope: Deactivated successfully.
Oct  1 09:09:28 np0005464214 systemd[1]: var-lib-containers-storage-overlay-6f549c2d72000b356cbe21a24713332eeb27b836252797f0ca5141db864c8d42-merged.mount: Deactivated successfully.
Oct  1 09:09:28 np0005464214 podman[82624]: 2025-10-01 13:09:28.899914798 +0000 UTC m=+0.718844268 container remove 3a6c5033a5d7ec791cc2b54dae309a517e71741e7d20ce323ccae0bdd146c9f5 (image=quay.io/ceph/ceph:v18, name=relaxed_elgamal, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct  1 09:09:28 np0005464214 systemd[1]: Reloading.
Oct  1 09:09:28 np0005464214 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  1 09:09:28 np0005464214 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  1 09:09:29 np0005464214 systemd[1]: libpod-conmon-3a6c5033a5d7ec791cc2b54dae309a517e71741e7d20ce323ccae0bdd146c9f5.scope: Deactivated successfully.
Oct  1 09:09:29 np0005464214 systemd[1]: Reloading.
Oct  1 09:09:29 np0005464214 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  1 09:09:29 np0005464214 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  1 09:09:29 np0005464214 python3[82931]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid eb4b6ead-01d1-53b3-a52a-47dcc600555f -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd set-require-min-compat-client mimic#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  1 09:09:29 np0005464214 podman[82966]: 2025-10-01 13:09:29.382192291 +0000 UTC m=+0.046352371 container create 42732c066729e7bc0544d0267016d1ec5deee858fce78c5b6bb638309281df26 (image=quay.io/ceph/ceph:v18, name=distracted_gates, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2)
Oct  1 09:09:29 np0005464214 systemd[1]: Started libpod-conmon-42732c066729e7bc0544d0267016d1ec5deee858fce78c5b6bb638309281df26.scope.
Oct  1 09:09:29 np0005464214 systemd[1]: Starting Ceph mgr.compute-0.hktmnz for eb4b6ead-01d1-53b3-a52a-47dcc600555f...
Oct  1 09:09:29 np0005464214 podman[82966]: 2025-10-01 13:09:29.360228624 +0000 UTC m=+0.024388704 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  1 09:09:29 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:09:29 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/825b653bd8c30adbaf5f3ce8f9fb29ac941ad66d9a07af7a269d7f63207fb942/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 09:09:29 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/825b653bd8c30adbaf5f3ce8f9fb29ac941ad66d9a07af7a269d7f63207fb942/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 09:09:29 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/825b653bd8c30adbaf5f3ce8f9fb29ac941ad66d9a07af7a269d7f63207fb942/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct  1 09:09:29 np0005464214 podman[82966]: 2025-10-01 13:09:29.47955211 +0000 UTC m=+0.143712210 container init 42732c066729e7bc0544d0267016d1ec5deee858fce78c5b6bb638309281df26 (image=quay.io/ceph/ceph:v18, name=distracted_gates, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 09:09:29 np0005464214 podman[82966]: 2025-10-01 13:09:29.485792648 +0000 UTC m=+0.149952728 container start 42732c066729e7bc0544d0267016d1ec5deee858fce78c5b6bb638309281df26 (image=quay.io/ceph/ceph:v18, name=distracted_gates, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct  1 09:09:29 np0005464214 podman[82966]: 2025-10-01 13:09:29.489146505 +0000 UTC m=+0.153306595 container attach 42732c066729e7bc0544d0267016d1ec5deee858fce78c5b6bb638309281df26 (image=quay.io/ceph/ceph:v18, name=distracted_gates, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 09:09:29 np0005464214 podman[83035]: 2025-10-01 13:09:29.671571633 +0000 UTC m=+0.046645212 container create 91ef6674bb3b6e70c89642640876f7b25e443b951bfe34543b2826516351304b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-mgr-compute-0-hktmnz, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 09:09:29 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v14: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct  1 09:09:29 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/792ec9ad31f2820bb23b202d09979320cdfe08a20feda4a81d09d135d98a5bda/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 09:09:29 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/792ec9ad31f2820bb23b202d09979320cdfe08a20feda4a81d09d135d98a5bda/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 09:09:29 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/792ec9ad31f2820bb23b202d09979320cdfe08a20feda4a81d09d135d98a5bda/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 09:09:29 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/792ec9ad31f2820bb23b202d09979320cdfe08a20feda4a81d09d135d98a5bda/merged/var/lib/ceph/mgr/ceph-compute-0.hktmnz supports timestamps until 2038 (0x7fffffff)
Oct  1 09:09:29 np0005464214 podman[83035]: 2025-10-01 13:09:29.729571423 +0000 UTC m=+0.104645052 container init 91ef6674bb3b6e70c89642640876f7b25e443b951bfe34543b2826516351304b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-mgr-compute-0-hktmnz, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 09:09:29 np0005464214 podman[83035]: 2025-10-01 13:09:29.740641044 +0000 UTC m=+0.115714623 container start 91ef6674bb3b6e70c89642640876f7b25e443b951bfe34543b2826516351304b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-mgr-compute-0-hktmnz, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 09:09:29 np0005464214 bash[83035]: 91ef6674bb3b6e70c89642640876f7b25e443b951bfe34543b2826516351304b
Oct  1 09:09:29 np0005464214 podman[83035]: 2025-10-01 13:09:29.65100776 +0000 UTC m=+0.026081379 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 09:09:29 np0005464214 systemd[1]: Started Ceph mgr.compute-0.hktmnz for eb4b6ead-01d1-53b3-a52a-47dcc600555f.
Oct  1 09:09:29 np0005464214 ceph-mgr[83054]: set uid:gid to 167:167 (ceph:ceph)
Oct  1 09:09:29 np0005464214 ceph-mgr[83054]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mgr, pid 2
Oct  1 09:09:29 np0005464214 ceph-mgr[83054]: pidfile_write: ignore empty --pid-file
Oct  1 09:09:29 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  1 09:09:29 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:09:29 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  1 09:09:29 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:09:29 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Oct  1 09:09:29 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:09:29 np0005464214 ceph-mgr[75103]: [progress INFO root] complete: finished ev 1f0e416d-5877-454d-9b58-832a4a0a9061 (Updating mgr deployment (+1 -> 2))
Oct  1 09:09:29 np0005464214 ceph-mgr[75103]: [progress INFO root] Completed event 1f0e416d-5877-454d-9b58-832a4a0a9061 (Updating mgr deployment (+1 -> 2)) in 2 seconds
Oct  1 09:09:29 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Oct  1 09:09:29 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:09:29 np0005464214 ceph-mon[74802]: from='client.? 192.168.122.100:0/4066853649' entity='client.admin' 
Oct  1 09:09:29 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:09:29 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:09:29 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:09:29 np0005464214 ceph-mgr[83054]: mgr[py] Loading python module 'alerts'
Oct  1 09:09:30 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd set-require-min-compat-client", "version": "mimic"} v 0) v1
Oct  1 09:09:30 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/315915558' entity='client.admin' cmd=[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]: dispatch
Oct  1 09:09:30 np0005464214 ceph-mgr[83054]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Oct  1 09:09:30 np0005464214 ceph-mgr[83054]: mgr[py] Loading python module 'balancer'
Oct  1 09:09:30 np0005464214 ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-mgr-compute-0-hktmnz[83050]: 2025-10-01T13:09:30.231+0000 7f903defc140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Oct  1 09:09:30 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:09:30 np0005464214 ceph-mgr[83054]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Oct  1 09:09:30 np0005464214 ceph-mgr[83054]: mgr[py] Loading python module 'cephadm'
Oct  1 09:09:30 np0005464214 ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-mgr-compute-0-hktmnz[83050]: 2025-10-01T13:09:30.471+0000 7f903defc140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Oct  1 09:09:30 np0005464214 podman[83322]: 2025-10-01 13:09:30.585111918 +0000 UTC m=+0.053532210 container exec dfadbb96d7d51fa7c2ed721145616ced603621ae12bae5125feaf16532ef7320 (image=quay.io/ceph/ceph:v18, name=ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-mon-compute-0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 09:09:30 np0005464214 podman[83322]: 2025-10-01 13:09:30.676096225 +0000 UTC m=+0.144516497 container exec_died dfadbb96d7d51fa7c2ed721145616ced603621ae12bae5125feaf16532ef7320 (image=quay.io/ceph/ceph:v18, name=ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-mon-compute-0, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 09:09:30 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e2 do_prune osdmap full prune enabled
Oct  1 09:09:30 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e2 encode_pending skipping prime_pg_temp; mapping job did not start
Oct  1 09:09:30 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:09:30 np0005464214 ceph-mon[74802]: from='client.? 192.168.122.100:0/315915558' entity='client.admin' cmd=[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]: dispatch
Oct  1 09:09:30 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/315915558' entity='client.admin' cmd='[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]': finished
Oct  1 09:09:30 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e3 e3: 0 total, 0 up, 0 in
Oct  1 09:09:30 np0005464214 distracted_gates[82983]: set require_min_compat_client to mimic
Oct  1 09:09:30 np0005464214 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e3: 0 total, 0 up, 0 in
Oct  1 09:09:30 np0005464214 systemd[1]: libpod-42732c066729e7bc0544d0267016d1ec5deee858fce78c5b6bb638309281df26.scope: Deactivated successfully.
Oct  1 09:09:30 np0005464214 podman[82966]: 2025-10-01 13:09:30.86586142 +0000 UTC m=+1.530021500 container died 42732c066729e7bc0544d0267016d1ec5deee858fce78c5b6bb638309281df26 (image=quay.io/ceph/ceph:v18, name=distracted_gates, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef)
Oct  1 09:09:30 np0005464214 systemd[1]: var-lib-containers-storage-overlay-825b653bd8c30adbaf5f3ce8f9fb29ac941ad66d9a07af7a269d7f63207fb942-merged.mount: Deactivated successfully.
Oct  1 09:09:30 np0005464214 podman[82966]: 2025-10-01 13:09:30.911132069 +0000 UTC m=+1.575292149 container remove 42732c066729e7bc0544d0267016d1ec5deee858fce78c5b6bb638309281df26 (image=quay.io/ceph/ceph:v18, name=distracted_gates, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3)
Oct  1 09:09:30 np0005464214 systemd[1]: libpod-conmon-42732c066729e7bc0544d0267016d1ec5deee858fce78c5b6bb638309281df26.scope: Deactivated successfully.
Oct  1 09:09:30 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  1 09:09:30 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:09:30 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  1 09:09:30 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:09:30 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  1 09:09:30 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  1 09:09:30 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  1 09:09:30 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  1 09:09:30 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  1 09:09:30 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:09:30 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev a3c69b1d-7c82-4e91-9c1a-39d73b79f7d4 does not exist
Oct  1 09:09:30 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev b9c83e13-d6c6-4df6-95a8-2a52343164a5 does not exist
Oct  1 09:09:30 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev 7bc16f79-54bb-4aa6-a2ab-1c2ea3b1ff75 does not exist
Oct  1 09:09:31 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/alertmanager/web_user}] v 0) v1
Oct  1 09:09:31 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:09:31 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/alertmanager/web_password}] v 0) v1
Oct  1 09:09:31 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:09:31 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/prometheus/web_user}] v 0) v1
Oct  1 09:09:31 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:09:31 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/prometheus/web_password}] v 0) v1
Oct  1 09:09:31 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:09:31 np0005464214 ceph-mgr[75103]: [cephadm INFO cephadm.serve] Reconfiguring mon.compute-0 (unknown last config time)...
Oct  1 09:09:31 np0005464214 ceph-mgr[75103]: log_channel(cephadm) log [INF] : Reconfiguring mon.compute-0 (unknown last config time)...
Oct  1 09:09:31 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0) v1
Oct  1 09:09:31 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Oct  1 09:09:31 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0) v1
Oct  1 09:09:31 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Oct  1 09:09:31 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  1 09:09:31 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  1 09:09:31 np0005464214 ceph-mgr[75103]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.compute-0 on compute-0
Oct  1 09:09:31 np0005464214 ceph-mgr[75103]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.compute-0 on compute-0
Oct  1 09:09:31 np0005464214 podman[83612]: 2025-10-01 13:09:31.565300885 +0000 UTC m=+0.037430699 container create 5d09eaf9e0356276dab0a1624f6bef66a12746e54fa919177175639835d8756f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_mayer, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True)
Oct  1 09:09:31 np0005464214 systemd[1]: Started libpod-conmon-5d09eaf9e0356276dab0a1624f6bef66a12746e54fa919177175639835d8756f.scope.
Oct  1 09:09:31 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:09:31 np0005464214 podman[83612]: 2025-10-01 13:09:31.635557774 +0000 UTC m=+0.107687618 container init 5d09eaf9e0356276dab0a1624f6bef66a12746e54fa919177175639835d8756f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_mayer, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True)
Oct  1 09:09:31 np0005464214 podman[83612]: 2025-10-01 13:09:31.641587203 +0000 UTC m=+0.113717017 container start 5d09eaf9e0356276dab0a1624f6bef66a12746e54fa919177175639835d8756f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_mayer, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct  1 09:09:31 np0005464214 podman[83612]: 2025-10-01 13:09:31.644779042 +0000 UTC m=+0.116908896 container attach 5d09eaf9e0356276dab0a1624f6bef66a12746e54fa919177175639835d8756f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_mayer, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 09:09:31 np0005464214 podman[83612]: 2025-10-01 13:09:31.549275117 +0000 UTC m=+0.021404961 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 09:09:31 np0005464214 interesting_mayer[83630]: 167 167
Oct  1 09:09:31 np0005464214 systemd[1]: libpod-5d09eaf9e0356276dab0a1624f6bef66a12746e54fa919177175639835d8756f.scope: Deactivated successfully.
Oct  1 09:09:31 np0005464214 podman[83612]: 2025-10-01 13:09:31.649543815 +0000 UTC m=+0.121673659 container died 5d09eaf9e0356276dab0a1624f6bef66a12746e54fa919177175639835d8756f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_mayer, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct  1 09:09:31 np0005464214 systemd[1]: var-lib-containers-storage-overlay-27e6ca2310d38829e568d5263231d171e30350151f041fd588b4d68178864dcf-merged.mount: Deactivated successfully.
Oct  1 09:09:31 np0005464214 podman[83612]: 2025-10-01 13:09:31.691859251 +0000 UTC m=+0.163989095 container remove 5d09eaf9e0356276dab0a1624f6bef66a12746e54fa919177175639835d8756f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_mayer, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Oct  1 09:09:31 np0005464214 python3[83620]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid eb4b6ead-01d1-53b3-a52a-47dcc600555f -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  1 09:09:31 np0005464214 systemd[1]: libpod-conmon-5d09eaf9e0356276dab0a1624f6bef66a12746e54fa919177175639835d8756f.scope: Deactivated successfully.
Oct  1 09:09:31 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v16: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct  1 09:09:31 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  1 09:09:31 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:09:31 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  1 09:09:31 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:09:31 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-0.puxjpb", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0) v1
Oct  1 09:09:31 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.puxjpb", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Oct  1 09:09:31 np0005464214 ceph-mgr[75103]: [cephadm INFO cephadm.serve] Reconfiguring mgr.compute-0.puxjpb (unknown last config time)...
Oct  1 09:09:31 np0005464214 ceph-mgr[75103]: log_channel(cephadm) log [INF] : Reconfiguring mgr.compute-0.puxjpb (unknown last config time)...
Oct  1 09:09:31 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Oct  1 09:09:31 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "mgr services"}]: dispatch
Oct  1 09:09:31 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  1 09:09:31 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  1 09:09:31 np0005464214 ceph-mgr[75103]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.compute-0.puxjpb on compute-0
Oct  1 09:09:31 np0005464214 ceph-mgr[75103]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.compute-0.puxjpb on compute-0
Oct  1 09:09:31 np0005464214 podman[83647]: 2025-10-01 13:09:31.761235154 +0000 UTC m=+0.050550307 container create 34168d554fd1f6da4c2e0343511b26d67bdb7473a6ff6683a3de63b8ca662f3e (image=quay.io/ceph/ceph:v18, name=modest_panini, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0)
Oct  1 09:09:31 np0005464214 systemd[1]: Started libpod-conmon-34168d554fd1f6da4c2e0343511b26d67bdb7473a6ff6683a3de63b8ca662f3e.scope.
Oct  1 09:09:31 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:09:31 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f9120419dec7154dee1cf2513d39083511f63bbe08f0bde570548656fdb702d/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 09:09:31 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f9120419dec7154dee1cf2513d39083511f63bbe08f0bde570548656fdb702d/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 09:09:31 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f9120419dec7154dee1cf2513d39083511f63bbe08f0bde570548656fdb702d/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct  1 09:09:31 np0005464214 podman[83647]: 2025-10-01 13:09:31.831889894 +0000 UTC m=+0.121205067 container init 34168d554fd1f6da4c2e0343511b26d67bdb7473a6ff6683a3de63b8ca662f3e (image=quay.io/ceph/ceph:v18, name=modest_panini, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef)
Oct  1 09:09:31 np0005464214 podman[83647]: 2025-10-01 13:09:31.837557063 +0000 UTC m=+0.126872206 container start 34168d554fd1f6da4c2e0343511b26d67bdb7473a6ff6683a3de63b8ca662f3e (image=quay.io/ceph/ceph:v18, name=modest_panini, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 09:09:31 np0005464214 podman[83647]: 2025-10-01 13:09:31.743982301 +0000 UTC m=+0.033297454 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  1 09:09:31 np0005464214 podman[83647]: 2025-10-01 13:09:31.840847695 +0000 UTC m=+0.130162868 container attach 34168d554fd1f6da4c2e0343511b26d67bdb7473a6ff6683a3de63b8ca662f3e (image=quay.io/ceph/ceph:v18, name=modest_panini, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Oct  1 09:09:31 np0005464214 ceph-mon[74802]: from='client.? 192.168.122.100:0/315915558' entity='client.admin' cmd='[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]': finished
Oct  1 09:09:31 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:09:31 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:09:31 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  1 09:09:31 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:09:31 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:09:31 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:09:31 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:09:31 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:09:31 np0005464214 ceph-mon[74802]: Reconfiguring mon.compute-0 (unknown last config time)...
Oct  1 09:09:31 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Oct  1 09:09:31 np0005464214 ceph-mon[74802]: Reconfiguring daemon mon.compute-0 on compute-0
Oct  1 09:09:31 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:09:31 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:09:31 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.puxjpb", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Oct  1 09:09:32 np0005464214 podman[83815]: 2025-10-01 13:09:32.263797394 +0000 UTC m=+0.056482203 container create 2adcd1582c940f25e2de938816cdbd47470bec746b75484f81281e706b480768 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_curie, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3)
Oct  1 09:09:32 np0005464214 systemd[1]: Started libpod-conmon-2adcd1582c940f25e2de938816cdbd47470bec746b75484f81281e706b480768.scope.
Oct  1 09:09:32 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:09:32 np0005464214 podman[83815]: 2025-10-01 13:09:32.244487873 +0000 UTC m=+0.037172702 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 09:09:32 np0005464214 podman[83815]: 2025-10-01 13:09:32.345282677 +0000 UTC m=+0.137967506 container init 2adcd1582c940f25e2de938816cdbd47470bec746b75484f81281e706b480768 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_curie, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 09:09:32 np0005464214 podman[83815]: 2025-10-01 13:09:32.356323727 +0000 UTC m=+0.149008526 container start 2adcd1582c940f25e2de938816cdbd47470bec746b75484f81281e706b480768 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_curie, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct  1 09:09:32 np0005464214 podman[83815]: 2025-10-01 13:09:32.359447894 +0000 UTC m=+0.152132693 container attach 2adcd1582c940f25e2de938816cdbd47470bec746b75484f81281e706b480768 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_curie, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default)
Oct  1 09:09:32 np0005464214 priceless_curie[83831]: 167 167
Oct  1 09:09:32 np0005464214 systemd[1]: libpod-2adcd1582c940f25e2de938816cdbd47470bec746b75484f81281e706b480768.scope: Deactivated successfully.
Oct  1 09:09:32 np0005464214 podman[83815]: 2025-10-01 13:09:32.363988731 +0000 UTC m=+0.156673560 container died 2adcd1582c940f25e2de938816cdbd47470bec746b75484f81281e706b480768 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_curie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct  1 09:09:32 np0005464214 ceph-mgr[75103]: log_channel(audit) log [DBG] : from='client.14186 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Oct  1 09:09:32 np0005464214 systemd[1]: var-lib-containers-storage-overlay-4162976ac10eefd9141540c45f7893b7e53bd2318c510cc31b605f5b20ffd9f8-merged.mount: Deactivated successfully.
Oct  1 09:09:32 np0005464214 ceph-mgr[83054]: mgr[py] Loading python module 'crash'
Oct  1 09:09:32 np0005464214 podman[83815]: 2025-10-01 13:09:32.417212932 +0000 UTC m=+0.209897741 container remove 2adcd1582c940f25e2de938816cdbd47470bec746b75484f81281e706b480768 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_curie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 09:09:32 np0005464214 systemd[1]: libpod-conmon-2adcd1582c940f25e2de938816cdbd47470bec746b75484f81281e706b480768.scope: Deactivated successfully.
Oct  1 09:09:32 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  1 09:09:32 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:09:32 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  1 09:09:32 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:09:32 np0005464214 ceph-mgr[83054]: mgr[py] Module crash has missing NOTIFY_TYPES member
Oct  1 09:09:32 np0005464214 ceph-mgr[83054]: mgr[py] Loading python module 'dashboard'
Oct  1 09:09:32 np0005464214 ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-mgr-compute-0-hktmnz[83050]: 2025-10-01T13:09:32.676+0000 7f903defc140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Oct  1 09:09:32 np0005464214 ceph-mgr[75103]: [progress INFO root] Writing back 2 completed events
Oct  1 09:09:32 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Oct  1 09:09:32 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:09:32 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Oct  1 09:09:32 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:09:32 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Oct  1 09:09:32 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:09:32 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Oct  1 09:09:32 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:09:32 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Oct  1 09:09:32 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:09:32 np0005464214 ceph-mgr[75103]: [cephadm INFO root] Added host compute-0
Oct  1 09:09:32 np0005464214 ceph-mgr[75103]: log_channel(cephadm) log [INF] : Added host compute-0
Oct  1 09:09:32 np0005464214 ceph-mgr[75103]: [cephadm INFO root] Saving service mon spec with placement compute-0
Oct  1 09:09:32 np0005464214 ceph-mgr[75103]: log_channel(cephadm) log [INF] : Saving service mon spec with placement compute-0
Oct  1 09:09:32 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0) v1
Oct  1 09:09:32 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:09:32 np0005464214 ceph-mgr[75103]: [cephadm INFO root] Saving service mgr spec with placement compute-0
Oct  1 09:09:32 np0005464214 ceph-mgr[75103]: log_channel(cephadm) log [INF] : Saving service mgr spec with placement compute-0
Oct  1 09:09:32 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Oct  1 09:09:32 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:09:32 np0005464214 ceph-mgr[75103]: [cephadm INFO root] Marking host: compute-0 for OSDSpec preview refresh.
Oct  1 09:09:32 np0005464214 ceph-mgr[75103]: log_channel(cephadm) log [INF] : Marking host: compute-0 for OSDSpec preview refresh.
Oct  1 09:09:32 np0005464214 ceph-mgr[75103]: [cephadm INFO root] Saving service osd.default_drive_group spec with placement compute-0
Oct  1 09:09:32 np0005464214 ceph-mgr[75103]: log_channel(cephadm) log [INF] : Saving service osd.default_drive_group spec with placement compute-0
Oct  1 09:09:32 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.osd.default_drive_group}] v 0) v1
Oct  1 09:09:32 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:09:32 np0005464214 modest_panini[83687]: Added host 'compute-0' with addr '192.168.122.100'
Oct  1 09:09:32 np0005464214 modest_panini[83687]: Scheduled mon update...
Oct  1 09:09:32 np0005464214 modest_panini[83687]: Scheduled mgr update...
Oct  1 09:09:32 np0005464214 modest_panini[83687]: Scheduled osd.default_drive_group update...
Oct  1 09:09:32 np0005464214 systemd[1]: libpod-34168d554fd1f6da4c2e0343511b26d67bdb7473a6ff6683a3de63b8ca662f3e.scope: Deactivated successfully.
Oct  1 09:09:32 np0005464214 podman[83647]: 2025-10-01 13:09:32.966313256 +0000 UTC m=+1.255628399 container died 34168d554fd1f6da4c2e0343511b26d67bdb7473a6ff6683a3de63b8ca662f3e (image=quay.io/ceph/ceph:v18, name=modest_panini, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 09:09:32 np0005464214 systemd[1]: var-lib-containers-storage-overlay-3f9120419dec7154dee1cf2513d39083511f63bbe08f0bde570548656fdb702d-merged.mount: Deactivated successfully.
Oct  1 09:09:33 np0005464214 podman[83647]: 2025-10-01 13:09:33.025465703 +0000 UTC m=+1.314780886 container remove 34168d554fd1f6da4c2e0343511b26d67bdb7473a6ff6683a3de63b8ca662f3e (image=quay.io/ceph/ceph:v18, name=modest_panini, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0)
Oct  1 09:09:33 np0005464214 systemd[1]: libpod-conmon-34168d554fd1f6da4c2e0343511b26d67bdb7473a6ff6683a3de63b8ca662f3e.scope: Deactivated successfully.
Oct  1 09:09:33 np0005464214 podman[84154]: 2025-10-01 13:09:33.244321033 +0000 UTC m=+0.063994223 container exec dfadbb96d7d51fa7c2ed721145616ced603621ae12bae5125feaf16532ef7320 (image=quay.io/ceph/ceph:v18, name=ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-mon-compute-0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507)
Oct  1 09:09:33 np0005464214 podman[84154]: 2025-10-01 13:09:33.355125398 +0000 UTC m=+0.174798568 container exec_died dfadbb96d7d51fa7c2ed721145616ced603621ae12bae5125feaf16532ef7320 (image=quay.io/ceph/ceph:v18, name=ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-mon-compute-0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 09:09:33 np0005464214 ceph-mon[74802]: Reconfiguring mgr.compute-0.puxjpb (unknown last config time)...
Oct  1 09:09:33 np0005464214 ceph-mon[74802]: Reconfiguring daemon mgr.compute-0.puxjpb on compute-0
Oct  1 09:09:33 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:09:33 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:09:33 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:09:33 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:09:33 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:09:33 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:09:33 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:09:33 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:09:33 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:09:33 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:09:33 np0005464214 python3[84216]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid eb4b6ead-01d1-53b3-a52a-47dcc600555f -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .osdmap.num_up_osds _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  1 09:09:33 np0005464214 podman[84249]: 2025-10-01 13:09:33.589763762 +0000 UTC m=+0.045636330 container create a1faac74fd6a8ccab441c92fdc12a495a32ac48133ddcb61a9989cc9d7d67f6b (image=quay.io/ceph/ceph:v18, name=thirsty_snyder, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 09:09:33 np0005464214 systemd[1]: Started libpod-conmon-a1faac74fd6a8ccab441c92fdc12a495a32ac48133ddcb61a9989cc9d7d67f6b.scope.
Oct  1 09:09:33 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:09:33 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a42c1ffa769ecc114547b6a2434fac0f21e788b6168aeb7f29d16b29c5fc12b/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 09:09:33 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a42c1ffa769ecc114547b6a2434fac0f21e788b6168aeb7f29d16b29c5fc12b/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 09:09:33 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a42c1ffa769ecc114547b6a2434fac0f21e788b6168aeb7f29d16b29c5fc12b/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct  1 09:09:33 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  1 09:09:33 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:09:33 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  1 09:09:33 np0005464214 podman[84249]: 2025-10-01 13:09:33.569897825 +0000 UTC m=+0.025770413 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  1 09:09:33 np0005464214 podman[84249]: 2025-10-01 13:09:33.663977591 +0000 UTC m=+0.119850159 container init a1faac74fd6a8ccab441c92fdc12a495a32ac48133ddcb61a9989cc9d7d67f6b (image=quay.io/ceph/ceph:v18, name=thirsty_snyder, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 09:09:33 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:09:33 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  1 09:09:33 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:09:33 np0005464214 podman[84249]: 2025-10-01 13:09:33.67786259 +0000 UTC m=+0.133735148 container start a1faac74fd6a8ccab441c92fdc12a495a32ac48133ddcb61a9989cc9d7d67f6b (image=quay.io/ceph/ceph:v18, name=thirsty_snyder, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 09:09:33 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  1 09:09:33 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:09:33 np0005464214 podman[84249]: 2025-10-01 13:09:33.685792992 +0000 UTC m=+0.141665560 container attach a1faac74fd6a8ccab441c92fdc12a495a32ac48133ddcb61a9989cc9d7d67f6b (image=quay.io/ceph/ceph:v18, name=thirsty_snyder, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct  1 09:09:33 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  1 09:09:33 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  1 09:09:33 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  1 09:09:33 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  1 09:09:33 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  1 09:09:33 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:09:33 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev 88a29694-f9e2-449a-beb3-f5af6db07171 does not exist
Oct  1 09:09:33 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0) v1
Oct  1 09:09:33 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:09:33 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v17: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct  1 09:09:33 np0005464214 ceph-mgr[75103]: [progress INFO root] update: starting ev 8bc42142-0e21-440d-84c2-e86a31779c5d (Updating mgr deployment (-1 -> 1))
Oct  1 09:09:33 np0005464214 ceph-mgr[75103]: [cephadm INFO cephadm.serve] Removing daemon mgr.compute-0.hktmnz from compute-0 -- ports [8765]
Oct  1 09:09:33 np0005464214 ceph-mgr[75103]: log_channel(cephadm) log [INF] : Removing daemon mgr.compute-0.hktmnz from compute-0 -- ports [8765]
Oct  1 09:09:34 np0005464214 ceph-mgr[83054]: mgr[py] Loading python module 'devicehealth'
Oct  1 09:09:34 np0005464214 systemd[1]: Stopping Ceph mgr.compute-0.hktmnz for eb4b6ead-01d1-53b3-a52a-47dcc600555f...
Oct  1 09:09:34 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0) v1
Oct  1 09:09:34 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1701032644' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Oct  1 09:09:34 np0005464214 thirsty_snyder[84283]: 
Oct  1 09:09:34 np0005464214 thirsty_snyder[84283]: {"fsid":"eb4b6ead-01d1-53b3-a52a-47dcc600555f","health":{"status":"HEALTH_WARN","checks":{"TOO_FEW_OSDS":{"severity":"HEALTH_WARN","summary":{"message":"OSD count 0 < osd_pool_default_size 1","count":1},"muted":false}},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":93,"monmap":{"epoch":1,"min_mon_release_name":"reef","num_mons":1},"osdmap":{"epoch":3,"num_osds":0,"num_up_osds":0,"osd_up_since":0,"num_in_osds":0,"osd_in_since":0,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[],"num_pgs":0,"num_pools":0,"num_objects":0,"data_bytes":0,"bytes_used":0,"bytes_avail":0,"bytes_total":0},"fsmap":{"epoch":1,"by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":1,"modified":"2025-10-01T13:07:57.318832+0000","services":{}},"progress_events":{}}
Oct  1 09:09:34 np0005464214 systemd[1]: libpod-a1faac74fd6a8ccab441c92fdc12a495a32ac48133ddcb61a9989cc9d7d67f6b.scope: Deactivated successfully.
Oct  1 09:09:34 np0005464214 podman[84249]: 2025-10-01 13:09:34.277285163 +0000 UTC m=+0.733157741 container died a1faac74fd6a8ccab441c92fdc12a495a32ac48133ddcb61a9989cc9d7d67f6b (image=quay.io/ceph/ceph:v18, name=thirsty_snyder, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 09:09:34 np0005464214 systemd[1]: var-lib-containers-storage-overlay-6a42c1ffa769ecc114547b6a2434fac0f21e788b6168aeb7f29d16b29c5fc12b-merged.mount: Deactivated successfully.
Oct  1 09:09:34 np0005464214 podman[84249]: 2025-10-01 13:09:34.341030559 +0000 UTC m=+0.796903107 container remove a1faac74fd6a8ccab441c92fdc12a495a32ac48133ddcb61a9989cc9d7d67f6b (image=quay.io/ceph/ceph:v18, name=thirsty_snyder, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 09:09:34 np0005464214 systemd[1]: libpod-conmon-a1faac74fd6a8ccab441c92fdc12a495a32ac48133ddcb61a9989cc9d7d67f6b.scope: Deactivated successfully.
Oct  1 09:09:34 np0005464214 ceph-mgr[83054]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Oct  1 09:09:34 np0005464214 ceph-mgr[83054]: mgr[py] Loading python module 'diskprediction_local'
Oct  1 09:09:34 np0005464214 ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-mgr-compute-0-hktmnz[83050]: 2025-10-01T13:09:34.398+0000 7f903defc140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Oct  1 09:09:34 np0005464214 podman[84491]: 2025-10-01 13:09:34.452215064 +0000 UTC m=+0.068647934 container died 91ef6674bb3b6e70c89642640876f7b25e443b951bfe34543b2826516351304b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-mgr-compute-0-hktmnz, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 09:09:34 np0005464214 systemd[1]: var-lib-containers-storage-overlay-792ec9ad31f2820bb23b202d09979320cdfe08a20feda4a81d09d135d98a5bda-merged.mount: Deactivated successfully.
Oct  1 09:09:34 np0005464214 ceph-mon[74802]: Added host compute-0
Oct  1 09:09:34 np0005464214 ceph-mon[74802]: Saving service mon spec with placement compute-0
Oct  1 09:09:34 np0005464214 ceph-mon[74802]: Saving service mgr spec with placement compute-0
Oct  1 09:09:34 np0005464214 ceph-mon[74802]: Marking host: compute-0 for OSDSpec preview refresh.
Oct  1 09:09:34 np0005464214 ceph-mon[74802]: Saving service osd.default_drive_group spec with placement compute-0
Oct  1 09:09:34 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:09:34 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:09:34 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:09:34 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:09:34 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  1 09:09:34 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:09:34 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:09:34 np0005464214 podman[84491]: 2025-10-01 13:09:34.496051011 +0000 UTC m=+0.112483881 container remove 91ef6674bb3b6e70c89642640876f7b25e443b951bfe34543b2826516351304b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-mgr-compute-0-hktmnz, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 09:09:34 np0005464214 bash[84491]: ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-mgr-compute-0-hktmnz
Oct  1 09:09:34 np0005464214 systemd[1]: ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f@mgr.compute-0.hktmnz.service: Main process exited, code=exited, status=143/n/a
Oct  1 09:09:34 np0005464214 systemd[1]: ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f@mgr.compute-0.hktmnz.service: Failed with result 'exit-code'.
Oct  1 09:09:34 np0005464214 systemd[1]: Stopped Ceph mgr.compute-0.hktmnz for eb4b6ead-01d1-53b3-a52a-47dcc600555f.
Oct  1 09:09:34 np0005464214 systemd[1]: ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f@mgr.compute-0.hktmnz.service: Consumed 5.489s CPU time.
Oct  1 09:09:34 np0005464214 systemd[1]: Reloading.
Oct  1 09:09:34 np0005464214 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  1 09:09:34 np0005464214 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  1 09:09:34 np0005464214 ceph-mgr[75103]: [cephadm INFO cephadm.services.cephadmservice] Removing key for mgr.compute-0.hktmnz
Oct  1 09:09:34 np0005464214 ceph-mgr[75103]: log_channel(cephadm) log [INF] : Removing key for mgr.compute-0.hktmnz
Oct  1 09:09:34 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "mgr.compute-0.hktmnz"} v 0) v1
Oct  1 09:09:34 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth rm", "entity": "mgr.compute-0.hktmnz"}]: dispatch
Oct  1 09:09:34 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "auth rm", "entity": "mgr.compute-0.hktmnz"}]': finished
Oct  1 09:09:34 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Oct  1 09:09:34 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:09:34 np0005464214 ceph-mgr[75103]: [progress INFO root] complete: finished ev 8bc42142-0e21-440d-84c2-e86a31779c5d (Updating mgr deployment (-1 -> 1))
Oct  1 09:09:34 np0005464214 ceph-mgr[75103]: [progress INFO root] Completed event 8bc42142-0e21-440d-84c2-e86a31779c5d (Updating mgr deployment (-1 -> 1)) in 1 seconds
Oct  1 09:09:34 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Oct  1 09:09:34 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:09:34 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev 2ca47551-44fa-4e84-b634-1fbc4d606c4e does not exist
Oct  1 09:09:34 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  1 09:09:34 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  1 09:09:34 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  1 09:09:34 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  1 09:09:34 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  1 09:09:34 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  1 09:09:35 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:09:35 np0005464214 podman[84727]: 2025-10-01 13:09:35.479575876 +0000 UTC m=+0.039034565 container create 0d23e39916a5e4a24681f0bd67a5da0f5e632e8f5078cc3339c22d109f2eff9f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_rosalind, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Oct  1 09:09:35 np0005464214 ceph-mon[74802]: Removing daemon mgr.compute-0.hktmnz from compute-0 -- ports [8765]
Oct  1 09:09:35 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth rm", "entity": "mgr.compute-0.hktmnz"}]: dispatch
Oct  1 09:09:35 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "auth rm", "entity": "mgr.compute-0.hktmnz"}]': finished
Oct  1 09:09:35 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:09:35 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:09:35 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  1 09:09:35 np0005464214 systemd[1]: Started libpod-conmon-0d23e39916a5e4a24681f0bd67a5da0f5e632e8f5078cc3339c22d109f2eff9f.scope.
Oct  1 09:09:35 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:09:35 np0005464214 podman[84727]: 2025-10-01 13:09:35.557856889 +0000 UTC m=+0.117315618 container init 0d23e39916a5e4a24681f0bd67a5da0f5e632e8f5078cc3339c22d109f2eff9f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_rosalind, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 09:09:35 np0005464214 podman[84727]: 2025-10-01 13:09:35.463017562 +0000 UTC m=+0.022476281 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 09:09:35 np0005464214 podman[84727]: 2025-10-01 13:09:35.565272826 +0000 UTC m=+0.124731535 container start 0d23e39916a5e4a24681f0bd67a5da0f5e632e8f5078cc3339c22d109f2eff9f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_rosalind, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Oct  1 09:09:35 np0005464214 podman[84727]: 2025-10-01 13:09:35.569775553 +0000 UTC m=+0.129234242 container attach 0d23e39916a5e4a24681f0bd67a5da0f5e632e8f5078cc3339c22d109f2eff9f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_rosalind, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct  1 09:09:35 np0005464214 practical_rosalind[84743]: 167 167
Oct  1 09:09:35 np0005464214 systemd[1]: libpod-0d23e39916a5e4a24681f0bd67a5da0f5e632e8f5078cc3339c22d109f2eff9f.scope: Deactivated successfully.
Oct  1 09:09:35 np0005464214 podman[84727]: 2025-10-01 13:09:35.571273804 +0000 UTC m=+0.130732513 container died 0d23e39916a5e4a24681f0bd67a5da0f5e632e8f5078cc3339c22d109f2eff9f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_rosalind, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 09:09:35 np0005464214 systemd[1]: var-lib-containers-storage-overlay-bb640248e9072b01e437cc7ec14e54854062d3891eca68e101f047b01c1fdca4-merged.mount: Deactivated successfully.
Oct  1 09:09:35 np0005464214 podman[84727]: 2025-10-01 13:09:35.615162795 +0000 UTC m=+0.174621484 container remove 0d23e39916a5e4a24681f0bd67a5da0f5e632e8f5078cc3339c22d109f2eff9f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_rosalind, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct  1 09:09:35 np0005464214 systemd[1]: libpod-conmon-0d23e39916a5e4a24681f0bd67a5da0f5e632e8f5078cc3339c22d109f2eff9f.scope: Deactivated successfully.
Oct  1 09:09:35 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v18: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct  1 09:09:35 np0005464214 podman[84766]: 2025-10-01 13:09:35.794665054 +0000 UTC m=+0.039668693 container create f727f656b268e03a8c49914ecde46c83439d05432f409bfebf64914a269c82ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_antonelli, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct  1 09:09:35 np0005464214 systemd[1]: Started libpod-conmon-f727f656b268e03a8c49914ecde46c83439d05432f409bfebf64914a269c82ca.scope.
Oct  1 09:09:35 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:09:35 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec5899fa019168fa672415cca67e6b988cab3bae6e4398bbb0e7cbacf867f763/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 09:09:35 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec5899fa019168fa672415cca67e6b988cab3bae6e4398bbb0e7cbacf867f763/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 09:09:35 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec5899fa019168fa672415cca67e6b988cab3bae6e4398bbb0e7cbacf867f763/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 09:09:35 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec5899fa019168fa672415cca67e6b988cab3bae6e4398bbb0e7cbacf867f763/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 09:09:35 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec5899fa019168fa672415cca67e6b988cab3bae6e4398bbb0e7cbacf867f763/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  1 09:09:35 np0005464214 podman[84766]: 2025-10-01 13:09:35.777038469 +0000 UTC m=+0.022042128 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 09:09:35 np0005464214 podman[84766]: 2025-10-01 13:09:35.883247865 +0000 UTC m=+0.128251514 container init f727f656b268e03a8c49914ecde46c83439d05432f409bfebf64914a269c82ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_antonelli, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 09:09:35 np0005464214 podman[84766]: 2025-10-01 13:09:35.891099945 +0000 UTC m=+0.136103584 container start f727f656b268e03a8c49914ecde46c83439d05432f409bfebf64914a269c82ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_antonelli, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 09:09:35 np0005464214 podman[84766]: 2025-10-01 13:09:35.894976964 +0000 UTC m=+0.139980633 container attach f727f656b268e03a8c49914ecde46c83439d05432f409bfebf64914a269c82ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_antonelli, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct  1 09:09:36 np0005464214 ceph-mon[74802]: Removing key for mgr.compute-0.hktmnz
Oct  1 09:09:36 np0005464214 vibrant_antonelli[84782]: --> passed data devices: 0 physical, 3 LVM
Oct  1 09:09:36 np0005464214 vibrant_antonelli[84782]: --> relative data size: 1.0
Oct  1 09:09:36 np0005464214 vibrant_antonelli[84782]: Running command: /usr/bin/ceph-authtool --gen-print-key
Oct  1 09:09:37 np0005464214 vibrant_antonelli[84782]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982
Oct  1 09:09:37 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd new", "uuid": "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982"} v 0) v1
Oct  1 09:09:37 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/972123675' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982"}]: dispatch
Oct  1 09:09:37 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e3 do_prune osdmap full prune enabled
Oct  1 09:09:37 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e3 encode_pending skipping prime_pg_temp; mapping job did not start
Oct  1 09:09:37 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/972123675' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982"}]': finished
Oct  1 09:09:37 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e4 e4: 1 total, 0 up, 1 in
Oct  1 09:09:37 np0005464214 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e4: 1 total, 0 up, 1 in
Oct  1 09:09:37 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Oct  1 09:09:37 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct  1 09:09:37 np0005464214 ceph-mgr[75103]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Oct  1 09:09:37 np0005464214 ceph-mon[74802]: from='client.? 192.168.122.100:0/972123675' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982"}]: dispatch
Oct  1 09:09:37 np0005464214 ceph-mon[74802]: from='client.? 192.168.122.100:0/972123675' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982"}]': finished
Oct  1 09:09:37 np0005464214 vibrant_antonelli[84782]: Running command: /usr/bin/ceph-authtool --gen-print-key
Oct  1 09:09:37 np0005464214 lvm[84844]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct  1 09:09:37 np0005464214 lvm[84844]: VG ceph_vg0 finished
Oct  1 09:09:37 np0005464214 vibrant_antonelli[84782]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-0
Oct  1 09:09:37 np0005464214 vibrant_antonelli[84782]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg0/ceph_lv0
Oct  1 09:09:37 np0005464214 vibrant_antonelli[84782]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Oct  1 09:09:37 np0005464214 vibrant_antonelli[84782]: Running command: /usr/bin/ln -s /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Oct  1 09:09:37 np0005464214 vibrant_antonelli[84782]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-0/activate.monmap
Oct  1 09:09:37 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v20: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct  1 09:09:37 np0005464214 ceph-mgr[75103]: [progress INFO root] Writing back 3 completed events
Oct  1 09:09:37 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Oct  1 09:09:37 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:09:38 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon getmap"} v 0) v1
Oct  1 09:09:38 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/256121332' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Oct  1 09:09:38 np0005464214 vibrant_antonelli[84782]: stderr: got monmap epoch 1
Oct  1 09:09:38 np0005464214 vibrant_antonelli[84782]: --> Creating keyring file for osd.0
Oct  1 09:09:38 np0005464214 vibrant_antonelli[84782]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/keyring
Oct  1 09:09:38 np0005464214 vibrant_antonelli[84782]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/
Oct  1 09:09:38 np0005464214 vibrant_antonelli[84782]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 0 --monmap /var/lib/ceph/osd/ceph-0/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-0/ --osd-uuid 1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982 --setuser ceph --setgroup ceph
Oct  1 09:09:38 np0005464214 ceph-mon[74802]: log_channel(cluster) log [INF] : Health check cleared: TOO_FEW_OSDS (was: OSD count 0 < osd_pool_default_size 1)
Oct  1 09:09:38 np0005464214 ceph-mon[74802]: log_channel(cluster) log [INF] : Cluster is now healthy
Oct  1 09:09:38 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:09:38 np0005464214 ceph-mon[74802]: Health check cleared: TOO_FEW_OSDS (was: OSD count 0 < osd_pool_default_size 1)
Oct  1 09:09:38 np0005464214 ceph-mon[74802]: Cluster is now healthy
Oct  1 09:09:39 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v21: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct  1 09:09:40 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e4 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:09:40 np0005464214 vibrant_antonelli[84782]: stderr: 2025-10-01T13:09:38.191+0000 7f3def11e740 -1 bluestore(/var/lib/ceph/osd/ceph-0//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Oct  1 09:09:40 np0005464214 vibrant_antonelli[84782]: stderr: 2025-10-01T13:09:38.191+0000 7f3def11e740 -1 bluestore(/var/lib/ceph/osd/ceph-0//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Oct  1 09:09:40 np0005464214 vibrant_antonelli[84782]: stderr: 2025-10-01T13:09:38.191+0000 7f3def11e740 -1 bluestore(/var/lib/ceph/osd/ceph-0//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Oct  1 09:09:40 np0005464214 vibrant_antonelli[84782]: stderr: 2025-10-01T13:09:38.191+0000 7f3def11e740 -1 bluestore(/var/lib/ceph/osd/ceph-0/) _read_fsid unparsable uuid
Oct  1 09:09:40 np0005464214 vibrant_antonelli[84782]: --> ceph-volume lvm prepare successful for: ceph_vg0/ceph_lv0
Oct  1 09:09:40 np0005464214 vibrant_antonelli[84782]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Oct  1 09:09:40 np0005464214 vibrant_antonelli[84782]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-0 --no-mon-config
Oct  1 09:09:40 np0005464214 vibrant_antonelli[84782]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Oct  1 09:09:40 np0005464214 vibrant_antonelli[84782]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-0/block
Oct  1 09:09:40 np0005464214 vibrant_antonelli[84782]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Oct  1 09:09:40 np0005464214 vibrant_antonelli[84782]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Oct  1 09:09:40 np0005464214 vibrant_antonelli[84782]: --> ceph-volume lvm activate successful for osd ID: 0
Oct  1 09:09:40 np0005464214 vibrant_antonelli[84782]: --> ceph-volume lvm create successful for: ceph_vg0/ceph_lv0
Oct  1 09:09:40 np0005464214 vibrant_antonelli[84782]: Running command: /usr/bin/ceph-authtool --gen-print-key
Oct  1 09:09:40 np0005464214 vibrant_antonelli[84782]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new f5852bc7-e830-489a-b8a9-42dfbbe71426
Oct  1 09:09:41 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd new", "uuid": "f5852bc7-e830-489a-b8a9-42dfbbe71426"} v 0) v1
Oct  1 09:09:41 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2920071575' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "f5852bc7-e830-489a-b8a9-42dfbbe71426"}]: dispatch
Oct  1 09:09:41 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e4 do_prune osdmap full prune enabled
Oct  1 09:09:41 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e4 encode_pending skipping prime_pg_temp; mapping job did not start
Oct  1 09:09:41 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2920071575' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "f5852bc7-e830-489a-b8a9-42dfbbe71426"}]': finished
Oct  1 09:09:41 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e5 e5: 2 total, 0 up, 2 in
Oct  1 09:09:41 np0005464214 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e5: 2 total, 0 up, 2 in
Oct  1 09:09:41 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Oct  1 09:09:41 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct  1 09:09:41 np0005464214 ceph-mgr[75103]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Oct  1 09:09:41 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Oct  1 09:09:41 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct  1 09:09:41 np0005464214 ceph-mgr[75103]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Oct  1 09:09:41 np0005464214 vibrant_antonelli[84782]: Running command: /usr/bin/ceph-authtool --gen-print-key
Oct  1 09:09:41 np0005464214 lvm[85787]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Oct  1 09:09:41 np0005464214 lvm[85787]: VG ceph_vg1 finished
Oct  1 09:09:41 np0005464214 vibrant_antonelli[84782]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-1
Oct  1 09:09:41 np0005464214 vibrant_antonelli[84782]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg1/ceph_lv1
Oct  1 09:09:41 np0005464214 vibrant_antonelli[84782]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Oct  1 09:09:41 np0005464214 vibrant_antonelli[84782]: Running command: /usr/bin/ln -s /dev/ceph_vg1/ceph_lv1 /var/lib/ceph/osd/ceph-1/block
Oct  1 09:09:41 np0005464214 vibrant_antonelli[84782]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-1/activate.monmap
Oct  1 09:09:41 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v23: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct  1 09:09:41 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon getmap"} v 0) v1
Oct  1 09:09:41 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1018595264' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Oct  1 09:09:41 np0005464214 vibrant_antonelli[84782]: stderr: got monmap epoch 1
Oct  1 09:09:41 np0005464214 ceph-mon[74802]: from='client.? 192.168.122.100:0/2920071575' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "f5852bc7-e830-489a-b8a9-42dfbbe71426"}]: dispatch
Oct  1 09:09:41 np0005464214 ceph-mon[74802]: from='client.? 192.168.122.100:0/2920071575' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "f5852bc7-e830-489a-b8a9-42dfbbe71426"}]': finished
Oct  1 09:09:41 np0005464214 vibrant_antonelli[84782]: --> Creating keyring file for osd.1
Oct  1 09:09:41 np0005464214 vibrant_antonelli[84782]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1/keyring
Oct  1 09:09:41 np0005464214 vibrant_antonelli[84782]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1/
Oct  1 09:09:41 np0005464214 vibrant_antonelli[84782]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 1 --monmap /var/lib/ceph/osd/ceph-1/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-1/ --osd-uuid f5852bc7-e830-489a-b8a9-42dfbbe71426 --setuser ceph --setgroup ceph
Oct  1 09:09:43 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v24: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct  1 09:09:44 np0005464214 vibrant_antonelli[84782]: stderr: 2025-10-01T13:09:41.913+0000 7fc23c1b9740 -1 bluestore(/var/lib/ceph/osd/ceph-1//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Oct  1 09:09:44 np0005464214 vibrant_antonelli[84782]: stderr: 2025-10-01T13:09:41.913+0000 7fc23c1b9740 -1 bluestore(/var/lib/ceph/osd/ceph-1//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Oct  1 09:09:44 np0005464214 vibrant_antonelli[84782]: stderr: 2025-10-01T13:09:41.913+0000 7fc23c1b9740 -1 bluestore(/var/lib/ceph/osd/ceph-1//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Oct  1 09:09:44 np0005464214 vibrant_antonelli[84782]: stderr: 2025-10-01T13:09:41.914+0000 7fc23c1b9740 -1 bluestore(/var/lib/ceph/osd/ceph-1/) _read_fsid unparsable uuid
Oct  1 09:09:44 np0005464214 vibrant_antonelli[84782]: --> ceph-volume lvm prepare successful for: ceph_vg1/ceph_lv1
Oct  1 09:09:44 np0005464214 vibrant_antonelli[84782]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Oct  1 09:09:44 np0005464214 vibrant_antonelli[84782]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg1/ceph_lv1 --path /var/lib/ceph/osd/ceph-1 --no-mon-config
Oct  1 09:09:44 np0005464214 vibrant_antonelli[84782]: Running command: /usr/bin/ln -snf /dev/ceph_vg1/ceph_lv1 /var/lib/ceph/osd/ceph-1/block
Oct  1 09:09:44 np0005464214 vibrant_antonelli[84782]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-1/block
Oct  1 09:09:44 np0005464214 vibrant_antonelli[84782]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Oct  1 09:09:44 np0005464214 vibrant_antonelli[84782]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Oct  1 09:09:44 np0005464214 vibrant_antonelli[84782]: --> ceph-volume lvm activate successful for osd ID: 1
Oct  1 09:09:44 np0005464214 vibrant_antonelli[84782]: --> ceph-volume lvm create successful for: ceph_vg1/ceph_lv1
Oct  1 09:09:44 np0005464214 vibrant_antonelli[84782]: Running command: /usr/bin/ceph-authtool --gen-print-key
Oct  1 09:09:44 np0005464214 vibrant_antonelli[84782]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new c4c937e2-a8a8-47c3-af37-fdedb6fff1f9
Oct  1 09:09:44 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd new", "uuid": "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9"} v 0) v1
Oct  1 09:09:44 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/681149013' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9"}]: dispatch
Oct  1 09:09:44 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e5 do_prune osdmap full prune enabled
Oct  1 09:09:44 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e5 encode_pending skipping prime_pg_temp; mapping job did not start
Oct  1 09:09:44 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/681149013' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9"}]': finished
Oct  1 09:09:44 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e6 e6: 3 total, 0 up, 3 in
Oct  1 09:09:44 np0005464214 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e6: 3 total, 0 up, 3 in
Oct  1 09:09:44 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Oct  1 09:09:44 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct  1 09:09:44 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Oct  1 09:09:44 np0005464214 ceph-mgr[75103]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Oct  1 09:09:44 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct  1 09:09:44 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Oct  1 09:09:44 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct  1 09:09:44 np0005464214 ceph-mgr[75103]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Oct  1 09:09:44 np0005464214 ceph-mgr[75103]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct  1 09:09:44 np0005464214 ceph-mon[74802]: from='client.? 192.168.122.100:0/681149013' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9"}]: dispatch
Oct  1 09:09:44 np0005464214 ceph-mon[74802]: from='client.? 192.168.122.100:0/681149013' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9"}]': finished
Oct  1 09:09:44 np0005464214 vibrant_antonelli[84782]: Running command: /usr/bin/ceph-authtool --gen-print-key
Oct  1 09:09:44 np0005464214 lvm[86730]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Oct  1 09:09:44 np0005464214 lvm[86730]: VG ceph_vg2 finished
Oct  1 09:09:44 np0005464214 vibrant_antonelli[84782]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-2
Oct  1 09:09:44 np0005464214 vibrant_antonelli[84782]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg2/ceph_lv2
Oct  1 09:09:44 np0005464214 vibrant_antonelli[84782]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2
Oct  1 09:09:44 np0005464214 vibrant_antonelli[84782]: Running command: /usr/bin/ln -s /dev/ceph_vg2/ceph_lv2 /var/lib/ceph/osd/ceph-2/block
Oct  1 09:09:44 np0005464214 vibrant_antonelli[84782]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-2/activate.monmap
Oct  1 09:09:45 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon getmap"} v 0) v1
Oct  1 09:09:45 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/446363946' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Oct  1 09:09:45 np0005464214 vibrant_antonelli[84782]: stderr: got monmap epoch 1
Oct  1 09:09:45 np0005464214 vibrant_antonelli[84782]: --> Creating keyring file for osd.2
Oct  1 09:09:45 np0005464214 vibrant_antonelli[84782]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2/keyring
Oct  1 09:09:45 np0005464214 vibrant_antonelli[84782]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2/
Oct  1 09:09:45 np0005464214 vibrant_antonelli[84782]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 2 --monmap /var/lib/ceph/osd/ceph-2/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-2/ --osd-uuid c4c937e2-a8a8-47c3-af37-fdedb6fff1f9 --setuser ceph --setgroup ceph
Oct  1 09:09:45 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e6 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:09:45 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v26: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct  1 09:09:47 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v27: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct  1 09:09:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] Optimize plan auto_2025-10-01_13:09:47
Oct  1 09:09:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  1 09:09:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] do_upmap
Oct  1 09:09:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] No pools available
Oct  1 09:09:47 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] _maybe_adjust
Oct  1 09:09:47 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  1 09:09:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 09:09:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 09:09:47 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  1 09:09:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 09:09:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 09:09:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 09:09:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 09:09:47 np0005464214 vibrant_antonelli[84782]: stderr: 2025-10-01T13:09:45.454+0000 7fde61a30740 -1 bluestore(/var/lib/ceph/osd/ceph-2//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Oct  1 09:09:47 np0005464214 vibrant_antonelli[84782]: stderr: 2025-10-01T13:09:45.455+0000 7fde61a30740 -1 bluestore(/var/lib/ceph/osd/ceph-2//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Oct  1 09:09:47 np0005464214 vibrant_antonelli[84782]: stderr: 2025-10-01T13:09:45.455+0000 7fde61a30740 -1 bluestore(/var/lib/ceph/osd/ceph-2//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Oct  1 09:09:47 np0005464214 vibrant_antonelli[84782]: stderr: 2025-10-01T13:09:45.455+0000 7fde61a30740 -1 bluestore(/var/lib/ceph/osd/ceph-2/) _read_fsid unparsable uuid
Oct  1 09:09:47 np0005464214 vibrant_antonelli[84782]: --> ceph-volume lvm prepare successful for: ceph_vg2/ceph_lv2
Oct  1 09:09:48 np0005464214 vibrant_antonelli[84782]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Oct  1 09:09:48 np0005464214 vibrant_antonelli[84782]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg2/ceph_lv2 --path /var/lib/ceph/osd/ceph-2 --no-mon-config
Oct  1 09:09:48 np0005464214 vibrant_antonelli[84782]: Running command: /usr/bin/ln -snf /dev/ceph_vg2/ceph_lv2 /var/lib/ceph/osd/ceph-2/block
Oct  1 09:09:48 np0005464214 vibrant_antonelli[84782]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-2/block
Oct  1 09:09:48 np0005464214 vibrant_antonelli[84782]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2
Oct  1 09:09:48 np0005464214 vibrant_antonelli[84782]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Oct  1 09:09:48 np0005464214 vibrant_antonelli[84782]: --> ceph-volume lvm activate successful for osd ID: 2
Oct  1 09:09:48 np0005464214 vibrant_antonelli[84782]: --> ceph-volume lvm create successful for: ceph_vg2/ceph_lv2
Oct  1 09:09:48 np0005464214 systemd[1]: libpod-f727f656b268e03a8c49914ecde46c83439d05432f409bfebf64914a269c82ca.scope: Deactivated successfully.
Oct  1 09:09:48 np0005464214 systemd[1]: libpod-f727f656b268e03a8c49914ecde46c83439d05432f409bfebf64914a269c82ca.scope: Consumed 6.269s CPU time.
Oct  1 09:09:48 np0005464214 podman[84766]: 2025-10-01 13:09:48.121474394 +0000 UTC m=+12.366478073 container died f727f656b268e03a8c49914ecde46c83439d05432f409bfebf64914a269c82ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_antonelli, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct  1 09:09:48 np0005464214 systemd[1]: var-lib-containers-storage-overlay-ec5899fa019168fa672415cca67e6b988cab3bae6e4398bbb0e7cbacf867f763-merged.mount: Deactivated successfully.
Oct  1 09:09:48 np0005464214 podman[84766]: 2025-10-01 13:09:48.189385737 +0000 UTC m=+12.434389376 container remove f727f656b268e03a8c49914ecde46c83439d05432f409bfebf64914a269c82ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_antonelli, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 09:09:48 np0005464214 systemd[1]: libpod-conmon-f727f656b268e03a8c49914ecde46c83439d05432f409bfebf64914a269c82ca.scope: Deactivated successfully.
Oct  1 09:09:48 np0005464214 podman[87800]: 2025-10-01 13:09:48.838905844 +0000 UTC m=+0.035472935 container create bd0a8a207fb9035ca5f04473ad07fbe285b0eb5ba40a855e9e0e056c5983038a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_bell, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 09:09:48 np0005464214 systemd[1]: Started libpod-conmon-bd0a8a207fb9035ca5f04473ad07fbe285b0eb5ba40a855e9e0e056c5983038a.scope.
Oct  1 09:09:48 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:09:48 np0005464214 podman[87800]: 2025-10-01 13:09:48.821064023 +0000 UTC m=+0.017631124 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 09:09:48 np0005464214 podman[87800]: 2025-10-01 13:09:48.926264841 +0000 UTC m=+0.122832012 container init bd0a8a207fb9035ca5f04473ad07fbe285b0eb5ba40a855e9e0e056c5983038a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_bell, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 09:09:48 np0005464214 podman[87800]: 2025-10-01 13:09:48.932543827 +0000 UTC m=+0.129110908 container start bd0a8a207fb9035ca5f04473ad07fbe285b0eb5ba40a855e9e0e056c5983038a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_bell, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 09:09:48 np0005464214 podman[87800]: 2025-10-01 13:09:48.936010864 +0000 UTC m=+0.132577975 container attach bd0a8a207fb9035ca5f04473ad07fbe285b0eb5ba40a855e9e0e056c5983038a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_bell, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 09:09:48 np0005464214 strange_bell[87817]: 167 167
Oct  1 09:09:48 np0005464214 systemd[1]: libpod-bd0a8a207fb9035ca5f04473ad07fbe285b0eb5ba40a855e9e0e056c5983038a.scope: Deactivated successfully.
Oct  1 09:09:48 np0005464214 podman[87800]: 2025-10-01 13:09:48.93695002 +0000 UTC m=+0.133517101 container died bd0a8a207fb9035ca5f04473ad07fbe285b0eb5ba40a855e9e0e056c5983038a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_bell, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct  1 09:09:48 np0005464214 systemd[1]: var-lib-containers-storage-overlay-354f7ca130eb7c07045434c5e04d3eaa1a54a108cd211521813017db378a3269-merged.mount: Deactivated successfully.
Oct  1 09:09:48 np0005464214 podman[87800]: 2025-10-01 13:09:48.981075106 +0000 UTC m=+0.177642217 container remove bd0a8a207fb9035ca5f04473ad07fbe285b0eb5ba40a855e9e0e056c5983038a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_bell, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 09:09:48 np0005464214 systemd[1]: libpod-conmon-bd0a8a207fb9035ca5f04473ad07fbe285b0eb5ba40a855e9e0e056c5983038a.scope: Deactivated successfully.
Oct  1 09:09:49 np0005464214 podman[87841]: 2025-10-01 13:09:49.192328905 +0000 UTC m=+0.061179135 container create 3e22003c5ca29893d72ddd3845c63ee08e3d5c1e3c897b83f200e6305bb061de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_mccarthy, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Oct  1 09:09:49 np0005464214 systemd[1]: Started libpod-conmon-3e22003c5ca29893d72ddd3845c63ee08e3d5c1e3c897b83f200e6305bb061de.scope.
Oct  1 09:09:49 np0005464214 podman[87841]: 2025-10-01 13:09:49.16861588 +0000 UTC m=+0.037466200 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 09:09:49 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:09:49 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/867dbb708c40a32eb50f9a7d9bd67f8a9b37de6f08edef0784d07a0624977579/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 09:09:49 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/867dbb708c40a32eb50f9a7d9bd67f8a9b37de6f08edef0784d07a0624977579/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 09:09:49 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/867dbb708c40a32eb50f9a7d9bd67f8a9b37de6f08edef0784d07a0624977579/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 09:09:49 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/867dbb708c40a32eb50f9a7d9bd67f8a9b37de6f08edef0784d07a0624977579/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 09:09:49 np0005464214 podman[87841]: 2025-10-01 13:09:49.280207026 +0000 UTC m=+0.149057266 container init 3e22003c5ca29893d72ddd3845c63ee08e3d5c1e3c897b83f200e6305bb061de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_mccarthy, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Oct  1 09:09:49 np0005464214 podman[87841]: 2025-10-01 13:09:49.290361632 +0000 UTC m=+0.159211912 container start 3e22003c5ca29893d72ddd3845c63ee08e3d5c1e3c897b83f200e6305bb061de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_mccarthy, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 09:09:49 np0005464214 podman[87841]: 2025-10-01 13:09:49.295033462 +0000 UTC m=+0.163883732 container attach 3e22003c5ca29893d72ddd3845c63ee08e3d5c1e3c897b83f200e6305bb061de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_mccarthy, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 09:09:49 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v28: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct  1 09:09:50 np0005464214 affectionate_mccarthy[87857]: {
Oct  1 09:09:50 np0005464214 affectionate_mccarthy[87857]:    "0": [
Oct  1 09:09:50 np0005464214 affectionate_mccarthy[87857]:        {
Oct  1 09:09:50 np0005464214 affectionate_mccarthy[87857]:            "devices": [
Oct  1 09:09:50 np0005464214 affectionate_mccarthy[87857]:                "/dev/loop3"
Oct  1 09:09:50 np0005464214 affectionate_mccarthy[87857]:            ],
Oct  1 09:09:50 np0005464214 affectionate_mccarthy[87857]:            "lv_name": "ceph_lv0",
Oct  1 09:09:50 np0005464214 affectionate_mccarthy[87857]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  1 09:09:50 np0005464214 affectionate_mccarthy[87857]:            "lv_size": "21470642176",
Oct  1 09:09:50 np0005464214 affectionate_mccarthy[87857]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 09:09:50 np0005464214 affectionate_mccarthy[87857]:            "lv_uuid": "wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS",
Oct  1 09:09:50 np0005464214 affectionate_mccarthy[87857]:            "name": "ceph_lv0",
Oct  1 09:09:50 np0005464214 affectionate_mccarthy[87857]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  1 09:09:50 np0005464214 affectionate_mccarthy[87857]:            "tags": {
Oct  1 09:09:50 np0005464214 affectionate_mccarthy[87857]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  1 09:09:50 np0005464214 affectionate_mccarthy[87857]:                "ceph.block_uuid": "wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS",
Oct  1 09:09:50 np0005464214 affectionate_mccarthy[87857]:                "ceph.cephx_lockbox_secret": "",
Oct  1 09:09:50 np0005464214 affectionate_mccarthy[87857]:                "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 09:09:50 np0005464214 affectionate_mccarthy[87857]:                "ceph.cluster_name": "ceph",
Oct  1 09:09:50 np0005464214 affectionate_mccarthy[87857]:                "ceph.crush_device_class": "",
Oct  1 09:09:50 np0005464214 affectionate_mccarthy[87857]:                "ceph.encrypted": "0",
Oct  1 09:09:50 np0005464214 affectionate_mccarthy[87857]:                "ceph.osd_fsid": "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982",
Oct  1 09:09:50 np0005464214 affectionate_mccarthy[87857]:                "ceph.osd_id": "0",
Oct  1 09:09:50 np0005464214 affectionate_mccarthy[87857]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 09:09:50 np0005464214 affectionate_mccarthy[87857]:                "ceph.type": "block",
Oct  1 09:09:50 np0005464214 affectionate_mccarthy[87857]:                "ceph.vdo": "0"
Oct  1 09:09:50 np0005464214 affectionate_mccarthy[87857]:            },
Oct  1 09:09:50 np0005464214 affectionate_mccarthy[87857]:            "type": "block",
Oct  1 09:09:50 np0005464214 affectionate_mccarthy[87857]:            "vg_name": "ceph_vg0"
Oct  1 09:09:50 np0005464214 affectionate_mccarthy[87857]:        }
Oct  1 09:09:50 np0005464214 affectionate_mccarthy[87857]:    ],
Oct  1 09:09:50 np0005464214 affectionate_mccarthy[87857]:    "1": [
Oct  1 09:09:50 np0005464214 affectionate_mccarthy[87857]:        {
Oct  1 09:09:50 np0005464214 affectionate_mccarthy[87857]:            "devices": [
Oct  1 09:09:50 np0005464214 affectionate_mccarthy[87857]:                "/dev/loop4"
Oct  1 09:09:50 np0005464214 affectionate_mccarthy[87857]:            ],
Oct  1 09:09:50 np0005464214 affectionate_mccarthy[87857]:            "lv_name": "ceph_lv1",
Oct  1 09:09:50 np0005464214 affectionate_mccarthy[87857]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  1 09:09:50 np0005464214 affectionate_mccarthy[87857]:            "lv_size": "21470642176",
Oct  1 09:09:50 np0005464214 affectionate_mccarthy[87857]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f5852bc7-e830-489a-b8a9-42dfbbe71426,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 09:09:50 np0005464214 affectionate_mccarthy[87857]:            "lv_uuid": "BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY",
Oct  1 09:09:50 np0005464214 affectionate_mccarthy[87857]:            "name": "ceph_lv1",
Oct  1 09:09:50 np0005464214 affectionate_mccarthy[87857]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  1 09:09:50 np0005464214 affectionate_mccarthy[87857]:            "tags": {
Oct  1 09:09:50 np0005464214 affectionate_mccarthy[87857]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  1 09:09:50 np0005464214 affectionate_mccarthy[87857]:                "ceph.block_uuid": "BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY",
Oct  1 09:09:50 np0005464214 affectionate_mccarthy[87857]:                "ceph.cephx_lockbox_secret": "",
Oct  1 09:09:50 np0005464214 affectionate_mccarthy[87857]:                "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 09:09:50 np0005464214 affectionate_mccarthy[87857]:                "ceph.cluster_name": "ceph",
Oct  1 09:09:50 np0005464214 affectionate_mccarthy[87857]:                "ceph.crush_device_class": "",
Oct  1 09:09:50 np0005464214 affectionate_mccarthy[87857]:                "ceph.encrypted": "0",
Oct  1 09:09:50 np0005464214 affectionate_mccarthy[87857]:                "ceph.osd_fsid": "f5852bc7-e830-489a-b8a9-42dfbbe71426",
Oct  1 09:09:50 np0005464214 affectionate_mccarthy[87857]:                "ceph.osd_id": "1",
Oct  1 09:09:50 np0005464214 affectionate_mccarthy[87857]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 09:09:50 np0005464214 affectionate_mccarthy[87857]:                "ceph.type": "block",
Oct  1 09:09:50 np0005464214 affectionate_mccarthy[87857]:                "ceph.vdo": "0"
Oct  1 09:09:50 np0005464214 affectionate_mccarthy[87857]:            },
Oct  1 09:09:50 np0005464214 affectionate_mccarthy[87857]:            "type": "block",
Oct  1 09:09:50 np0005464214 affectionate_mccarthy[87857]:            "vg_name": "ceph_vg1"
Oct  1 09:09:50 np0005464214 affectionate_mccarthy[87857]:        }
Oct  1 09:09:50 np0005464214 affectionate_mccarthy[87857]:    ],
Oct  1 09:09:50 np0005464214 affectionate_mccarthy[87857]:    "2": [
Oct  1 09:09:50 np0005464214 affectionate_mccarthy[87857]:        {
Oct  1 09:09:50 np0005464214 affectionate_mccarthy[87857]:            "devices": [
Oct  1 09:09:50 np0005464214 affectionate_mccarthy[87857]:                "/dev/loop5"
Oct  1 09:09:50 np0005464214 affectionate_mccarthy[87857]:            ],
Oct  1 09:09:50 np0005464214 affectionate_mccarthy[87857]:            "lv_name": "ceph_lv2",
Oct  1 09:09:50 np0005464214 affectionate_mccarthy[87857]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  1 09:09:50 np0005464214 affectionate_mccarthy[87857]:            "lv_size": "21470642176",
Oct  1 09:09:50 np0005464214 affectionate_mccarthy[87857]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c4c937e2-a8a8-47c3-af37-fdedb6fff1f9,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 09:09:50 np0005464214 affectionate_mccarthy[87857]:            "lv_uuid": "mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK",
Oct  1 09:09:50 np0005464214 affectionate_mccarthy[87857]:            "name": "ceph_lv2",
Oct  1 09:09:50 np0005464214 affectionate_mccarthy[87857]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  1 09:09:50 np0005464214 affectionate_mccarthy[87857]:            "tags": {
Oct  1 09:09:50 np0005464214 affectionate_mccarthy[87857]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  1 09:09:50 np0005464214 affectionate_mccarthy[87857]:                "ceph.block_uuid": "mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK",
Oct  1 09:09:50 np0005464214 affectionate_mccarthy[87857]:                "ceph.cephx_lockbox_secret": "",
Oct  1 09:09:50 np0005464214 affectionate_mccarthy[87857]:                "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 09:09:50 np0005464214 affectionate_mccarthy[87857]:                "ceph.cluster_name": "ceph",
Oct  1 09:09:50 np0005464214 affectionate_mccarthy[87857]:                "ceph.crush_device_class": "",
Oct  1 09:09:50 np0005464214 affectionate_mccarthy[87857]:                "ceph.encrypted": "0",
Oct  1 09:09:50 np0005464214 affectionate_mccarthy[87857]:                "ceph.osd_fsid": "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9",
Oct  1 09:09:50 np0005464214 affectionate_mccarthy[87857]:                "ceph.osd_id": "2",
Oct  1 09:09:50 np0005464214 affectionate_mccarthy[87857]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 09:09:50 np0005464214 affectionate_mccarthy[87857]:                "ceph.type": "block",
Oct  1 09:09:50 np0005464214 affectionate_mccarthy[87857]:                "ceph.vdo": "0"
Oct  1 09:09:50 np0005464214 affectionate_mccarthy[87857]:            },
Oct  1 09:09:50 np0005464214 affectionate_mccarthy[87857]:            "type": "block",
Oct  1 09:09:50 np0005464214 affectionate_mccarthy[87857]:            "vg_name": "ceph_vg2"
Oct  1 09:09:50 np0005464214 affectionate_mccarthy[87857]:        }
Oct  1 09:09:50 np0005464214 affectionate_mccarthy[87857]:    ]
Oct  1 09:09:50 np0005464214 affectionate_mccarthy[87857]: }
Oct  1 09:09:50 np0005464214 systemd[1]: libpod-3e22003c5ca29893d72ddd3845c63ee08e3d5c1e3c897b83f200e6305bb061de.scope: Deactivated successfully.
Oct  1 09:09:50 np0005464214 podman[87866]: 2025-10-01 13:09:50.078372188 +0000 UTC m=+0.020545516 container died 3e22003c5ca29893d72ddd3845c63ee08e3d5c1e3c897b83f200e6305bb061de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_mccarthy, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Oct  1 09:09:50 np0005464214 systemd[1]: var-lib-containers-storage-overlay-867dbb708c40a32eb50f9a7d9bd67f8a9b37de6f08edef0784d07a0624977579-merged.mount: Deactivated successfully.
Oct  1 09:09:50 np0005464214 podman[87866]: 2025-10-01 13:09:50.133679998 +0000 UTC m=+0.075853306 container remove 3e22003c5ca29893d72ddd3845c63ee08e3d5c1e3c897b83f200e6305bb061de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_mccarthy, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 09:09:50 np0005464214 systemd[1]: libpod-conmon-3e22003c5ca29893d72ddd3845c63ee08e3d5c1e3c897b83f200e6305bb061de.scope: Deactivated successfully.
Oct  1 09:09:50 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "osd.0"} v 0) v1
Oct  1 09:09:50 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
Oct  1 09:09:50 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  1 09:09:50 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  1 09:09:50 np0005464214 ceph-mgr[75103]: [cephadm INFO cephadm.serve] Deploying daemon osd.0 on compute-0
Oct  1 09:09:50 np0005464214 ceph-mgr[75103]: log_channel(cephadm) log [INF] : Deploying daemon osd.0 on compute-0
Oct  1 09:09:50 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e6 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:09:50 np0005464214 podman[88022]: 2025-10-01 13:09:50.80017817 +0000 UTC m=+0.039478147 container create ad81535b733d071e78bdf7f47c8283ae114fb5ce2439a48e9db7367c98146434 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_banzai, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct  1 09:09:50 np0005464214 systemd[1]: Started libpod-conmon-ad81535b733d071e78bdf7f47c8283ae114fb5ce2439a48e9db7367c98146434.scope.
Oct  1 09:09:50 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:09:50 np0005464214 podman[88022]: 2025-10-01 13:09:50.856395304 +0000 UTC m=+0.095695301 container init ad81535b733d071e78bdf7f47c8283ae114fb5ce2439a48e9db7367c98146434 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_banzai, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 09:09:50 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
Oct  1 09:09:50 np0005464214 ceph-mon[74802]: Deploying daemon osd.0 on compute-0
Oct  1 09:09:50 np0005464214 podman[88022]: 2025-10-01 13:09:50.863608957 +0000 UTC m=+0.102908924 container start ad81535b733d071e78bdf7f47c8283ae114fb5ce2439a48e9db7367c98146434 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_banzai, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Oct  1 09:09:50 np0005464214 podman[88022]: 2025-10-01 13:09:50.867213958 +0000 UTC m=+0.106513925 container attach ad81535b733d071e78bdf7f47c8283ae114fb5ce2439a48e9db7367c98146434 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_banzai, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 09:09:50 np0005464214 sharp_banzai[88038]: 167 167
Oct  1 09:09:50 np0005464214 systemd[1]: libpod-ad81535b733d071e78bdf7f47c8283ae114fb5ce2439a48e9db7367c98146434.scope: Deactivated successfully.
Oct  1 09:09:50 np0005464214 podman[88022]: 2025-10-01 13:09:50.868412291 +0000 UTC m=+0.107712288 container died ad81535b733d071e78bdf7f47c8283ae114fb5ce2439a48e9db7367c98146434 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_banzai, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct  1 09:09:50 np0005464214 podman[88022]: 2025-10-01 13:09:50.782637528 +0000 UTC m=+0.021937535 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 09:09:50 np0005464214 systemd[1]: var-lib-containers-storage-overlay-70362e1f5e6ee1f42638f140bce7cdc40cfffacce546b5ae8c2b67dce3069f24-merged.mount: Deactivated successfully.
Oct  1 09:09:50 np0005464214 podman[88022]: 2025-10-01 13:09:50.908444673 +0000 UTC m=+0.147744640 container remove ad81535b733d071e78bdf7f47c8283ae114fb5ce2439a48e9db7367c98146434 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_banzai, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  1 09:09:50 np0005464214 systemd[1]: libpod-conmon-ad81535b733d071e78bdf7f47c8283ae114fb5ce2439a48e9db7367c98146434.scope: Deactivated successfully.
Oct  1 09:09:51 np0005464214 podman[88072]: 2025-10-01 13:09:51.161566504 +0000 UTC m=+0.050339812 container create a6eb26ac3159156543170ac332121212a4d8982deed95dcd0bc85dd5cbb190d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-osd-0-activate-test, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 09:09:51 np0005464214 systemd[1]: Started libpod-conmon-a6eb26ac3159156543170ac332121212a4d8982deed95dcd0bc85dd5cbb190d8.scope.
Oct  1 09:09:51 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:09:51 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6da61eb0d72957d1a5f40cb978719b94ec7635fc31652c4a009a1d21ac010d6f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 09:09:51 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6da61eb0d72957d1a5f40cb978719b94ec7635fc31652c4a009a1d21ac010d6f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 09:09:51 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6da61eb0d72957d1a5f40cb978719b94ec7635fc31652c4a009a1d21ac010d6f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 09:09:51 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6da61eb0d72957d1a5f40cb978719b94ec7635fc31652c4a009a1d21ac010d6f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 09:09:51 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6da61eb0d72957d1a5f40cb978719b94ec7635fc31652c4a009a1d21ac010d6f/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Oct  1 09:09:51 np0005464214 podman[88072]: 2025-10-01 13:09:51.23781368 +0000 UTC m=+0.126587008 container init a6eb26ac3159156543170ac332121212a4d8982deed95dcd0bc85dd5cbb190d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-osd-0-activate-test, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 09:09:51 np0005464214 podman[88072]: 2025-10-01 13:09:51.147074318 +0000 UTC m=+0.035847646 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 09:09:51 np0005464214 podman[88072]: 2025-10-01 13:09:51.243849809 +0000 UTC m=+0.132623117 container start a6eb26ac3159156543170ac332121212a4d8982deed95dcd0bc85dd5cbb190d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-osd-0-activate-test, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef)
Oct  1 09:09:51 np0005464214 podman[88072]: 2025-10-01 13:09:51.247120001 +0000 UTC m=+0.135893339 container attach a6eb26ac3159156543170ac332121212a4d8982deed95dcd0bc85dd5cbb190d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-osd-0-activate-test, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct  1 09:09:51 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v29: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct  1 09:09:51 np0005464214 ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-osd-0-activate-test[88088]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_UUID]
Oct  1 09:09:51 np0005464214 ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-osd-0-activate-test[88088]:                            [--no-systemd] [--no-tmpfs]
Oct  1 09:09:51 np0005464214 ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-osd-0-activate-test[88088]: ceph-volume activate: error: unrecognized arguments: --bad-option
Oct  1 09:09:51 np0005464214 systemd[1]: libpod-a6eb26ac3159156543170ac332121212a4d8982deed95dcd0bc85dd5cbb190d8.scope: Deactivated successfully.
Oct  1 09:09:51 np0005464214 podman[88072]: 2025-10-01 13:09:51.871950226 +0000 UTC m=+0.760723614 container died a6eb26ac3159156543170ac332121212a4d8982deed95dcd0bc85dd5cbb190d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-osd-0-activate-test, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct  1 09:09:51 np0005464214 systemd[1]: var-lib-containers-storage-overlay-6da61eb0d72957d1a5f40cb978719b94ec7635fc31652c4a009a1d21ac010d6f-merged.mount: Deactivated successfully.
Oct  1 09:09:51 np0005464214 podman[88072]: 2025-10-01 13:09:51.93097889 +0000 UTC m=+0.819752198 container remove a6eb26ac3159156543170ac332121212a4d8982deed95dcd0bc85dd5cbb190d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-osd-0-activate-test, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True)
Oct  1 09:09:51 np0005464214 systemd[1]: libpod-conmon-a6eb26ac3159156543170ac332121212a4d8982deed95dcd0bc85dd5cbb190d8.scope: Deactivated successfully.
Oct  1 09:09:52 np0005464214 systemd[1]: Reloading.
Oct  1 09:09:52 np0005464214 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  1 09:09:52 np0005464214 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  1 09:09:52 np0005464214 systemd[1]: Reloading.
Oct  1 09:09:52 np0005464214 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  1 09:09:52 np0005464214 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  1 09:09:52 np0005464214 systemd[1]: Starting Ceph osd.0 for eb4b6ead-01d1-53b3-a52a-47dcc600555f...
Oct  1 09:09:52 np0005464214 podman[88242]: 2025-10-01 13:09:52.911153 +0000 UTC m=+0.047093801 container create 198e933602157243fd46a56ffc3160bdac505fc345b66a202338c85b29ae4bf8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-osd-0-activate, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 09:09:52 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:09:52 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e7009417f2d1926e6265fbcf87850b21f3ab2a47bc4591a5c33434a107f77880/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 09:09:52 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e7009417f2d1926e6265fbcf87850b21f3ab2a47bc4591a5c33434a107f77880/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 09:09:52 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e7009417f2d1926e6265fbcf87850b21f3ab2a47bc4591a5c33434a107f77880/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 09:09:52 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e7009417f2d1926e6265fbcf87850b21f3ab2a47bc4591a5c33434a107f77880/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 09:09:52 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e7009417f2d1926e6265fbcf87850b21f3ab2a47bc4591a5c33434a107f77880/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Oct  1 09:09:52 np0005464214 podman[88242]: 2025-10-01 13:09:52.885852471 +0000 UTC m=+0.021793252 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 09:09:52 np0005464214 podman[88242]: 2025-10-01 13:09:52.998520307 +0000 UTC m=+0.134461148 container init 198e933602157243fd46a56ffc3160bdac505fc345b66a202338c85b29ae4bf8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-osd-0-activate, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 09:09:53 np0005464214 podman[88242]: 2025-10-01 13:09:53.006986574 +0000 UTC m=+0.142927335 container start 198e933602157243fd46a56ffc3160bdac505fc345b66a202338c85b29ae4bf8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-osd-0-activate, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 09:09:53 np0005464214 podman[88242]: 2025-10-01 13:09:53.010172424 +0000 UTC m=+0.146113185 container attach 198e933602157243fd46a56ffc3160bdac505fc345b66a202338c85b29ae4bf8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-osd-0-activate, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct  1 09:09:53 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v30: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct  1 09:09:53 np0005464214 ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-osd-0-activate[88257]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Oct  1 09:09:53 np0005464214 bash[88242]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Oct  1 09:09:53 np0005464214 ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-osd-0-activate[88257]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-0 --no-mon-config --dev /dev/mapper/ceph_vg0-ceph_lv0
Oct  1 09:09:53 np0005464214 bash[88242]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-0 --no-mon-config --dev /dev/mapper/ceph_vg0-ceph_lv0
Oct  1 09:09:54 np0005464214 ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-osd-0-activate[88257]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg0-ceph_lv0
Oct  1 09:09:54 np0005464214 bash[88242]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg0-ceph_lv0
Oct  1 09:09:54 np0005464214 ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-osd-0-activate[88257]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Oct  1 09:09:54 np0005464214 bash[88242]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Oct  1 09:09:54 np0005464214 ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-osd-0-activate[88257]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg0-ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Oct  1 09:09:54 np0005464214 bash[88242]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg0-ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Oct  1 09:09:54 np0005464214 ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-osd-0-activate[88257]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Oct  1 09:09:54 np0005464214 bash[88242]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Oct  1 09:09:54 np0005464214 ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-osd-0-activate[88257]: --> ceph-volume raw activate successful for osd ID: 0
Oct  1 09:09:54 np0005464214 bash[88242]: --> ceph-volume raw activate successful for osd ID: 0
Oct  1 09:09:54 np0005464214 systemd[1]: libpod-198e933602157243fd46a56ffc3160bdac505fc345b66a202338c85b29ae4bf8.scope: Deactivated successfully.
Oct  1 09:09:54 np0005464214 podman[88242]: 2025-10-01 13:09:54.080770267 +0000 UTC m=+1.216711028 container died 198e933602157243fd46a56ffc3160bdac505fc345b66a202338c85b29ae4bf8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-osd-0-activate, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Oct  1 09:09:54 np0005464214 systemd[1]: libpod-198e933602157243fd46a56ffc3160bdac505fc345b66a202338c85b29ae4bf8.scope: Consumed 1.081s CPU time.
Oct  1 09:09:54 np0005464214 systemd[1]: var-lib-containers-storage-overlay-e7009417f2d1926e6265fbcf87850b21f3ab2a47bc4591a5c33434a107f77880-merged.mount: Deactivated successfully.
Oct  1 09:09:54 np0005464214 podman[88242]: 2025-10-01 13:09:54.131287013 +0000 UTC m=+1.267227774 container remove 198e933602157243fd46a56ffc3160bdac505fc345b66a202338c85b29ae4bf8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-osd-0-activate, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct  1 09:09:54 np0005464214 podman[88436]: 2025-10-01 13:09:54.340570686 +0000 UTC m=+0.042331978 container create ae2fd024bf44a1d4ea40453594604d2abf1ab3318d6a6ce26a91042adf10e2e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-osd-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 09:09:54 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca9245638004ebefcbc9b9c4a80430a81ac617a216acae561c4c232e147ee036/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 09:09:54 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca9245638004ebefcbc9b9c4a80430a81ac617a216acae561c4c232e147ee036/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 09:09:54 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca9245638004ebefcbc9b9c4a80430a81ac617a216acae561c4c232e147ee036/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 09:09:54 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca9245638004ebefcbc9b9c4a80430a81ac617a216acae561c4c232e147ee036/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 09:09:54 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca9245638004ebefcbc9b9c4a80430a81ac617a216acae561c4c232e147ee036/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Oct  1 09:09:54 np0005464214 podman[88436]: 2025-10-01 13:09:54.397355007 +0000 UTC m=+0.099116339 container init ae2fd024bf44a1d4ea40453594604d2abf1ab3318d6a6ce26a91042adf10e2e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-osd-0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct  1 09:09:54 np0005464214 podman[88436]: 2025-10-01 13:09:54.410360021 +0000 UTC m=+0.112121323 container start ae2fd024bf44a1d4ea40453594604d2abf1ab3318d6a6ce26a91042adf10e2e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-osd-0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Oct  1 09:09:54 np0005464214 bash[88436]: ae2fd024bf44a1d4ea40453594604d2abf1ab3318d6a6ce26a91042adf10e2e7
Oct  1 09:09:54 np0005464214 podman[88436]: 2025-10-01 13:09:54.32576409 +0000 UTC m=+0.027525382 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 09:09:54 np0005464214 systemd[1]: Started Ceph osd.0 for eb4b6ead-01d1-53b3-a52a-47dcc600555f.
Oct  1 09:09:54 np0005464214 ceph-osd[88455]: set uid:gid to 167:167 (ceph:ceph)
Oct  1 09:09:54 np0005464214 ceph-osd[88455]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-osd, pid 2
Oct  1 09:09:54 np0005464214 ceph-osd[88455]: pidfile_write: ignore empty --pid-file
Oct  1 09:09:54 np0005464214 ceph-osd[88455]: bdev(0x55b65506f800 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Oct  1 09:09:54 np0005464214 ceph-osd[88455]: bdev(0x55b65506f800 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Oct  1 09:09:54 np0005464214 ceph-osd[88455]: bdev(0x55b65506f800 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct  1 09:09:54 np0005464214 ceph-osd[88455]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Oct  1 09:09:54 np0005464214 ceph-osd[88455]: bdev(0x55b655ea7800 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Oct  1 09:09:54 np0005464214 ceph-osd[88455]: bdev(0x55b655ea7800 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Oct  1 09:09:54 np0005464214 ceph-osd[88455]: bdev(0x55b655ea7800 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct  1 09:09:54 np0005464214 ceph-osd[88455]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 20 GiB
Oct  1 09:09:54 np0005464214 ceph-osd[88455]: bdev(0x55b655ea7800 /var/lib/ceph/osd/ceph-0/block) close
Oct  1 09:09:54 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  1 09:09:54 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:09:54 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  1 09:09:54 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:09:54 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "osd.1"} v 0) v1
Oct  1 09:09:54 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
Oct  1 09:09:54 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  1 09:09:54 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  1 09:09:54 np0005464214 ceph-mgr[75103]: [cephadm INFO cephadm.serve] Deploying daemon osd.1 on compute-0
Oct  1 09:09:54 np0005464214 ceph-mgr[75103]: log_channel(cephadm) log [INF] : Deploying daemon osd.1 on compute-0
Oct  1 09:09:54 np0005464214 ceph-osd[88455]: bdev(0x55b65506f800 /var/lib/ceph/osd/ceph-0/block) close
Oct  1 09:09:54 np0005464214 ceph-osd[88455]: starting osd.0 osd_data /var/lib/ceph/osd/ceph-0 /var/lib/ceph/osd/ceph-0/journal
Oct  1 09:09:54 np0005464214 ceph-osd[88455]: load: jerasure load: lrc 
Oct  1 09:09:54 np0005464214 ceph-osd[88455]: bdev(0x55b655f28c00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Oct  1 09:09:54 np0005464214 ceph-osd[88455]: bdev(0x55b655f28c00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Oct  1 09:09:54 np0005464214 ceph-osd[88455]: bdev(0x55b655f28c00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct  1 09:09:54 np0005464214 ceph-osd[88455]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Oct  1 09:09:54 np0005464214 ceph-osd[88455]: bdev(0x55b655f28c00 /var/lib/ceph/osd/ceph-0/block) close
Oct  1 09:09:55 np0005464214 podman[88618]: 2025-10-01 13:09:55.131215176 +0000 UTC m=+0.060967490 container create 96e41487ec158ed5a2d6eed30602db74eaab0f840eb8fc803990e0e4475a3f12 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_lichterman, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 09:09:55 np0005464214 systemd[1]: Started libpod-conmon-96e41487ec158ed5a2d6eed30602db74eaab0f840eb8fc803990e0e4475a3f12.scope.
Oct  1 09:09:55 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:09:55 np0005464214 podman[88618]: 2025-10-01 13:09:55.098052406 +0000 UTC m=+0.027804800 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 09:09:55 np0005464214 podman[88618]: 2025-10-01 13:09:55.198153021 +0000 UTC m=+0.127905355 container init 96e41487ec158ed5a2d6eed30602db74eaab0f840eb8fc803990e0e4475a3f12 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_lichterman, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct  1 09:09:55 np0005464214 podman[88618]: 2025-10-01 13:09:55.209095338 +0000 UTC m=+0.138847662 container start 96e41487ec158ed5a2d6eed30602db74eaab0f840eb8fc803990e0e4475a3f12 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_lichterman, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 09:09:55 np0005464214 podman[88618]: 2025-10-01 13:09:55.213062398 +0000 UTC m=+0.142814742 container attach 96e41487ec158ed5a2d6eed30602db74eaab0f840eb8fc803990e0e4475a3f12 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_lichterman, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 09:09:55 np0005464214 systemd[1]: libpod-96e41487ec158ed5a2d6eed30602db74eaab0f840eb8fc803990e0e4475a3f12.scope: Deactivated successfully.
Oct  1 09:09:55 np0005464214 intelligent_lichterman[88634]: 167 167
Oct  1 09:09:55 np0005464214 conmon[88634]: conmon 96e41487ec158ed5a2d6 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-96e41487ec158ed5a2d6eed30602db74eaab0f840eb8fc803990e0e4475a3f12.scope/container/memory.events
Oct  1 09:09:55 np0005464214 podman[88618]: 2025-10-01 13:09:55.215560398 +0000 UTC m=+0.145312722 container died 96e41487ec158ed5a2d6eed30602db74eaab0f840eb8fc803990e0e4475a3f12 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_lichterman, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True)
Oct  1 09:09:55 np0005464214 systemd[1]: var-lib-containers-storage-overlay-99710fb479b037c90b340b1404799d9010cba1c7f0e0c144690a6d9ed272ffa1-merged.mount: Deactivated successfully.
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: bdev(0x55b655f28c00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: bdev(0x55b655f28c00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Oct  1 09:09:55 np0005464214 podman[88618]: 2025-10-01 13:09:55.255614541 +0000 UTC m=+0.185366895 container remove 96e41487ec158ed5a2d6eed30602db74eaab0f840eb8fc803990e0e4475a3f12 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_lichterman, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: bdev(0x55b655f28c00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: bdev(0x55b655f28c00 /var/lib/ceph/osd/ceph-0/block) close
Oct  1 09:09:55 np0005464214 systemd[1]: libpod-conmon-96e41487ec158ed5a2d6eed30602db74eaab0f840eb8fc803990e0e4475a3f12.scope: Deactivated successfully.
Oct  1 09:09:55 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e6 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:09:55 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:09:55 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:09:55 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
Oct  1 09:09:55 np0005464214 ceph-mon[74802]: Deploying daemon osd.1 on compute-0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: osd.0:0.OSDShard using op scheduler mclock_scheduler, cutoff=196
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: bdev(0x55b655f28c00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: bdev(0x55b655f28c00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: bdev(0x55b655f28c00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: bdev(0x55b655f29400 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: bdev(0x55b655f29400 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: bdev(0x55b655f29400 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 20 GiB
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: bluefs mount
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: bluefs mount shared_bdev_used = 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: bluestore(/var/lib/ceph/osd/ceph-0) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: RocksDB version: 7.9.2
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Git sha 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Compile date 2025-05-06 23:30:25
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: DB SUMMARY
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: DB Session ID:  YR1W053FNRY3BI19KNCD
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: CURRENT file:  CURRENT
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: IDENTITY file:  IDENTITY
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                         Options.error_if_exists: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                       Options.create_if_missing: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                         Options.paranoid_checks: 1
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:             Options.flush_verify_memtable_count: 1
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                                     Options.env: 0x55b655ef9c70
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                                      Options.fs: LegacyFileSystem
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                                Options.info_log: 0x55b6550f68a0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                Options.max_file_opening_threads: 16
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                              Options.statistics: (nil)
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                               Options.use_fsync: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                       Options.max_log_file_size: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                   Options.log_file_time_to_roll: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                       Options.keep_log_file_num: 1000
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                    Options.recycle_log_file_num: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                         Options.allow_fallocate: 1
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                        Options.allow_mmap_reads: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                       Options.allow_mmap_writes: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                        Options.use_direct_reads: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:          Options.create_missing_column_families: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                              Options.db_log_dir: 
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                                 Options.wal_dir: db.wal
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                Options.table_cache_numshardbits: 6
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                         Options.WAL_ttl_seconds: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                       Options.WAL_size_limit_MB: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:             Options.manifest_preallocation_size: 4194304
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                     Options.is_fd_close_on_exec: 1
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                   Options.advise_random_on_open: 1
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                    Options.db_write_buffer_size: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                    Options.write_buffer_manager: 0x55b656002460
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:         Options.access_hint_on_compaction_start: 1
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                      Options.use_adaptive_mutex: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                            Options.rate_limiter: (nil)
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                       Options.wal_recovery_mode: 2
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                  Options.enable_thread_tracking: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                  Options.enable_pipelined_write: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                  Options.unordered_write: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:             Options.write_thread_max_yield_usec: 100
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                               Options.row_cache: None
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                              Options.wal_filter: None
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:             Options.avoid_flush_during_recovery: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:             Options.allow_ingest_behind: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:             Options.two_write_queues: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:             Options.manual_wal_flush: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:             Options.wal_compression: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:             Options.atomic_flush: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                 Options.persist_stats_to_disk: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                 Options.write_dbid_to_manifest: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                 Options.log_readahead_size: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                 Options.best_efforts_recovery: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:             Options.allow_data_in_errors: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:             Options.db_host_id: __hostname__
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:             Options.enforce_single_del_contracts: true
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:             Options.max_background_jobs: 4
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:             Options.max_background_compactions: -1
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:             Options.max_subcompactions: 1
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:           Options.writable_file_max_buffer_size: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:             Options.delayed_write_rate : 16777216
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:             Options.max_total_wal_size: 1073741824
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                   Options.stats_dump_period_sec: 600
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                 Options.stats_persist_period_sec: 600
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                          Options.max_open_files: -1
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                          Options.bytes_per_sync: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                      Options.wal_bytes_per_sync: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                   Options.strict_bytes_per_sync: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:       Options.compaction_readahead_size: 2097152
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                  Options.max_background_flushes: -1
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Compression algorithms supported:
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: #011kZSTD supported: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: #011kXpressCompression supported: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: #011kBZip2Compression supported: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: #011kZSTDNotFinalCompression supported: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: #011kLZ4Compression supported: 1
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: #011kZlibCompression supported: 1
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: #011kLZ4HCCompression supported: 1
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: #011kSnappyCompression supported: 1
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Fast CRC32 supported: Supported on x86
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: DMutex implementation: pthread_mutex_t
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:        Options.compaction_filter: None
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:        Options.compaction_filter_factory: None
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:  Options.sst_partitioner_factory: None
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b6550f62c0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55b6550e31f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:        Options.write_buffer_size: 16777216
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:  Options.max_write_buffer_number: 64
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:          Options.compression: LZ4
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:       Options.prefix_extractor: nullptr
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:             Options.num_levels: 7
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                  Options.compression_opts.level: 32767
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:               Options.compression_opts.strategy: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                  Options.compression_opts.enabled: false
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  1 09:09:55 np0005464214 podman[88669]: 2025-10-01 13:09:55.54790245 +0000 UTC m=+0.045003362 container create db4fb75c0490e1a7a116634512fb87775a9b06e9ecc7458f7338be9b1f6683b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-osd-1-activate-test, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                        Options.arena_block_size: 1048576
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                Options.disable_auto_compactions: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                   Options.inplace_update_support: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                           Options.bloom_locality: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                    Options.max_successive_merges: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                Options.paranoid_file_checks: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                Options.force_consistency_checks: 1
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                Options.report_bg_io_stats: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                               Options.ttl: 2592000
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                       Options.enable_blob_files: false
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                           Options.min_blob_size: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                          Options.blob_file_size: 268435456
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                Options.blob_file_starting_level: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:           Options.merge_operator: None
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:        Options.compaction_filter: None
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:        Options.compaction_filter_factory: None
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:  Options.sst_partitioner_factory: None
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b6550f62c0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55b6550e31f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:        Options.write_buffer_size: 16777216
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:  Options.max_write_buffer_number: 64
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:          Options.compression: LZ4
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:       Options.prefix_extractor: nullptr
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:             Options.num_levels: 7
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                  Options.compression_opts.level: 32767
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:               Options.compression_opts.strategy: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                  Options.compression_opts.enabled: false
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                        Options.arena_block_size: 1048576
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                Options.disable_auto_compactions: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                   Options.inplace_update_support: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                           Options.bloom_locality: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                    Options.max_successive_merges: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                Options.paranoid_file_checks: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                Options.force_consistency_checks: 1
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                Options.report_bg_io_stats: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                               Options.ttl: 2592000
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                       Options.enable_blob_files: false
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                           Options.min_blob_size: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                          Options.blob_file_size: 268435456
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                Options.blob_file_starting_level: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:           Options.merge_operator: None
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:        Options.compaction_filter: None
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:        Options.compaction_filter_factory: None
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:  Options.sst_partitioner_factory: None
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b6550f62c0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55b6550e31f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:        Options.write_buffer_size: 16777216
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:  Options.max_write_buffer_number: 64
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:          Options.compression: LZ4
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:       Options.prefix_extractor: nullptr
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:             Options.num_levels: 7
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                  Options.compression_opts.level: 32767
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:               Options.compression_opts.strategy: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                  Options.compression_opts.enabled: false
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                        Options.arena_block_size: 1048576
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                Options.disable_auto_compactions: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                   Options.inplace_update_support: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                           Options.bloom_locality: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                    Options.max_successive_merges: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                Options.paranoid_file_checks: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                Options.force_consistency_checks: 1
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                Options.report_bg_io_stats: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                               Options.ttl: 2592000
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                       Options.enable_blob_files: false
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                           Options.min_blob_size: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                          Options.blob_file_size: 268435456
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                Options.blob_file_starting_level: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:           Options.merge_operator: None
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:        Options.compaction_filter: None
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:        Options.compaction_filter_factory: None
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:  Options.sst_partitioner_factory: None
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b6550f62c0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55b6550e31f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:        Options.write_buffer_size: 16777216
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:  Options.max_write_buffer_number: 64
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:          Options.compression: LZ4
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:       Options.prefix_extractor: nullptr
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:             Options.num_levels: 7
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                  Options.compression_opts.level: 32767
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:               Options.compression_opts.strategy: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                  Options.compression_opts.enabled: false
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                        Options.arena_block_size: 1048576
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                Options.disable_auto_compactions: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                   Options.inplace_update_support: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                           Options.bloom_locality: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                    Options.max_successive_merges: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                Options.paranoid_file_checks: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                Options.force_consistency_checks: 1
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                Options.report_bg_io_stats: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                               Options.ttl: 2592000
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                       Options.enable_blob_files: false
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                           Options.min_blob_size: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                          Options.blob_file_size: 268435456
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                Options.blob_file_starting_level: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:           Options.merge_operator: None
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:        Options.compaction_filter: None
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:        Options.compaction_filter_factory: None
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:  Options.sst_partitioner_factory: None
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b6550f62c0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55b6550e31f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:        Options.write_buffer_size: 16777216
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:  Options.max_write_buffer_number: 64
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:          Options.compression: LZ4
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:       Options.prefix_extractor: nullptr
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:             Options.num_levels: 7
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                  Options.compression_opts.level: 32767
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:               Options.compression_opts.strategy: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                  Options.compression_opts.enabled: false
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                        Options.arena_block_size: 1048576
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                Options.disable_auto_compactions: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                   Options.inplace_update_support: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                           Options.bloom_locality: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                    Options.max_successive_merges: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                Options.paranoid_file_checks: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                Options.force_consistency_checks: 1
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                Options.report_bg_io_stats: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                               Options.ttl: 2592000
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                       Options.enable_blob_files: false
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                           Options.min_blob_size: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                          Options.blob_file_size: 268435456
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                Options.blob_file_starting_level: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:           Options.merge_operator: None
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:        Options.compaction_filter: None
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:        Options.compaction_filter_factory: None
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:  Options.sst_partitioner_factory: None
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b6550f62c0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55b6550e31f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:        Options.write_buffer_size: 16777216
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:  Options.max_write_buffer_number: 64
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:          Options.compression: LZ4
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:       Options.prefix_extractor: nullptr
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:             Options.num_levels: 7
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                  Options.compression_opts.level: 32767
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:               Options.compression_opts.strategy: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                  Options.compression_opts.enabled: false
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                        Options.arena_block_size: 1048576
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                Options.disable_auto_compactions: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                   Options.inplace_update_support: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                           Options.bloom_locality: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                    Options.max_successive_merges: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                Options.paranoid_file_checks: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                Options.force_consistency_checks: 1
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                Options.report_bg_io_stats: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                               Options.ttl: 2592000
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                       Options.enable_blob_files: false
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                           Options.min_blob_size: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                          Options.blob_file_size: 268435456
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                Options.blob_file_starting_level: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:           Options.merge_operator: None
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:        Options.compaction_filter: None
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:        Options.compaction_filter_factory: None
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:  Options.sst_partitioner_factory: None
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b6550f62c0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55b6550e31f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:        Options.write_buffer_size: 16777216
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:  Options.max_write_buffer_number: 64
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:          Options.compression: LZ4
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:       Options.prefix_extractor: nullptr
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:             Options.num_levels: 7
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                  Options.compression_opts.level: 32767
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:               Options.compression_opts.strategy: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                  Options.compression_opts.enabled: false
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                        Options.arena_block_size: 1048576
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                Options.disable_auto_compactions: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                   Options.inplace_update_support: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                           Options.bloom_locality: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                    Options.max_successive_merges: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                Options.paranoid_file_checks: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                Options.force_consistency_checks: 1
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                Options.report_bg_io_stats: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                               Options.ttl: 2592000
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                       Options.enable_blob_files: false
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                           Options.min_blob_size: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                          Options.blob_file_size: 268435456
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                Options.blob_file_starting_level: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:           Options.merge_operator: None
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:        Options.compaction_filter: None
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:        Options.compaction_filter_factory: None
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:  Options.sst_partitioner_factory: None
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b6550f6240)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55b6550e3090#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:        Options.write_buffer_size: 16777216
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:  Options.max_write_buffer_number: 64
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:          Options.compression: LZ4
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:       Options.prefix_extractor: nullptr
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:             Options.num_levels: 7
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                  Options.compression_opts.level: 32767
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:               Options.compression_opts.strategy: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                  Options.compression_opts.enabled: false
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                        Options.arena_block_size: 1048576
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                Options.disable_auto_compactions: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                   Options.inplace_update_support: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                           Options.bloom_locality: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                    Options.max_successive_merges: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                Options.paranoid_file_checks: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                Options.force_consistency_checks: 1
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                Options.report_bg_io_stats: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                               Options.ttl: 2592000
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                       Options.enable_blob_files: false
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                           Options.min_blob_size: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                          Options.blob_file_size: 268435456
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                Options.blob_file_starting_level: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:           Options.merge_operator: None
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:        Options.compaction_filter: None
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:        Options.compaction_filter_factory: None
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:  Options.sst_partitioner_factory: None
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b6550f6240)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55b6550e3090#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:        Options.write_buffer_size: 16777216
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:  Options.max_write_buffer_number: 64
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:          Options.compression: LZ4
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:       Options.prefix_extractor: nullptr
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:             Options.num_levels: 7
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                  Options.compression_opts.level: 32767
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:               Options.compression_opts.strategy: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                  Options.compression_opts.enabled: false
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                        Options.arena_block_size: 1048576
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                Options.disable_auto_compactions: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                   Options.inplace_update_support: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                           Options.bloom_locality: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                    Options.max_successive_merges: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                Options.paranoid_file_checks: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                Options.force_consistency_checks: 1
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                Options.report_bg_io_stats: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                               Options.ttl: 2592000
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                       Options.enable_blob_files: false
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                           Options.min_blob_size: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                          Options.blob_file_size: 268435456
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                Options.blob_file_starting_level: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:           Options.merge_operator: None
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:        Options.compaction_filter: None
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:        Options.compaction_filter_factory: None
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:  Options.sst_partitioner_factory: None
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b6550f6240)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55b6550e3090#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:        Options.write_buffer_size: 16777216
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:  Options.max_write_buffer_number: 64
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:          Options.compression: LZ4
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:       Options.prefix_extractor: nullptr
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:             Options.num_levels: 7
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                  Options.compression_opts.level: 32767
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:               Options.compression_opts.strategy: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                  Options.compression_opts.enabled: false
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                        Options.arena_block_size: 1048576
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                Options.disable_auto_compactions: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                   Options.inplace_update_support: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                           Options.bloom_locality: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                    Options.max_successive_merges: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                Options.paranoid_file_checks: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                Options.force_consistency_checks: 1
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                Options.report_bg_io_stats: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                               Options.ttl: 2592000
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                       Options.enable_blob_files: false
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                           Options.min_blob_size: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                          Options.blob_file_size: 268435456
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                Options.blob_file_starting_level: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 4059e439-ad38-467a-9aae-938058dd7e0b
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759324195553643, "job": 1, "event": "recovery_started", "wal_files": [31]}
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759324195553842, "job": 1, "event": "recovery_finished"}
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: bluestore(/var/lib/ceph/osd/ceph-0) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta old nid_max 1025
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta old blobid_max 10240
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta ondisk_format 4 compat_ondisk_format 3
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta min_alloc_size 0x1000
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: freelist init
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: freelist _read_cfg
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: bluestore(/var/lib/ceph/osd/ceph-0) _init_alloc loaded 20 GiB in 2 extents, allocator type hybrid, capacity 0x4ffc00000, block size 0x1000, free 0x4ffbfd000, fragmentation 1.9e-07
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: bluefs umount
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: bdev(0x55b655f29400 /var/lib/ceph/osd/ceph-0/block) close
Oct  1 09:09:55 np0005464214 systemd[1]: Started libpod-conmon-db4fb75c0490e1a7a116634512fb87775a9b06e9ecc7458f7338be9b1f6683b4.scope.
Oct  1 09:09:55 np0005464214 podman[88669]: 2025-10-01 13:09:55.526093618 +0000 UTC m=+0.023194530 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 09:09:55 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:09:55 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78cdc6fee75bb72f77c548821f4fad4ec09138e28ad90dc09644606c525f9c65/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 09:09:55 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78cdc6fee75bb72f77c548821f4fad4ec09138e28ad90dc09644606c525f9c65/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 09:09:55 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78cdc6fee75bb72f77c548821f4fad4ec09138e28ad90dc09644606c525f9c65/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 09:09:55 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78cdc6fee75bb72f77c548821f4fad4ec09138e28ad90dc09644606c525f9c65/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 09:09:55 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78cdc6fee75bb72f77c548821f4fad4ec09138e28ad90dc09644606c525f9c65/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Oct  1 09:09:55 np0005464214 podman[88669]: 2025-10-01 13:09:55.65713412 +0000 UTC m=+0.154235032 container init db4fb75c0490e1a7a116634512fb87775a9b06e9ecc7458f7338be9b1f6683b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-osd-1-activate-test, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 09:09:55 np0005464214 podman[88669]: 2025-10-01 13:09:55.67427378 +0000 UTC m=+0.171374672 container start db4fb75c0490e1a7a116634512fb87775a9b06e9ecc7458f7338be9b1f6683b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-osd-1-activate-test, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 09:09:55 np0005464214 podman[88669]: 2025-10-01 13:09:55.678429026 +0000 UTC m=+0.175529908 container attach db4fb75c0490e1a7a116634512fb87775a9b06e9ecc7458f7338be9b1f6683b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-osd-1-activate-test, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 09:09:55 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v31: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: bdev(0x55b655f29400 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: bdev(0x55b655f29400 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: bdev(0x55b655f29400 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 20 GiB
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: bluefs mount
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: bluefs mount shared_bdev_used = 4718592
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: bluestore(/var/lib/ceph/osd/ceph-0) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: RocksDB version: 7.9.2
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Git sha 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Compile date 2025-05-06 23:30:25
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: DB SUMMARY
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: DB Session ID:  YR1W053FNRY3BI19KNCC
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: CURRENT file:  CURRENT
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: IDENTITY file:  IDENTITY
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                         Options.error_if_exists: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                       Options.create_if_missing: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                         Options.paranoid_checks: 1
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:             Options.flush_verify_memtable_count: 1
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                                     Options.env: 0x55b6560aa310
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                                      Options.fs: LegacyFileSystem
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                                Options.info_log: 0x55b6553bcf80
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                Options.max_file_opening_threads: 16
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                              Options.statistics: (nil)
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                               Options.use_fsync: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                       Options.max_log_file_size: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                   Options.log_file_time_to_roll: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                       Options.keep_log_file_num: 1000
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                    Options.recycle_log_file_num: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                         Options.allow_fallocate: 1
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                        Options.allow_mmap_reads: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                       Options.allow_mmap_writes: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                        Options.use_direct_reads: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:          Options.create_missing_column_families: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                              Options.db_log_dir: 
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                                 Options.wal_dir: db.wal
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                Options.table_cache_numshardbits: 6
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                         Options.WAL_ttl_seconds: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                       Options.WAL_size_limit_MB: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:             Options.manifest_preallocation_size: 4194304
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                     Options.is_fd_close_on_exec: 1
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                   Options.advise_random_on_open: 1
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                    Options.db_write_buffer_size: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                    Options.write_buffer_manager: 0x55b6560026e0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:         Options.access_hint_on_compaction_start: 1
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                      Options.use_adaptive_mutex: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                            Options.rate_limiter: (nil)
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                       Options.wal_recovery_mode: 2
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                  Options.enable_thread_tracking: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                  Options.enable_pipelined_write: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                  Options.unordered_write: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:             Options.write_thread_max_yield_usec: 100
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                               Options.row_cache: None
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                              Options.wal_filter: None
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:             Options.avoid_flush_during_recovery: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:             Options.allow_ingest_behind: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:             Options.two_write_queues: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:             Options.manual_wal_flush: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:             Options.wal_compression: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:             Options.atomic_flush: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                 Options.persist_stats_to_disk: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                 Options.write_dbid_to_manifest: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                 Options.log_readahead_size: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                 Options.best_efforts_recovery: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:             Options.allow_data_in_errors: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:             Options.db_host_id: __hostname__
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:             Options.enforce_single_del_contracts: true
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:             Options.max_background_jobs: 4
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:             Options.max_background_compactions: -1
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:             Options.max_subcompactions: 1
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:           Options.writable_file_max_buffer_size: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:             Options.delayed_write_rate : 16777216
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:             Options.max_total_wal_size: 1073741824
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                   Options.stats_dump_period_sec: 600
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                 Options.stats_persist_period_sec: 600
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                          Options.max_open_files: -1
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                          Options.bytes_per_sync: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                      Options.wal_bytes_per_sync: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                   Options.strict_bytes_per_sync: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:       Options.compaction_readahead_size: 2097152
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                  Options.max_background_flushes: -1
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Compression algorithms supported:
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: 	kZSTD supported: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: 	kXpressCompression supported: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: 	kBZip2Compression supported: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: 	kZSTDNotFinalCompression supported: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: 	kLZ4Compression supported: 1
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: 	kZlibCompression supported: 1
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: 	kLZ4HCCompression supported: 1
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: 	kSnappyCompression supported: 1
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Fast CRC32 supported: Supported on x86
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: DMutex implementation: pthread_mutex_t
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:        Options.compaction_filter: None
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:        Options.compaction_filter_factory: None
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:  Options.sst_partitioner_factory: None
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b6550ed0c0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55b6550e31f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:        Options.write_buffer_size: 16777216
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:  Options.max_write_buffer_number: 64
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:          Options.compression: LZ4
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:       Options.prefix_extractor: nullptr
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:             Options.num_levels: 7
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                  Options.compression_opts.level: 32767
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:               Options.compression_opts.strategy: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                  Options.compression_opts.enabled: false
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                        Options.arena_block_size: 1048576
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                Options.disable_auto_compactions: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                   Options.inplace_update_support: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                           Options.bloom_locality: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                    Options.max_successive_merges: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                Options.paranoid_file_checks: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                Options.force_consistency_checks: 1
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                Options.report_bg_io_stats: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                               Options.ttl: 2592000
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                       Options.enable_blob_files: false
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                           Options.min_blob_size: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                          Options.blob_file_size: 268435456
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                Options.blob_file_starting_level: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:           Options.merge_operator: None
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:        Options.compaction_filter: None
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:        Options.compaction_filter_factory: None
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:  Options.sst_partitioner_factory: None
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b6550ed0c0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55b6550e31f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:        Options.write_buffer_size: 16777216
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:  Options.max_write_buffer_number: 64
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:          Options.compression: LZ4
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:       Options.prefix_extractor: nullptr
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:             Options.num_levels: 7
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                  Options.compression_opts.level: 32767
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:               Options.compression_opts.strategy: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                  Options.compression_opts.enabled: false
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                        Options.arena_block_size: 1048576
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                Options.disable_auto_compactions: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                   Options.inplace_update_support: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                           Options.bloom_locality: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                    Options.max_successive_merges: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                Options.paranoid_file_checks: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                Options.force_consistency_checks: 1
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                Options.report_bg_io_stats: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                               Options.ttl: 2592000
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                       Options.enable_blob_files: false
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                           Options.min_blob_size: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                          Options.blob_file_size: 268435456
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                Options.blob_file_starting_level: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:           Options.merge_operator: None
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:        Options.compaction_filter: None
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:        Options.compaction_filter_factory: None
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:  Options.sst_partitioner_factory: None
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b6550ed0c0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55b6550e31f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:        Options.write_buffer_size: 16777216
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:  Options.max_write_buffer_number: 64
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:          Options.compression: LZ4
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:       Options.prefix_extractor: nullptr
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:             Options.num_levels: 7
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                  Options.compression_opts.level: 32767
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:               Options.compression_opts.strategy: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                  Options.compression_opts.enabled: false
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                        Options.arena_block_size: 1048576
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                Options.disable_auto_compactions: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                   Options.inplace_update_support: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                           Options.bloom_locality: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                    Options.max_successive_merges: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                Options.paranoid_file_checks: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                Options.force_consistency_checks: 1
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                Options.report_bg_io_stats: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                               Options.ttl: 2592000
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                       Options.enable_blob_files: false
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                           Options.min_blob_size: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                          Options.blob_file_size: 268435456
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                Options.blob_file_starting_level: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:           Options.merge_operator: None
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:        Options.compaction_filter: None
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:        Options.compaction_filter_factory: None
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:  Options.sst_partitioner_factory: None
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b6550ed0c0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55b6550e31f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:        Options.write_buffer_size: 16777216
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:  Options.max_write_buffer_number: 64
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:          Options.compression: LZ4
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:       Options.prefix_extractor: nullptr
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:             Options.num_levels: 7
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                  Options.compression_opts.level: 32767
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:               Options.compression_opts.strategy: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                  Options.compression_opts.enabled: false
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                        Options.arena_block_size: 1048576
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                Options.disable_auto_compactions: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                   Options.inplace_update_support: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                           Options.bloom_locality: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                    Options.max_successive_merges: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                Options.paranoid_file_checks: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                Options.force_consistency_checks: 1
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                Options.report_bg_io_stats: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                               Options.ttl: 2592000
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                       Options.enable_blob_files: false
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                           Options.min_blob_size: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                          Options.blob_file_size: 268435456
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                Options.blob_file_starting_level: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:           Options.merge_operator: None
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:        Options.compaction_filter: None
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:        Options.compaction_filter_factory: None
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:  Options.sst_partitioner_factory: None
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b6550ed0c0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55b6550e31f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:        Options.write_buffer_size: 16777216
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:  Options.max_write_buffer_number: 64
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:          Options.compression: LZ4
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:       Options.prefix_extractor: nullptr
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:             Options.num_levels: 7
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                  Options.compression_opts.level: 32767
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:               Options.compression_opts.strategy: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                  Options.compression_opts.enabled: false
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                        Options.arena_block_size: 1048576
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                Options.disable_auto_compactions: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                   Options.inplace_update_support: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                           Options.bloom_locality: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                    Options.max_successive_merges: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                Options.paranoid_file_checks: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                Options.force_consistency_checks: 1
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                Options.report_bg_io_stats: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                               Options.ttl: 2592000
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                       Options.enable_blob_files: false
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                           Options.min_blob_size: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                          Options.blob_file_size: 268435456
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                Options.blob_file_starting_level: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:           Options.merge_operator: None
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:        Options.compaction_filter: None
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:        Options.compaction_filter_factory: None
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:  Options.sst_partitioner_factory: None
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b6550ed0c0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55b6550e31f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:        Options.write_buffer_size: 16777216
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:  Options.max_write_buffer_number: 64
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:          Options.compression: LZ4
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:       Options.prefix_extractor: nullptr
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:             Options.num_levels: 7
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                  Options.compression_opts.level: 32767
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:               Options.compression_opts.strategy: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                  Options.compression_opts.enabled: false
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                        Options.arena_block_size: 1048576
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                Options.disable_auto_compactions: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                   Options.inplace_update_support: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                           Options.bloom_locality: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                    Options.max_successive_merges: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                Options.paranoid_file_checks: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                Options.force_consistency_checks: 1
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                Options.report_bg_io_stats: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                               Options.ttl: 2592000
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                       Options.enable_blob_files: false
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                           Options.min_blob_size: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                          Options.blob_file_size: 268435456
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                Options.blob_file_starting_level: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:           Options.merge_operator: None
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:        Options.compaction_filter: None
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:        Options.compaction_filter_factory: None
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:  Options.sst_partitioner_factory: None
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b6550ed0c0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55b6550e31f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:        Options.write_buffer_size: 16777216
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:  Options.max_write_buffer_number: 64
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:          Options.compression: LZ4
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:       Options.prefix_extractor: nullptr
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:             Options.num_levels: 7
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                  Options.compression_opts.level: 32767
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:               Options.compression_opts.strategy: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                  Options.compression_opts.enabled: false
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                        Options.arena_block_size: 1048576
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                Options.disable_auto_compactions: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                   Options.inplace_update_support: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                           Options.bloom_locality: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                    Options.max_successive_merges: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                Options.paranoid_file_checks: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                Options.force_consistency_checks: 1
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                Options.report_bg_io_stats: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                               Options.ttl: 2592000
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                       Options.enable_blob_files: false
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                           Options.min_blob_size: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                          Options.blob_file_size: 268435456
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                Options.blob_file_starting_level: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:           Options.merge_operator: None
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:        Options.compaction_filter: None
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:        Options.compaction_filter_factory: None
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:  Options.sst_partitioner_factory: None
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b6550ed060)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55b6550e3090#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:        Options.write_buffer_size: 16777216
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:  Options.max_write_buffer_number: 64
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:          Options.compression: LZ4
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:       Options.prefix_extractor: nullptr
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:             Options.num_levels: 7
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                  Options.compression_opts.level: 32767
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:               Options.compression_opts.strategy: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                  Options.compression_opts.enabled: false
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                        Options.arena_block_size: 1048576
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                Options.disable_auto_compactions: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                   Options.inplace_update_support: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                           Options.bloom_locality: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                    Options.max_successive_merges: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                Options.paranoid_file_checks: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                Options.force_consistency_checks: 1
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                Options.report_bg_io_stats: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                               Options.ttl: 2592000
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                       Options.enable_blob_files: false
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                           Options.min_blob_size: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                          Options.blob_file_size: 268435456
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                Options.blob_file_starting_level: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:           Options.merge_operator: None
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:        Options.compaction_filter: None
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:        Options.compaction_filter_factory: None
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:  Options.sst_partitioner_factory: None
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b6550ed060)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55b6550e3090#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:        Options.write_buffer_size: 16777216
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:  Options.max_write_buffer_number: 64
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:          Options.compression: LZ4
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:       Options.prefix_extractor: nullptr
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:             Options.num_levels: 7
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                  Options.compression_opts.level: 32767
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:               Options.compression_opts.strategy: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                  Options.compression_opts.enabled: false
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                        Options.arena_block_size: 1048576
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                Options.disable_auto_compactions: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                   Options.inplace_update_support: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                           Options.bloom_locality: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                    Options.max_successive_merges: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                Options.paranoid_file_checks: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                Options.force_consistency_checks: 1
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                Options.report_bg_io_stats: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                               Options.ttl: 2592000
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                       Options.enable_blob_files: false
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                           Options.min_blob_size: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                          Options.blob_file_size: 268435456
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                Options.blob_file_starting_level: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:           Options.merge_operator: None
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:        Options.compaction_filter: None
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:        Options.compaction_filter_factory: None
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:  Options.sst_partitioner_factory: None
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b6550ed060)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55b6550e3090#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:        Options.write_buffer_size: 16777216
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:  Options.max_write_buffer_number: 64
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:          Options.compression: LZ4
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:       Options.prefix_extractor: nullptr
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:             Options.num_levels: 7
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                  Options.compression_opts.level: 32767
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:               Options.compression_opts.strategy: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                  Options.compression_opts.enabled: false
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                        Options.arena_block_size: 1048576
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                Options.disable_auto_compactions: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                   Options.inplace_update_support: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                           Options.bloom_locality: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                    Options.max_successive_merges: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                Options.paranoid_file_checks: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                Options.force_consistency_checks: 1
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                Options.report_bg_io_stats: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                               Options.ttl: 2592000
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                       Options.enable_blob_files: false
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                           Options.min_blob_size: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                          Options.blob_file_size: 268435456
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb:                Options.blob_file_starting_level: 0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 4059e439-ad38-467a-9aae-938058dd7e0b
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759324195818398, "job": 1, "event": "recovery_started", "wal_files": [31]}
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759324195823312, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1272, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 128, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759324195, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4059e439-ad38-467a-9aae-938058dd7e0b", "db_session_id": "YR1W053FNRY3BI19KNCC", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759324195827314, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1594, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 468, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 567, "raw_average_value_size": 283, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759324195, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4059e439-ad38-467a-9aae-938058dd7e0b", "db_session_id": "YR1W053FNRY3BI19KNCC", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759324195830597, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759324195, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4059e439-ad38-467a-9aae-938058dd7e0b", "db_session_id": "YR1W053FNRY3BI19KNCC", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759324195832168, "job": 1, "event": "recovery_finished"}
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: [db/version_set.cc:5047] Creating manifest 40
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x55b655251c00
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: DB pointer 0x55b655feba00
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: bluestore(/var/lib/ceph/osd/ceph-0) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: bluestore(/var/lib/ceph/osd/ceph-0) _upgrade_super from 4, latest 4
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: bluestore(/var/lib/ceph/osd/ceph-0) _upgrade_super done
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 0.0 total, 0.0 interval#012Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s#012Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.00              0.00         1    0.005       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.00              0.00         1    0.005       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.00              0.00         1    0.005       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) 
Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.00              0.00         1    0.005       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.0 total, 0.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.03 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.03 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55b6550e31f0#2 capacity: 460.80 MB usage: 1.39 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 4.5e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) 
Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.0 total, 0.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55b6550e31f0#2 capacity: 460.80 MB usage: 1.39 KB table_size: 0 occupancy: 18446744073709551615 
collections: 1 last_copies: 8 last_secs: 4.5e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.0 total, 0.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 
seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/hello/cls_hello.cc:316: loading cls_hello
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: _get_class not permitted to load lua
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: _get_class not permitted to load sdk
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: _get_class not permitted to load test_remote_reads
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: osd.0 0 crush map has features 288232575208783872, adjusting msgr requires for clients
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: osd.0 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: osd.0 0 crush map has features 288232575208783872, adjusting msgr requires for osds
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: osd.0 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: osd.0 0 load_pgs
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: osd.0 0 load_pgs opened 0 pgs
Oct  1 09:09:55 np0005464214 ceph-osd[88455]: osd.0 0 log_to_monitors true
Oct  1 09:09:55 np0005464214 ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-osd-0[88451]: 2025-10-01T13:09:55.860+0000 7fbad12bd740 -1 osd.0 0 log_to_monitors true
Oct  1 09:09:55 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]} v 0) v1
Oct  1 09:09:55 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/1067828362,v1:192.168.122.100:6803/1067828362]' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch
Oct  1 09:09:56 np0005464214 ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-osd-1-activate-test[88879]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_UUID]
Oct  1 09:09:56 np0005464214 ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-osd-1-activate-test[88879]:                            [--no-systemd] [--no-tmpfs]
Oct  1 09:09:56 np0005464214 ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-osd-1-activate-test[88879]: ceph-volume activate: error: unrecognized arguments: --bad-option
Oct  1 09:09:56 np0005464214 systemd[1]: libpod-db4fb75c0490e1a7a116634512fb87775a9b06e9ecc7458f7338be9b1f6683b4.scope: Deactivated successfully.
Oct  1 09:09:56 np0005464214 podman[88669]: 2025-10-01 13:09:56.335072912 +0000 UTC m=+0.832173794 container died db4fb75c0490e1a7a116634512fb87775a9b06e9ecc7458f7338be9b1f6683b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-osd-1-activate-test, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 09:09:56 np0005464214 systemd[1]: var-lib-containers-storage-overlay-78cdc6fee75bb72f77c548821f4fad4ec09138e28ad90dc09644606c525f9c65-merged.mount: Deactivated successfully.
Oct  1 09:09:56 np0005464214 podman[88669]: 2025-10-01 13:09:56.399484746 +0000 UTC m=+0.896585638 container remove db4fb75c0490e1a7a116634512fb87775a9b06e9ecc7458f7338be9b1f6683b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-osd-1-activate-test, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 09:09:56 np0005464214 systemd[1]: libpod-conmon-db4fb75c0490e1a7a116634512fb87775a9b06e9ecc7458f7338be9b1f6683b4.scope: Deactivated successfully.
Oct  1 09:09:56 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e6 do_prune osdmap full prune enabled
Oct  1 09:09:56 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e6 encode_pending skipping prime_pg_temp; mapping job did not start
Oct  1 09:09:56 np0005464214 ceph-mon[74802]: from='osd.0 [v2:192.168.122.100:6802/1067828362,v1:192.168.122.100:6803/1067828362]' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch
Oct  1 09:09:56 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/1067828362,v1:192.168.122.100:6803/1067828362]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished
Oct  1 09:09:56 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e7 e7: 3 total, 0 up, 3 in
Oct  1 09:09:56 np0005464214 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e7: 3 total, 0 up, 3 in
Oct  1 09:09:56 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]} v 0) v1
Oct  1 09:09:56 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/1067828362,v1:192.168.122.100:6803/1067828362]' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]: dispatch
Oct  1 09:09:56 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e7 create-or-move crush item name 'osd.0' initial_weight 0.0195 at location {host=compute-0,root=default}
Oct  1 09:09:56 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Oct  1 09:09:56 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct  1 09:09:56 np0005464214 ceph-mgr[75103]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Oct  1 09:09:56 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Oct  1 09:09:56 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct  1 09:09:56 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Oct  1 09:09:56 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct  1 09:09:56 np0005464214 ceph-mgr[75103]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct  1 09:09:56 np0005464214 ceph-mgr[75103]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Oct  1 09:09:56 np0005464214 systemd[1]: Reloading.
Oct  1 09:09:56 np0005464214 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  1 09:09:56 np0005464214 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  1 09:09:56 np0005464214 ceph-osd[88455]: log_channel(cluster) log [DBG] : purged_snaps scrub starts
Oct  1 09:09:56 np0005464214 ceph-osd[88455]: log_channel(cluster) log [DBG] : purged_snaps scrub ok
Oct  1 09:09:57 np0005464214 systemd[1]: Reloading.
Oct  1 09:09:57 np0005464214 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  1 09:09:57 np0005464214 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  1 09:09:57 np0005464214 systemd[1]: Starting Ceph osd.1 for eb4b6ead-01d1-53b3-a52a-47dcc600555f...
Oct  1 09:09:57 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e7 do_prune osdmap full prune enabled
Oct  1 09:09:57 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e7 encode_pending skipping prime_pg_temp; mapping job did not start
Oct  1 09:09:57 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/1067828362,v1:192.168.122.100:6803/1067828362]' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Oct  1 09:09:57 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e8 e8: 3 total, 0 up, 3 in
Oct  1 09:09:57 np0005464214 ceph-osd[88455]: osd.0 0 done with init, starting boot process
Oct  1 09:09:57 np0005464214 ceph-osd[88455]: osd.0 0 start_boot
Oct  1 09:09:57 np0005464214 ceph-osd[88455]: osd.0 0 maybe_override_options_for_qos osd_max_backfills set to 1
Oct  1 09:09:57 np0005464214 ceph-osd[88455]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active set to 0
Oct  1 09:09:57 np0005464214 ceph-osd[88455]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3
Oct  1 09:09:57 np0005464214 ceph-osd[88455]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10
Oct  1 09:09:57 np0005464214 ceph-osd[88455]: osd.0 0  bench count 12288000 bsize 4 KiB
Oct  1 09:09:57 np0005464214 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e8: 3 total, 0 up, 3 in
Oct  1 09:09:57 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Oct  1 09:09:57 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct  1 09:09:57 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Oct  1 09:09:57 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct  1 09:09:57 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Oct  1 09:09:57 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct  1 09:09:57 np0005464214 ceph-mgr[75103]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Oct  1 09:09:57 np0005464214 ceph-mgr[75103]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct  1 09:09:57 np0005464214 ceph-mgr[75103]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Oct  1 09:09:57 np0005464214 ceph-mon[74802]: from='osd.0 [v2:192.168.122.100:6802/1067828362,v1:192.168.122.100:6803/1067828362]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished
Oct  1 09:09:57 np0005464214 ceph-mon[74802]: from='osd.0 [v2:192.168.122.100:6802/1067828362,v1:192.168.122.100:6803/1067828362]' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]: dispatch
Oct  1 09:09:57 np0005464214 ceph-mgr[75103]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/1067828362; not ready for session (expect reconnect)
Oct  1 09:09:57 np0005464214 ceph-mgr[75103]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Oct  1 09:09:57 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Oct  1 09:09:57 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct  1 09:09:57 np0005464214 podman[89259]: 2025-10-01 13:09:57.558631651 +0000 UTC m=+0.052224314 container create 016c6f455591986dcec188a97f971ee3d40e3c0a7f475adfc9ace1cdcb69920c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-osd-1-activate, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct  1 09:09:57 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:09:57 np0005464214 podman[89259]: 2025-10-01 13:09:57.530698618 +0000 UTC m=+0.024291311 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 09:09:57 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3eb49125b2576ca0f5273994c8991553323da78be411fc88afddf210c26b2c0c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 09:09:57 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3eb49125b2576ca0f5273994c8991553323da78be411fc88afddf210c26b2c0c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 09:09:57 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3eb49125b2576ca0f5273994c8991553323da78be411fc88afddf210c26b2c0c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 09:09:57 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3eb49125b2576ca0f5273994c8991553323da78be411fc88afddf210c26b2c0c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 09:09:57 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3eb49125b2576ca0f5273994c8991553323da78be411fc88afddf210c26b2c0c/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Oct  1 09:09:57 np0005464214 podman[89259]: 2025-10-01 13:09:57.650828394 +0000 UTC m=+0.144421077 container init 016c6f455591986dcec188a97f971ee3d40e3c0a7f475adfc9ace1cdcb69920c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-osd-1-activate, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 09:09:57 np0005464214 podman[89259]: 2025-10-01 13:09:57.655746002 +0000 UTC m=+0.149338685 container start 016c6f455591986dcec188a97f971ee3d40e3c0a7f475adfc9ace1cdcb69920c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-osd-1-activate, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 09:09:57 np0005464214 podman[89259]: 2025-10-01 13:09:57.662944403 +0000 UTC m=+0.156537086 container attach 016c6f455591986dcec188a97f971ee3d40e3c0a7f475adfc9ace1cdcb69920c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-osd-1-activate, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct  1 09:09:57 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v34: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct  1 09:09:58 np0005464214 ceph-mgr[75103]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/1067828362; not ready for session (expect reconnect)
Oct  1 09:09:58 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Oct  1 09:09:58 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct  1 09:09:58 np0005464214 ceph-mgr[75103]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Oct  1 09:09:58 np0005464214 ceph-mon[74802]: from='osd.0 [v2:192.168.122.100:6802/1067828362,v1:192.168.122.100:6803/1067828362]' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Oct  1 09:09:58 np0005464214 ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-osd-1-activate[89275]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Oct  1 09:09:58 np0005464214 bash[89259]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Oct  1 09:09:58 np0005464214 ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-osd-1-activate[89275]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-1 --no-mon-config --dev /dev/mapper/ceph_vg1-ceph_lv1
Oct  1 09:09:58 np0005464214 bash[89259]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-1 --no-mon-config --dev /dev/mapper/ceph_vg1-ceph_lv1
Oct  1 09:09:58 np0005464214 ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-osd-1-activate[89275]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg1-ceph_lv1
Oct  1 09:09:58 np0005464214 bash[89259]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg1-ceph_lv1
Oct  1 09:09:58 np0005464214 ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-osd-1-activate[89275]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Oct  1 09:09:58 np0005464214 bash[89259]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Oct  1 09:09:58 np0005464214 ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-osd-1-activate[89275]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg1-ceph_lv1 /var/lib/ceph/osd/ceph-1/block
Oct  1 09:09:58 np0005464214 bash[89259]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg1-ceph_lv1 /var/lib/ceph/osd/ceph-1/block
Oct  1 09:09:58 np0005464214 ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-osd-1-activate[89275]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Oct  1 09:09:58 np0005464214 bash[89259]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Oct  1 09:09:58 np0005464214 ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-osd-1-activate[89275]: --> ceph-volume raw activate successful for osd ID: 1
Oct  1 09:09:58 np0005464214 bash[89259]: --> ceph-volume raw activate successful for osd ID: 1
Oct  1 09:09:58 np0005464214 systemd[1]: libpod-016c6f455591986dcec188a97f971ee3d40e3c0a7f475adfc9ace1cdcb69920c.scope: Deactivated successfully.
Oct  1 09:09:58 np0005464214 podman[89259]: 2025-10-01 13:09:58.774512134 +0000 UTC m=+1.268104847 container died 016c6f455591986dcec188a97f971ee3d40e3c0a7f475adfc9ace1cdcb69920c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-osd-1-activate, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct  1 09:09:58 np0005464214 systemd[1]: libpod-016c6f455591986dcec188a97f971ee3d40e3c0a7f475adfc9ace1cdcb69920c.scope: Consumed 1.137s CPU time.
Oct  1 09:09:58 np0005464214 systemd[1]: var-lib-containers-storage-overlay-3eb49125b2576ca0f5273994c8991553323da78be411fc88afddf210c26b2c0c-merged.mount: Deactivated successfully.
Oct  1 09:09:58 np0005464214 podman[89259]: 2025-10-01 13:09:58.911613175 +0000 UTC m=+1.405205838 container remove 016c6f455591986dcec188a97f971ee3d40e3c0a7f475adfc9ace1cdcb69920c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-osd-1-activate, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 09:09:59 np0005464214 podman[89464]: 2025-10-01 13:09:59.202520645 +0000 UTC m=+0.070335712 container create c7bfaf4b1718864b8faf9e181463d9e2a2396c41a70f89cff77bf291b929f198 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-osd-1, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 09:09:59 np0005464214 podman[89464]: 2025-10-01 13:09:59.158279226 +0000 UTC m=+0.026094353 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 09:09:59 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a0dd9c558c70587fac387e9e5c0564bf916306cdecdd498dc936e067251c290/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 09:09:59 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a0dd9c558c70587fac387e9e5c0564bf916306cdecdd498dc936e067251c290/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 09:09:59 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a0dd9c558c70587fac387e9e5c0564bf916306cdecdd498dc936e067251c290/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 09:09:59 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a0dd9c558c70587fac387e9e5c0564bf916306cdecdd498dc936e067251c290/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 09:09:59 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a0dd9c558c70587fac387e9e5c0564bf916306cdecdd498dc936e067251c290/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Oct  1 09:09:59 np0005464214 podman[89464]: 2025-10-01 13:09:59.304477482 +0000 UTC m=+0.172292549 container init c7bfaf4b1718864b8faf9e181463d9e2a2396c41a70f89cff77bf291b929f198 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-osd-1, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 09:09:59 np0005464214 podman[89464]: 2025-10-01 13:09:59.314496362 +0000 UTC m=+0.182311399 container start c7bfaf4b1718864b8faf9e181463d9e2a2396c41a70f89cff77bf291b929f198 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-osd-1, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Oct  1 09:09:59 np0005464214 bash[89464]: c7bfaf4b1718864b8faf9e181463d9e2a2396c41a70f89cff77bf291b929f198
Oct  1 09:09:59 np0005464214 systemd[1]: Started Ceph osd.1 for eb4b6ead-01d1-53b3-a52a-47dcc600555f.
Oct  1 09:09:59 np0005464214 ceph-osd[89484]: set uid:gid to 167:167 (ceph:ceph)
Oct  1 09:09:59 np0005464214 ceph-osd[89484]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-osd, pid 2
Oct  1 09:09:59 np0005464214 ceph-osd[89484]: pidfile_write: ignore empty --pid-file
Oct  1 09:09:59 np0005464214 ceph-osd[89484]: bdev(0x55f3dbd99800 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Oct  1 09:09:59 np0005464214 ceph-osd[89484]: bdev(0x55f3dbd99800 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Oct  1 09:09:59 np0005464214 ceph-osd[89484]: bdev(0x55f3dbd99800 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct  1 09:09:59 np0005464214 ceph-osd[89484]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Oct  1 09:09:59 np0005464214 ceph-osd[89484]: bdev(0x55f3dcbdb800 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Oct  1 09:09:59 np0005464214 ceph-osd[89484]: bdev(0x55f3dcbdb800 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Oct  1 09:09:59 np0005464214 ceph-osd[89484]: bdev(0x55f3dcbdb800 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct  1 09:09:59 np0005464214 ceph-osd[89484]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 20 GiB
Oct  1 09:09:59 np0005464214 ceph-osd[89484]: bdev(0x55f3dcbdb800 /var/lib/ceph/osd/ceph-1/block) close
Oct  1 09:09:59 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  1 09:09:59 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:09:59 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  1 09:09:59 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:09:59 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "osd.2"} v 0) v1
Oct  1 09:09:59 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch
Oct  1 09:09:59 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  1 09:09:59 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  1 09:09:59 np0005464214 ceph-mgr[75103]: [cephadm INFO cephadm.serve] Deploying daemon osd.2 on compute-0
Oct  1 09:09:59 np0005464214 ceph-mgr[75103]: log_channel(cephadm) log [INF] : Deploying daemon osd.2 on compute-0
Oct  1 09:09:59 np0005464214 ceph-mgr[75103]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/1067828362; not ready for session (expect reconnect)
Oct  1 09:09:59 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Oct  1 09:09:59 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct  1 09:09:59 np0005464214 ceph-mgr[75103]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Oct  1 09:09:59 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:09:59 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:09:59 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch
Oct  1 09:09:59 np0005464214 ceph-osd[89484]: bdev(0x55f3dbd99800 /var/lib/ceph/osd/ceph-1/block) close
Oct  1 09:09:59 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v35: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct  1 09:09:59 np0005464214 ceph-osd[89484]: starting osd.1 osd_data /var/lib/ceph/osd/ceph-1 /var/lib/ceph/osd/ceph-1/journal
Oct  1 09:09:59 np0005464214 ceph-osd[89484]: load: jerasure load: lrc 
Oct  1 09:09:59 np0005464214 ceph-osd[89484]: bdev(0x55f3dcc5cc00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Oct  1 09:09:59 np0005464214 ceph-osd[89484]: bdev(0x55f3dcc5cc00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Oct  1 09:09:59 np0005464214 ceph-osd[89484]: bdev(0x55f3dcc5cc00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct  1 09:09:59 np0005464214 ceph-osd[89484]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Oct  1 09:09:59 np0005464214 ceph-osd[89484]: bdev(0x55f3dcc5cc00 /var/lib/ceph/osd/ceph-1/block) close
Oct  1 09:10:00 np0005464214 podman[89645]: 2025-10-01 13:10:00.033441783 +0000 UTC m=+0.054615380 container create 33d62683fe20605dcd7cfc2106b0367ca85e53e94060505b6123bd5e1ca7273c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_jones, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Oct  1 09:10:00 np0005464214 systemd[1]: Started libpod-conmon-33d62683fe20605dcd7cfc2106b0367ca85e53e94060505b6123bd5e1ca7273c.scope.
Oct  1 09:10:00 np0005464214 podman[89645]: 2025-10-01 13:10:00.001361555 +0000 UTC m=+0.022535172 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 09:10:00 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:10:00 np0005464214 podman[89645]: 2025-10-01 13:10:00.114803953 +0000 UTC m=+0.135977580 container init 33d62683fe20605dcd7cfc2106b0367ca85e53e94060505b6123bd5e1ca7273c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_jones, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 09:10:00 np0005464214 podman[89645]: 2025-10-01 13:10:00.121023547 +0000 UTC m=+0.142197144 container start 33d62683fe20605dcd7cfc2106b0367ca85e53e94060505b6123bd5e1ca7273c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_jones, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Oct  1 09:10:00 np0005464214 pensive_jones[89661]: 167 167
Oct  1 09:10:00 np0005464214 systemd[1]: libpod-33d62683fe20605dcd7cfc2106b0367ca85e53e94060505b6123bd5e1ca7273c.scope: Deactivated successfully.
Oct  1 09:10:00 np0005464214 conmon[89661]: conmon 33d62683fe20605dcd7c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-33d62683fe20605dcd7cfc2106b0367ca85e53e94060505b6123bd5e1ca7273c.scope/container/memory.events
Oct  1 09:10:00 np0005464214 podman[89645]: 2025-10-01 13:10:00.132692905 +0000 UTC m=+0.153866502 container attach 33d62683fe20605dcd7cfc2106b0367ca85e53e94060505b6123bd5e1ca7273c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_jones, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 09:10:00 np0005464214 podman[89645]: 2025-10-01 13:10:00.133528808 +0000 UTC m=+0.154702405 container died 33d62683fe20605dcd7cfc2106b0367ca85e53e94060505b6123bd5e1ca7273c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_jones, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: bdev(0x55f3dcc5cc00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: bdev(0x55f3dcc5cc00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: bdev(0x55f3dcc5cc00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: bdev(0x55f3dcc5cc00 /var/lib/ceph/osd/ceph-1/block) close
Oct  1 09:10:00 np0005464214 systemd[1]: var-lib-containers-storage-overlay-85ebfcc8b4935e0c911a2f46f05c4d599cde7dec91f38e9b59b8692e06dbb6aa-merged.mount: Deactivated successfully.
Oct  1 09:10:00 np0005464214 podman[89645]: 2025-10-01 13:10:00.243954331 +0000 UTC m=+0.265127948 container remove 33d62683fe20605dcd7cfc2106b0367ca85e53e94060505b6123bd5e1ca7273c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_jones, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 09:10:00 np0005464214 systemd[1]: libpod-conmon-33d62683fe20605dcd7cfc2106b0367ca85e53e94060505b6123bd5e1ca7273c.scope: Deactivated successfully.
Oct  1 09:10:00 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e8 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: osd.1:0.OSDShard using op scheduler mclock_scheduler, cutoff=196
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: bdev(0x55f3dcc5cc00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: bdev(0x55f3dcc5cc00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: bdev(0x55f3dcc5cc00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: bdev(0x55f3dcc5d400 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: bdev(0x55f3dcc5d400 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: bdev(0x55f3dcc5d400 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 20 GiB
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: bluefs mount
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: bluefs mount shared_bdev_used = 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: bluestore(/var/lib/ceph/osd/ceph-1) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: RocksDB version: 7.9.2
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Git sha 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Compile date 2025-05-06 23:30:25
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: DB SUMMARY
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: DB Session ID:  4TQUBN3XRRRFZHEOXA8H
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: CURRENT file:  CURRENT
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: IDENTITY file:  IDENTITY
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                         Options.error_if_exists: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                       Options.create_if_missing: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                         Options.paranoid_checks: 1
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:             Options.flush_verify_memtable_count: 1
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                                     Options.env: 0x55f3dcc2dce0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                                      Options.fs: LegacyFileSystem
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                                Options.info_log: 0x55f3dbe208a0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                Options.max_file_opening_threads: 16
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                              Options.statistics: (nil)
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                               Options.use_fsync: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                       Options.max_log_file_size: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                   Options.log_file_time_to_roll: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                       Options.keep_log_file_num: 1000
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                    Options.recycle_log_file_num: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                         Options.allow_fallocate: 1
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                        Options.allow_mmap_reads: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                       Options.allow_mmap_writes: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                        Options.use_direct_reads: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:          Options.create_missing_column_families: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                              Options.db_log_dir: 
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                                 Options.wal_dir: db.wal
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                Options.table_cache_numshardbits: 6
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                         Options.WAL_ttl_seconds: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                       Options.WAL_size_limit_MB: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:             Options.manifest_preallocation_size: 4194304
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                     Options.is_fd_close_on_exec: 1
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                   Options.advise_random_on_open: 1
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                    Options.db_write_buffer_size: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                    Options.write_buffer_manager: 0x55f3dcd36460
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:         Options.access_hint_on_compaction_start: 1
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                      Options.use_adaptive_mutex: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                            Options.rate_limiter: (nil)
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                       Options.wal_recovery_mode: 2
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                  Options.enable_thread_tracking: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                  Options.enable_pipelined_write: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                  Options.unordered_write: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:             Options.write_thread_max_yield_usec: 100
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                               Options.row_cache: None
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                              Options.wal_filter: None
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:             Options.avoid_flush_during_recovery: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:             Options.allow_ingest_behind: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:             Options.two_write_queues: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:             Options.manual_wal_flush: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:             Options.wal_compression: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:             Options.atomic_flush: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                 Options.persist_stats_to_disk: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                 Options.write_dbid_to_manifest: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                 Options.log_readahead_size: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                 Options.best_efforts_recovery: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:             Options.allow_data_in_errors: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:             Options.db_host_id: __hostname__
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:             Options.enforce_single_del_contracts: true
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:             Options.max_background_jobs: 4
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:             Options.max_background_compactions: -1
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:             Options.max_subcompactions: 1
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:           Options.writable_file_max_buffer_size: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:             Options.delayed_write_rate : 16777216
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:             Options.max_total_wal_size: 1073741824
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                   Options.stats_dump_period_sec: 600
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                 Options.stats_persist_period_sec: 600
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                          Options.max_open_files: -1
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                          Options.bytes_per_sync: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                      Options.wal_bytes_per_sync: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                   Options.strict_bytes_per_sync: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:       Options.compaction_readahead_size: 2097152
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                  Options.max_background_flushes: -1
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Compression algorithms supported:
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: #011kZSTD supported: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: #011kXpressCompression supported: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: #011kBZip2Compression supported: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: #011kZSTDNotFinalCompression supported: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: #011kLZ4Compression supported: 1
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: #011kZlibCompression supported: 1
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: #011kLZ4HCCompression supported: 1
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: #011kSnappyCompression supported: 1
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Fast CRC32 supported: Supported on x86
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: DMutex implementation: pthread_mutex_t
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:        Options.compaction_filter: None
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:        Options.compaction_filter_factory: None
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:  Options.sst_partitioner_factory: None
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55f3dbe202c0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55f3dbe0d1f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:        Options.write_buffer_size: 16777216
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:  Options.max_write_buffer_number: 64
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:          Options.compression: LZ4
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:       Options.prefix_extractor: nullptr
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:             Options.num_levels: 7
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                  Options.compression_opts.level: 32767
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:               Options.compression_opts.strategy: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                  Options.compression_opts.enabled: false
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                        Options.arena_block_size: 1048576
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                Options.disable_auto_compactions: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                   Options.inplace_update_support: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                           Options.bloom_locality: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                    Options.max_successive_merges: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                Options.paranoid_file_checks: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                Options.force_consistency_checks: 1
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                Options.report_bg_io_stats: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                               Options.ttl: 2592000
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                       Options.enable_blob_files: false
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                           Options.min_blob_size: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                          Options.blob_file_size: 268435456
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                Options.blob_file_starting_level: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:           Options.merge_operator: None
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:        Options.compaction_filter: None
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:        Options.compaction_filter_factory: None
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:  Options.sst_partitioner_factory: None
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55f3dbe202c0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55f3dbe0d1f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:        Options.write_buffer_size: 16777216
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:  Options.max_write_buffer_number: 64
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:          Options.compression: LZ4
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:       Options.prefix_extractor: nullptr
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:             Options.num_levels: 7
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                  Options.compression_opts.level: 32767
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:               Options.compression_opts.strategy: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                  Options.compression_opts.enabled: false
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                        Options.arena_block_size: 1048576
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                Options.disable_auto_compactions: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                   Options.inplace_update_support: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                           Options.bloom_locality: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                    Options.max_successive_merges: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                Options.paranoid_file_checks: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                Options.force_consistency_checks: 1
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                Options.report_bg_io_stats: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                               Options.ttl: 2592000
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                       Options.enable_blob_files: false
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                           Options.min_blob_size: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                          Options.blob_file_size: 268435456
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                Options.blob_file_starting_level: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:           Options.merge_operator: None
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:        Options.compaction_filter: None
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:        Options.compaction_filter_factory: None
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:  Options.sst_partitioner_factory: None
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55f3dbe202c0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55f3dbe0d1f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:        Options.write_buffer_size: 16777216
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:  Options.max_write_buffer_number: 64
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:          Options.compression: LZ4
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:       Options.prefix_extractor: nullptr
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:             Options.num_levels: 7
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                  Options.compression_opts.level: 32767
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:               Options.compression_opts.strategy: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                  Options.compression_opts.enabled: false
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                        Options.arena_block_size: 1048576
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                Options.disable_auto_compactions: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                   Options.inplace_update_support: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                           Options.bloom_locality: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                    Options.max_successive_merges: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                Options.paranoid_file_checks: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                Options.force_consistency_checks: 1
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                Options.report_bg_io_stats: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                               Options.ttl: 2592000
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                       Options.enable_blob_files: false
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                           Options.min_blob_size: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                          Options.blob_file_size: 268435456
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                Options.blob_file_starting_level: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:           Options.merge_operator: None
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:        Options.compaction_filter: None
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:        Options.compaction_filter_factory: None
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:  Options.sst_partitioner_factory: None
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55f3dbe202c0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55f3dbe0d1f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:        Options.write_buffer_size: 16777216
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:  Options.max_write_buffer_number: 64
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:          Options.compression: LZ4
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:       Options.prefix_extractor: nullptr
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:             Options.num_levels: 7
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                  Options.compression_opts.level: 32767
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:               Options.compression_opts.strategy: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                  Options.compression_opts.enabled: false
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                        Options.arena_block_size: 1048576
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                Options.disable_auto_compactions: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                   Options.inplace_update_support: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                           Options.bloom_locality: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                    Options.max_successive_merges: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                Options.paranoid_file_checks: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                Options.force_consistency_checks: 1
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                Options.report_bg_io_stats: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                               Options.ttl: 2592000
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                       Options.enable_blob_files: false
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                           Options.min_blob_size: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                          Options.blob_file_size: 268435456
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                Options.blob_file_starting_level: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:           Options.merge_operator: None
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:        Options.compaction_filter: None
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:        Options.compaction_filter_factory: None
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:  Options.sst_partitioner_factory: None
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55f3dbe202c0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55f3dbe0d1f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:        Options.write_buffer_size: 16777216
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:  Options.max_write_buffer_number: 64
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:          Options.compression: LZ4
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:       Options.prefix_extractor: nullptr
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:             Options.num_levels: 7
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                  Options.compression_opts.level: 32767
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:               Options.compression_opts.strategy: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                  Options.compression_opts.enabled: false
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                        Options.arena_block_size: 1048576
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                Options.disable_auto_compactions: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                   Options.inplace_update_support: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                           Options.bloom_locality: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                    Options.max_successive_merges: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                Options.paranoid_file_checks: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                Options.force_consistency_checks: 1
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                Options.report_bg_io_stats: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                               Options.ttl: 2592000
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                       Options.enable_blob_files: false
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                           Options.min_blob_size: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                          Options.blob_file_size: 268435456
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                Options.blob_file_starting_level: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:           Options.merge_operator: None
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:        Options.compaction_filter: None
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:        Options.compaction_filter_factory: None
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:  Options.sst_partitioner_factory: None
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55f3dbe202c0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55f3dbe0d1f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:        Options.write_buffer_size: 16777216
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:  Options.max_write_buffer_number: 64
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:          Options.compression: LZ4
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:       Options.prefix_extractor: nullptr
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:             Options.num_levels: 7
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                  Options.compression_opts.level: 32767
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:               Options.compression_opts.strategy: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                  Options.compression_opts.enabled: false
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                        Options.arena_block_size: 1048576
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                Options.disable_auto_compactions: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                   Options.inplace_update_support: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                           Options.bloom_locality: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                    Options.max_successive_merges: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                Options.paranoid_file_checks: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                Options.force_consistency_checks: 1
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                Options.report_bg_io_stats: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                               Options.ttl: 2592000
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                       Options.enable_blob_files: false
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                           Options.min_blob_size: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                          Options.blob_file_size: 268435456
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                Options.blob_file_starting_level: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:           Options.merge_operator: None
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:        Options.compaction_filter: None
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:        Options.compaction_filter_factory: None
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:  Options.sst_partitioner_factory: None
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55f3dbe202c0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55f3dbe0d1f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:        Options.write_buffer_size: 16777216
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:  Options.max_write_buffer_number: 64
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:          Options.compression: LZ4
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:       Options.prefix_extractor: nullptr
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:             Options.num_levels: 7
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                  Options.compression_opts.level: 32767
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:               Options.compression_opts.strategy: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                  Options.compression_opts.enabled: false
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                        Options.arena_block_size: 1048576
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                Options.disable_auto_compactions: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                   Options.inplace_update_support: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                           Options.bloom_locality: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                    Options.max_successive_merges: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                Options.paranoid_file_checks: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                Options.force_consistency_checks: 1
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                Options.report_bg_io_stats: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                               Options.ttl: 2592000
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                       Options.enable_blob_files: false
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                           Options.min_blob_size: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                          Options.blob_file_size: 268435456
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                Options.blob_file_starting_level: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:           Options.merge_operator: None
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:        Options.compaction_filter: None
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:        Options.compaction_filter_factory: None
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:  Options.sst_partitioner_factory: None
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55f3dbe20240)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55f3dbe0d090#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:        Options.write_buffer_size: 16777216
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:  Options.max_write_buffer_number: 64
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:          Options.compression: LZ4
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:       Options.prefix_extractor: nullptr
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:             Options.num_levels: 7
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                  Options.compression_opts.level: 32767
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:               Options.compression_opts.strategy: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                  Options.compression_opts.enabled: false
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                        Options.arena_block_size: 1048576
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                Options.disable_auto_compactions: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                   Options.inplace_update_support: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                           Options.bloom_locality: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                    Options.max_successive_merges: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                Options.paranoid_file_checks: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                Options.force_consistency_checks: 1
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                Options.report_bg_io_stats: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                               Options.ttl: 2592000
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                       Options.enable_blob_files: false
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                           Options.min_blob_size: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                          Options.blob_file_size: 268435456
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                Options.blob_file_starting_level: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:           Options.merge_operator: None
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:        Options.compaction_filter: None
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:        Options.compaction_filter_factory: None
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:  Options.sst_partitioner_factory: None
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55f3dbe20240)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55f3dbe0d090#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:        Options.write_buffer_size: 16777216
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:  Options.max_write_buffer_number: 64
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:          Options.compression: LZ4
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:       Options.prefix_extractor: nullptr
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:             Options.num_levels: 7
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                  Options.compression_opts.level: 32767
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:               Options.compression_opts.strategy: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                  Options.compression_opts.enabled: false
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                        Options.arena_block_size: 1048576
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                Options.disable_auto_compactions: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                   Options.inplace_update_support: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                           Options.bloom_locality: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                    Options.max_successive_merges: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                Options.paranoid_file_checks: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                Options.force_consistency_checks: 1
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                Options.report_bg_io_stats: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                               Options.ttl: 2592000
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                       Options.enable_blob_files: false
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                           Options.min_blob_size: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                          Options.blob_file_size: 268435456
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                Options.blob_file_starting_level: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:           Options.merge_operator: None
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:        Options.compaction_filter: None
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:        Options.compaction_filter_factory: None
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:  Options.sst_partitioner_factory: None
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55f3dbe20240)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55f3dbe0d090#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:        Options.write_buffer_size: 16777216
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:  Options.max_write_buffer_number: 64
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:          Options.compression: LZ4
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:       Options.prefix_extractor: nullptr
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:             Options.num_levels: 7
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                  Options.compression_opts.level: 32767
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:               Options.compression_opts.strategy: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                  Options.compression_opts.enabled: false
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                        Options.arena_block_size: 1048576
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                Options.disable_auto_compactions: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                   Options.inplace_update_support: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                           Options.bloom_locality: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                    Options.max_successive_merges: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                Options.paranoid_file_checks: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                Options.force_consistency_checks: 1
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                Options.report_bg_io_stats: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                               Options.ttl: 2592000
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                       Options.enable_blob_files: false
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                           Options.min_blob_size: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                          Options.blob_file_size: 268435456
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                Options.blob_file_starting_level: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Oct  1 09:10:00 np0005464214 podman[89698]: 2025-10-01 13:10:00.477574107 +0000 UTC m=+0.046274918 container create 68c6395253aabec6af769216dd27afe003ca37b578d69d5c03c33965e7b37118 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-osd-2-activate-test, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 5b5e924d-74e5-4a0d-a2ac-d31876a6fa2b
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759324200481995, "job": 1, "event": "recovery_started", "wal_files": [31]}
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759324200482181, "job": 1, "event": "recovery_finished"}
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: bluestore(/var/lib/ceph/osd/ceph-1) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta old nid_max 1025
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta old blobid_max 10240
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta ondisk_format 4 compat_ondisk_format 3
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta min_alloc_size 0x1000
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: freelist init
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: freelist _read_cfg
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: bluestore(/var/lib/ceph/osd/ceph-1) _init_alloc loaded 20 GiB in 2 extents, allocator type hybrid, capacity 0x4ffc00000, block size 0x1000, free 0x4ffbfd000, fragmentation 1.9e-07
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: bluefs umount
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: bdev(0x55f3dcc5d400 /var/lib/ceph/osd/ceph-1/block) close
Oct  1 09:10:00 np0005464214 ceph-mgr[75103]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/1067828362; not ready for session (expect reconnect)
Oct  1 09:10:00 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Oct  1 09:10:00 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct  1 09:10:00 np0005464214 ceph-mgr[75103]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Oct  1 09:10:00 np0005464214 systemd[1]: Started libpod-conmon-68c6395253aabec6af769216dd27afe003ca37b578d69d5c03c33965e7b37118.scope.
Oct  1 09:10:00 np0005464214 ceph-mon[74802]: Deploying daemon osd.2 on compute-0
Oct  1 09:10:00 np0005464214 podman[89698]: 2025-10-01 13:10:00.454347336 +0000 UTC m=+0.023048177 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 09:10:00 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:10:00 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3424bb5d0e0393fedd91de57cbc9ae2685935944aba8845ca316350c39afffb0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 09:10:00 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3424bb5d0e0393fedd91de57cbc9ae2685935944aba8845ca316350c39afffb0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 09:10:00 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3424bb5d0e0393fedd91de57cbc9ae2685935944aba8845ca316350c39afffb0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 09:10:00 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3424bb5d0e0393fedd91de57cbc9ae2685935944aba8845ca316350c39afffb0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 09:10:00 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3424bb5d0e0393fedd91de57cbc9ae2685935944aba8845ca316350c39afffb0/merged/var/lib/ceph/osd/ceph-2 supports timestamps until 2038 (0x7fffffff)
Oct  1 09:10:00 np0005464214 podman[89698]: 2025-10-01 13:10:00.588623787 +0000 UTC m=+0.157324659 container init 68c6395253aabec6af769216dd27afe003ca37b578d69d5c03c33965e7b37118 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-osd-2-activate-test, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  1 09:10:00 np0005464214 podman[89698]: 2025-10-01 13:10:00.60191709 +0000 UTC m=+0.170617911 container start 68c6395253aabec6af769216dd27afe003ca37b578d69d5c03c33965e7b37118 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-osd-2-activate-test, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 09:10:00 np0005464214 podman[89698]: 2025-10-01 13:10:00.621723255 +0000 UTC m=+0.190424086 container attach 68c6395253aabec6af769216dd27afe003ca37b578d69d5c03c33965e7b37118 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-osd-2-activate-test, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: bdev(0x55f3dcc5d400 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: bdev(0x55f3dcc5d400 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: bdev(0x55f3dcc5d400 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 20 GiB
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: bluefs mount
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: bluefs mount shared_bdev_used = 4718592
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: bluestore(/var/lib/ceph/osd/ceph-1) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: RocksDB version: 7.9.2
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Git sha 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Compile date 2025-05-06 23:30:25
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: DB SUMMARY
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: DB Session ID:  4TQUBN3XRRRFZHEOXA8G
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: CURRENT file:  CURRENT
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: IDENTITY file:  IDENTITY
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                         Options.error_if_exists: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                       Options.create_if_missing: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                         Options.paranoid_checks: 1
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:             Options.flush_verify_memtable_count: 1
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                                     Options.env: 0x55f3dcdde460
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                                      Options.fs: LegacyFileSystem
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                                Options.info_log: 0x55f3dbe20600
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                Options.max_file_opening_threads: 16
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                              Options.statistics: (nil)
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                               Options.use_fsync: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                       Options.max_log_file_size: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                   Options.log_file_time_to_roll: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                       Options.keep_log_file_num: 1000
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                    Options.recycle_log_file_num: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                         Options.allow_fallocate: 1
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                        Options.allow_mmap_reads: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                       Options.allow_mmap_writes: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                        Options.use_direct_reads: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:          Options.create_missing_column_families: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                              Options.db_log_dir: 
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                                 Options.wal_dir: db.wal
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                Options.table_cache_numshardbits: 6
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                         Options.WAL_ttl_seconds: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                       Options.WAL_size_limit_MB: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:             Options.manifest_preallocation_size: 4194304
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                     Options.is_fd_close_on_exec: 1
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                   Options.advise_random_on_open: 1
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                    Options.db_write_buffer_size: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                    Options.write_buffer_manager: 0x55f3dcd366e0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:         Options.access_hint_on_compaction_start: 1
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                      Options.use_adaptive_mutex: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                            Options.rate_limiter: (nil)
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                       Options.wal_recovery_mode: 2
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                  Options.enable_thread_tracking: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                  Options.enable_pipelined_write: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                  Options.unordered_write: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:             Options.write_thread_max_yield_usec: 100
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                               Options.row_cache: None
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                              Options.wal_filter: None
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:             Options.avoid_flush_during_recovery: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:             Options.allow_ingest_behind: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:             Options.two_write_queues: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:             Options.manual_wal_flush: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:             Options.wal_compression: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:             Options.atomic_flush: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                 Options.persist_stats_to_disk: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                 Options.write_dbid_to_manifest: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                 Options.log_readahead_size: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                 Options.best_efforts_recovery: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:             Options.allow_data_in_errors: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:             Options.db_host_id: __hostname__
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:             Options.enforce_single_del_contracts: true
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:             Options.max_background_jobs: 4
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:             Options.max_background_compactions: -1
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:             Options.max_subcompactions: 1
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:           Options.writable_file_max_buffer_size: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:             Options.delayed_write_rate : 16777216
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:             Options.max_total_wal_size: 1073741824
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                   Options.stats_dump_period_sec: 600
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                 Options.stats_persist_period_sec: 600
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                          Options.max_open_files: -1
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                          Options.bytes_per_sync: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                      Options.wal_bytes_per_sync: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                   Options.strict_bytes_per_sync: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:       Options.compaction_readahead_size: 2097152
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                  Options.max_background_flushes: -1
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Compression algorithms supported:
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: #011kZSTD supported: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: #011kXpressCompression supported: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: #011kBZip2Compression supported: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: #011kZSTDNotFinalCompression supported: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: #011kLZ4Compression supported: 1
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: #011kZlibCompression supported: 1
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: #011kLZ4HCCompression supported: 1
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: #011kSnappyCompression supported: 1
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Fast CRC32 supported: Supported on x86
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: DMutex implementation: pthread_mutex_t
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:        Options.compaction_filter: None
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:        Options.compaction_filter_factory: None
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:  Options.sst_partitioner_factory: None
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55f3dbe20a20)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55f3dbe0d1f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:        Options.write_buffer_size: 16777216
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:  Options.max_write_buffer_number: 64
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:          Options.compression: LZ4
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:       Options.prefix_extractor: nullptr
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:             Options.num_levels: 7
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                  Options.compression_opts.level: 32767
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:               Options.compression_opts.strategy: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                  Options.compression_opts.enabled: false
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                        Options.arena_block_size: 1048576
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                Options.disable_auto_compactions: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                   Options.inplace_update_support: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                           Options.bloom_locality: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                    Options.max_successive_merges: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                Options.paranoid_file_checks: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                Options.force_consistency_checks: 1
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                Options.report_bg_io_stats: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                               Options.ttl: 2592000
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                       Options.enable_blob_files: false
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                           Options.min_blob_size: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                          Options.blob_file_size: 268435456
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                Options.blob_file_starting_level: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:           Options.merge_operator: None
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:        Options.compaction_filter: None
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:        Options.compaction_filter_factory: None
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:  Options.sst_partitioner_factory: None
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55f3dbe20a20)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55f3dbe0d1f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:        Options.write_buffer_size: 16777216
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:  Options.max_write_buffer_number: 64
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:          Options.compression: LZ4
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:       Options.prefix_extractor: nullptr
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:             Options.num_levels: 7
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                  Options.compression_opts.level: 32767
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:               Options.compression_opts.strategy: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                  Options.compression_opts.enabled: false
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                        Options.arena_block_size: 1048576
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                Options.disable_auto_compactions: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                   Options.inplace_update_support: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                           Options.bloom_locality: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                    Options.max_successive_merges: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                Options.paranoid_file_checks: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                Options.force_consistency_checks: 1
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                Options.report_bg_io_stats: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                               Options.ttl: 2592000
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                       Options.enable_blob_files: false
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                           Options.min_blob_size: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                          Options.blob_file_size: 268435456
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                Options.blob_file_starting_level: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:           Options.merge_operator: None
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:        Options.compaction_filter: None
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:        Options.compaction_filter_factory: None
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:  Options.sst_partitioner_factory: None
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55f3dbe20a20)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55f3dbe0d1f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:        Options.write_buffer_size: 16777216
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:  Options.max_write_buffer_number: 64
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:          Options.compression: LZ4
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:       Options.prefix_extractor: nullptr
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:             Options.num_levels: 7
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                  Options.compression_opts.level: 32767
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:               Options.compression_opts.strategy: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                  Options.compression_opts.enabled: false
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                        Options.arena_block_size: 1048576
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                Options.disable_auto_compactions: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                   Options.inplace_update_support: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                           Options.bloom_locality: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                    Options.max_successive_merges: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                Options.paranoid_file_checks: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                Options.force_consistency_checks: 1
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                Options.report_bg_io_stats: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                               Options.ttl: 2592000
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                       Options.enable_blob_files: false
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                           Options.min_blob_size: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                          Options.blob_file_size: 268435456
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                Options.blob_file_starting_level: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:           Options.merge_operator: None
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:        Options.compaction_filter: None
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:        Options.compaction_filter_factory: None
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:  Options.sst_partitioner_factory: None
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55f3dbe20a20)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55f3dbe0d1f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:        Options.write_buffer_size: 16777216
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:  Options.max_write_buffer_number: 64
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:          Options.compression: LZ4
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:       Options.prefix_extractor: nullptr
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:             Options.num_levels: 7
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                  Options.compression_opts.level: 32767
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:               Options.compression_opts.strategy: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                  Options.compression_opts.enabled: false
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                        Options.arena_block_size: 1048576
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                Options.disable_auto_compactions: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                   Options.inplace_update_support: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                           Options.bloom_locality: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                    Options.max_successive_merges: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                Options.paranoid_file_checks: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                Options.force_consistency_checks: 1
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                Options.report_bg_io_stats: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                               Options.ttl: 2592000
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                       Options.enable_blob_files: false
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                           Options.min_blob_size: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                          Options.blob_file_size: 268435456
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                Options.blob_file_starting_level: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:           Options.merge_operator: None
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:        Options.compaction_filter: None
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:        Options.compaction_filter_factory: None
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:  Options.sst_partitioner_factory: None
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55f3dbe20a20)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55f3dbe0d1f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:        Options.write_buffer_size: 16777216
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:  Options.max_write_buffer_number: 64
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:          Options.compression: LZ4
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:       Options.prefix_extractor: nullptr
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:             Options.num_levels: 7
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                  Options.compression_opts.level: 32767
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:               Options.compression_opts.strategy: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                  Options.compression_opts.enabled: false
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                        Options.arena_block_size: 1048576
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                Options.disable_auto_compactions: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                   Options.inplace_update_support: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                           Options.bloom_locality: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                    Options.max_successive_merges: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                Options.paranoid_file_checks: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                Options.force_consistency_checks: 1
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                Options.report_bg_io_stats: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                               Options.ttl: 2592000
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                       Options.enable_blob_files: false
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                           Options.min_blob_size: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                          Options.blob_file_size: 268435456
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                Options.blob_file_starting_level: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:           Options.merge_operator: None
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:        Options.compaction_filter: None
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:        Options.compaction_filter_factory: None
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:  Options.sst_partitioner_factory: None
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55f3dbe20a20)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55f3dbe0d1f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:        Options.write_buffer_size: 16777216
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:  Options.max_write_buffer_number: 64
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:          Options.compression: LZ4
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:       Options.prefix_extractor: nullptr
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:             Options.num_levels: 7
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                  Options.compression_opts.level: 32767
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:               Options.compression_opts.strategy: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                  Options.compression_opts.enabled: false
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                        Options.arena_block_size: 1048576
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                Options.disable_auto_compactions: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                   Options.inplace_update_support: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                           Options.bloom_locality: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                    Options.max_successive_merges: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                Options.paranoid_file_checks: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                Options.force_consistency_checks: 1
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                Options.report_bg_io_stats: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                               Options.ttl: 2592000
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                       Options.enable_blob_files: false
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                           Options.min_blob_size: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                          Options.blob_file_size: 268435456
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                Options.blob_file_starting_level: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:           Options.merge_operator: None
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:        Options.compaction_filter: None
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:        Options.compaction_filter_factory: None
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:  Options.sst_partitioner_factory: None
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55f3dbe20a20)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55f3dbe0d1f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:        Options.write_buffer_size: 16777216
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:  Options.max_write_buffer_number: 64
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:          Options.compression: LZ4
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:       Options.prefix_extractor: nullptr
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:             Options.num_levels: 7
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                  Options.compression_opts.level: 32767
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:               Options.compression_opts.strategy: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                  Options.compression_opts.enabled: false
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                        Options.arena_block_size: 1048576
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                Options.disable_auto_compactions: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                   Options.inplace_update_support: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                           Options.bloom_locality: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                    Options.max_successive_merges: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                Options.paranoid_file_checks: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                Options.force_consistency_checks: 1
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                Options.report_bg_io_stats: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                               Options.ttl: 2592000
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                       Options.enable_blob_files: false
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                           Options.min_blob_size: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                          Options.blob_file_size: 268435456
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                Options.blob_file_starting_level: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:           Options.merge_operator: None
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:        Options.compaction_filter: None
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:        Options.compaction_filter_factory: None
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:  Options.sst_partitioner_factory: None
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55f3dc0e6d60)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55f3dbe0d090#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:        Options.write_buffer_size: 16777216
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:  Options.max_write_buffer_number: 64
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:          Options.compression: LZ4
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:       Options.prefix_extractor: nullptr
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:             Options.num_levels: 7
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                  Options.compression_opts.level: 32767
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:               Options.compression_opts.strategy: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                  Options.compression_opts.enabled: false
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                        Options.arena_block_size: 1048576
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                Options.disable_auto_compactions: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                   Options.inplace_update_support: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                           Options.bloom_locality: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                    Options.max_successive_merges: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                Options.paranoid_file_checks: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                Options.force_consistency_checks: 1
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                Options.report_bg_io_stats: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                               Options.ttl: 2592000
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                       Options.enable_blob_files: false
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                           Options.min_blob_size: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                          Options.blob_file_size: 268435456
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                Options.blob_file_starting_level: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:           Options.merge_operator: None
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:        Options.compaction_filter: None
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:        Options.compaction_filter_factory: None
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:  Options.sst_partitioner_factory: None
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55f3dc0e6d60)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55f3dbe0d090#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:        Options.write_buffer_size: 16777216
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:  Options.max_write_buffer_number: 64
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:          Options.compression: LZ4
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:       Options.prefix_extractor: nullptr
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:             Options.num_levels: 7
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                  Options.compression_opts.level: 32767
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:               Options.compression_opts.strategy: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                  Options.compression_opts.enabled: false
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                        Options.arena_block_size: 1048576
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                Options.disable_auto_compactions: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                   Options.inplace_update_support: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                           Options.bloom_locality: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                    Options.max_successive_merges: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                Options.paranoid_file_checks: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                Options.force_consistency_checks: 1
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                Options.report_bg_io_stats: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                               Options.ttl: 2592000
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                       Options.enable_blob_files: false
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                           Options.min_blob_size: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                          Options.blob_file_size: 268435456
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                Options.blob_file_starting_level: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:           Options.merge_operator: None
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:        Options.compaction_filter: None
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:        Options.compaction_filter_factory: None
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:  Options.sst_partitioner_factory: None
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55f3dc0e6d60)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55f3dbe0d090#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:        Options.write_buffer_size: 16777216
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:  Options.max_write_buffer_number: 64
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:          Options.compression: LZ4
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:       Options.prefix_extractor: nullptr
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:             Options.num_levels: 7
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                  Options.compression_opts.level: 32767
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:               Options.compression_opts.strategy: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                  Options.compression_opts.enabled: false
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                        Options.arena_block_size: 1048576
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                Options.disable_auto_compactions: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                   Options.inplace_update_support: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                           Options.bloom_locality: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                    Options.max_successive_merges: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                Options.paranoid_file_checks: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                Options.force_consistency_checks: 1
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                Options.report_bg_io_stats: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                               Options.ttl: 2592000
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                       Options.enable_blob_files: false
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                           Options.min_blob_size: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                          Options.blob_file_size: 268435456
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb:                Options.blob_file_starting_level: 0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 5b5e924d-74e5-4a0d-a2ac-d31876a6fa2b
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759324200781843, "job": 1, "event": "recovery_started", "wal_files": [31]}
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759324200787867, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1272, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 128, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759324200, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5b5e924d-74e5-4a0d-a2ac-d31876a6fa2b", "db_session_id": "4TQUBN3XRRRFZHEOXA8G", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759324200795082, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1594, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 468, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 567, "raw_average_value_size": 283, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759324200, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5b5e924d-74e5-4a0d-a2ac-d31876a6fa2b", "db_session_id": "4TQUBN3XRRRFZHEOXA8G", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759324200801634, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759324200, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5b5e924d-74e5-4a0d-a2ac-d31876a6fa2b", "db_session_id": "4TQUBN3XRRRFZHEOXA8G", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759324200803284, "job": 1, "event": "recovery_finished"}
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: [db/version_set.cc:5047] Creating manifest 40
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x55f3dcdebc00
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: DB pointer 0x55f3dcd1fa00
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: bluestore(/var/lib/ceph/osd/ceph-1) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: bluestore(/var/lib/ceph/osd/ceph-1) _upgrade_super from 4, latest 4
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: bluestore(/var/lib/ceph/osd/ceph-1) _upgrade_super done
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 0.1 total, 0.1 interval#012Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s#012Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.1 total, 0.1 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55f3dbe0d1f0#2 capacity: 460.80 MB usage: 1.39 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 6.6e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.1 total, 0.1 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55f3dbe0d1f0#2 capacity: 460.80 MB usage: 1.39 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 6.6e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.1 total, 0.1 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/hello/cls_hello.cc:316: loading cls_hello
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: _get_class not permitted to load lua
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: _get_class not permitted to load sdk
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: _get_class not permitted to load test_remote_reads
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: osd.1 0 crush map has features 288232575208783872, adjusting msgr requires for clients
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: osd.1 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: osd.1 0 crush map has features 288232575208783872, adjusting msgr requires for osds
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: osd.1 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: osd.1 0 load_pgs
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: osd.1 0 load_pgs opened 0 pgs
Oct  1 09:10:00 np0005464214 ceph-osd[89484]: osd.1 0 log_to_monitors true
Oct  1 09:10:00 np0005464214 ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-osd-1[89480]: 2025-10-01T13:10:00.861+0000 7f5cc0ed4740 -1 osd.1 0 log_to_monitors true
Oct  1 09:10:00 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]} v 0) v1
Oct  1 09:10:00 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6806/4245549462,v1:192.168.122.100:6807/4245549462]' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch
Oct  1 09:10:01 np0005464214 ceph-osd[88455]: osd.0 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 32.477 iops: 8314.022 elapsed_sec: 0.361
Oct  1 09:10:01 np0005464214 ceph-osd[88455]: log_channel(cluster) log [WRN] : OSD bench result of 8314.022192 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.0. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Oct  1 09:10:01 np0005464214 ceph-osd[88455]: osd.0 0 waiting for initial osdmap
Oct  1 09:10:01 np0005464214 ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-osd-0[88451]: 2025-10-01T13:10:01.039+0000 7fbacd23d640 -1 osd.0 0 waiting for initial osdmap
Oct  1 09:10:01 np0005464214 ceph-osd[88455]: osd.0 8 crush map has features 288514050185494528, adjusting msgr requires for clients
Oct  1 09:10:01 np0005464214 ceph-osd[88455]: osd.0 8 crush map has features 288514050185494528 was 288232575208792577, adjusting msgr requires for mons
Oct  1 09:10:01 np0005464214 ceph-osd[88455]: osd.0 8 crush map has features 3314932999778484224, adjusting msgr requires for osds
Oct  1 09:10:01 np0005464214 ceph-osd[88455]: osd.0 8 check_osdmap_features require_osd_release unknown -> reef
Oct  1 09:10:01 np0005464214 ceph-osd[88455]: osd.0 8 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Oct  1 09:10:01 np0005464214 ceph-osd[88455]: osd.0 8 set_numa_affinity not setting numa affinity
Oct  1 09:10:01 np0005464214 ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-osd-0[88451]: 2025-10-01T13:10:01.069+0000 7fbac8865640 -1 osd.0 8 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Oct  1 09:10:01 np0005464214 ceph-osd[88455]: osd.0 8 _collect_metadata loop3:  no unique device id for loop3: fallback method has no model nor serial
Oct  1 09:10:01 np0005464214 ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-osd-2-activate-test[89909]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_UUID]
Oct  1 09:10:01 np0005464214 ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-osd-2-activate-test[89909]:                            [--no-systemd] [--no-tmpfs]
Oct  1 09:10:01 np0005464214 ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-osd-2-activate-test[89909]: ceph-volume activate: error: unrecognized arguments: --bad-option
Oct  1 09:10:01 np0005464214 systemd[1]: libpod-68c6395253aabec6af769216dd27afe003ca37b578d69d5c03c33965e7b37118.scope: Deactivated successfully.
Oct  1 09:10:01 np0005464214 podman[89698]: 2025-10-01 13:10:01.262773074 +0000 UTC m=+0.831473885 container died 68c6395253aabec6af769216dd27afe003ca37b578d69d5c03c33965e7b37118 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-osd-2-activate-test, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 09:10:01 np0005464214 systemd[1]: var-lib-containers-storage-overlay-3424bb5d0e0393fedd91de57cbc9ae2685935944aba8845ca316350c39afffb0-merged.mount: Deactivated successfully.
Oct  1 09:10:01 np0005464214 podman[89698]: 2025-10-01 13:10:01.3257896 +0000 UTC m=+0.894490411 container remove 68c6395253aabec6af769216dd27afe003ca37b578d69d5c03c33965e7b37118 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-osd-2-activate-test, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Oct  1 09:10:01 np0005464214 systemd[1]: libpod-conmon-68c6395253aabec6af769216dd27afe003ca37b578d69d5c03c33965e7b37118.scope: Deactivated successfully.
Oct  1 09:10:01 np0005464214 ceph-mgr[75103]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/1067828362; not ready for session (expect reconnect)
Oct  1 09:10:01 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Oct  1 09:10:01 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct  1 09:10:01 np0005464214 ceph-mgr[75103]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Oct  1 09:10:01 np0005464214 systemd[1]: Reloading.
Oct  1 09:10:01 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e8 do_prune osdmap full prune enabled
Oct  1 09:10:01 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e8 encode_pending skipping prime_pg_temp; mapping job did not start
Oct  1 09:10:01 np0005464214 ceph-mon[74802]: from='osd.1 [v2:192.168.122.100:6806/4245549462,v1:192.168.122.100:6807/4245549462]' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch
Oct  1 09:10:01 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6806/4245549462,v1:192.168.122.100:6807/4245549462]' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished
Oct  1 09:10:01 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e9 e9: 3 total, 1 up, 3 in
Oct  1 09:10:01 np0005464214 ceph-mon[74802]: log_channel(cluster) log [INF] : osd.0 [v2:192.168.122.100:6802/1067828362,v1:192.168.122.100:6803/1067828362] boot
Oct  1 09:10:01 np0005464214 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e9: 3 total, 1 up, 3 in
Oct  1 09:10:01 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]} v 0) v1
Oct  1 09:10:01 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6806/4245549462,v1:192.168.122.100:6807/4245549462]' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]: dispatch
Oct  1 09:10:01 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e9 create-or-move crush item name 'osd.1' initial_weight 0.0195 at location {host=compute-0,root=default}
Oct  1 09:10:01 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Oct  1 09:10:01 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct  1 09:10:01 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Oct  1 09:10:01 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct  1 09:10:01 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Oct  1 09:10:01 np0005464214 ceph-mgr[75103]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Oct  1 09:10:01 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct  1 09:10:01 np0005464214 ceph-mgr[75103]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct  1 09:10:01 np0005464214 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  1 09:10:01 np0005464214 ceph-osd[88455]: osd.0 9 state: booting -> active
Oct  1 09:10:01 np0005464214 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  1 09:10:01 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v37: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct  1 09:10:01 np0005464214 ceph-mgr[75103]: [devicehealth INFO root] creating mgr pool
Oct  1 09:10:01 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true} v 0) v1
Oct  1 09:10:01 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]: dispatch
Oct  1 09:10:01 np0005464214 systemd[1]: Reloading.
Oct  1 09:10:01 np0005464214 ceph-osd[89484]: log_channel(cluster) log [DBG] : purged_snaps scrub starts
Oct  1 09:10:01 np0005464214 ceph-osd[89484]: log_channel(cluster) log [DBG] : purged_snaps scrub ok
Oct  1 09:10:01 np0005464214 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  1 09:10:01 np0005464214 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  1 09:10:02 np0005464214 systemd[1]: Starting Ceph osd.2 for eb4b6ead-01d1-53b3-a52a-47dcc600555f...
Oct  1 09:10:02 np0005464214 podman[90289]: 2025-10-01 13:10:02.397200316 +0000 UTC m=+0.081653189 container create bff328ae7f358ca132a5294b8147314c7390c0af96045efe80b3baec4a384172 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-osd-2-activate, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 09:10:02 np0005464214 podman[90289]: 2025-10-01 13:10:02.336182516 +0000 UTC m=+0.020635429 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 09:10:02 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:10:02 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0129ae11dfcb3d9369d83e539fc90af356aa3d1fbc5fb640e17b882d1ecc2589/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 09:10:02 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0129ae11dfcb3d9369d83e539fc90af356aa3d1fbc5fb640e17b882d1ecc2589/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 09:10:02 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0129ae11dfcb3d9369d83e539fc90af356aa3d1fbc5fb640e17b882d1ecc2589/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 09:10:02 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0129ae11dfcb3d9369d83e539fc90af356aa3d1fbc5fb640e17b882d1ecc2589/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 09:10:02 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0129ae11dfcb3d9369d83e539fc90af356aa3d1fbc5fb640e17b882d1ecc2589/merged/var/lib/ceph/osd/ceph-2 supports timestamps until 2038 (0x7fffffff)
Oct  1 09:10:02 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e9 do_prune osdmap full prune enabled
Oct  1 09:10:02 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e9 encode_pending skipping prime_pg_temp; mapping job did not start
Oct  1 09:10:02 np0005464214 podman[90289]: 2025-10-01 13:10:02.615182893 +0000 UTC m=+0.299635816 container init bff328ae7f358ca132a5294b8147314c7390c0af96045efe80b3baec4a384172 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-osd-2-activate, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507)
Oct  1 09:10:02 np0005464214 podman[90289]: 2025-10-01 13:10:02.625680416 +0000 UTC m=+0.310133289 container start bff328ae7f358ca132a5294b8147314c7390c0af96045efe80b3baec4a384172 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-osd-2-activate, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 09:10:02 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6806/4245549462,v1:192.168.122.100:6807/4245549462]' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Oct  1 09:10:02 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished
Oct  1 09:10:02 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e10 e10: 3 total, 1 up, 3 in
Oct  1 09:10:02 np0005464214 ceph-osd[89484]: osd.1 0 done with init, starting boot process
Oct  1 09:10:02 np0005464214 ceph-osd[89484]: osd.1 0 start_boot
Oct  1 09:10:02 np0005464214 ceph-osd[89484]: osd.1 0 maybe_override_options_for_qos osd_max_backfills set to 1
Oct  1 09:10:02 np0005464214 ceph-osd[89484]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active set to 0
Oct  1 09:10:02 np0005464214 ceph-osd[89484]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3
Oct  1 09:10:02 np0005464214 ceph-osd[89484]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10
Oct  1 09:10:02 np0005464214 ceph-osd[89484]: osd.1 0  bench count 12288000 bsize 4 KiB
Oct  1 09:10:02 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e10 crush map has features 3314933000852226048, adjusting msgr requires
Oct  1 09:10:02 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e10 crush map has features 288514051259236352, adjusting msgr requires
Oct  1 09:10:02 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e10 crush map has features 288514051259236352, adjusting msgr requires
Oct  1 09:10:02 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e10 crush map has features 288514051259236352, adjusting msgr requires
Oct  1 09:10:02 np0005464214 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e10: 3 total, 1 up, 3 in
Oct  1 09:10:02 np0005464214 podman[90289]: 2025-10-01 13:10:02.709333961 +0000 UTC m=+0.393786834 container attach bff328ae7f358ca132a5294b8147314c7390c0af96045efe80b3baec4a384172 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-osd-2-activate, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Oct  1 09:10:02 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Oct  1 09:10:02 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct  1 09:10:02 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Oct  1 09:10:02 np0005464214 ceph-mgr[75103]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Oct  1 09:10:02 np0005464214 ceph-mgr[75103]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct  1 09:10:02 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct  1 09:10:02 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true} v 0) v1
Oct  1 09:10:02 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]: dispatch
Oct  1 09:10:02 np0005464214 ceph-mgr[75103]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/4245549462; not ready for session (expect reconnect)
Oct  1 09:10:02 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Oct  1 09:10:02 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct  1 09:10:02 np0005464214 ceph-mgr[75103]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Oct  1 09:10:02 np0005464214 ceph-mon[74802]: OSD bench result of 8314.022192 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.0. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Oct  1 09:10:02 np0005464214 ceph-mon[74802]: from='osd.1 [v2:192.168.122.100:6806/4245549462,v1:192.168.122.100:6807/4245549462]' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished
Oct  1 09:10:02 np0005464214 ceph-mon[74802]: osd.0 [v2:192.168.122.100:6802/1067828362,v1:192.168.122.100:6803/1067828362] boot
Oct  1 09:10:02 np0005464214 ceph-mon[74802]: from='osd.1 [v2:192.168.122.100:6806/4245549462,v1:192.168.122.100:6807/4245549462]' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]: dispatch
Oct  1 09:10:02 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]: dispatch
Oct  1 09:10:02 np0005464214 ceph-osd[88455]: osd.0 10 crush map has features 288514051259236352, adjusting msgr requires for clients
Oct  1 09:10:02 np0005464214 ceph-osd[88455]: osd.0 10 crush map has features 288514051259236352 was 288514050185503233, adjusting msgr requires for mons
Oct  1 09:10:02 np0005464214 ceph-osd[88455]: osd.0 10 crush map has features 3314933000852226048, adjusting msgr requires for osds
Oct  1 09:10:03 np0005464214 ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-osd-2-activate[90304]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Oct  1 09:10:03 np0005464214 bash[90289]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Oct  1 09:10:03 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e10 do_prune osdmap full prune enabled
Oct  1 09:10:03 np0005464214 ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-osd-2-activate[90304]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-2 --no-mon-config --dev /dev/mapper/ceph_vg2-ceph_lv2
Oct  1 09:10:03 np0005464214 bash[90289]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-2 --no-mon-config --dev /dev/mapper/ceph_vg2-ceph_lv2
Oct  1 09:10:03 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished
Oct  1 09:10:03 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e11 e11: 3 total, 1 up, 3 in
Oct  1 09:10:03 np0005464214 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e11: 3 total, 1 up, 3 in
Oct  1 09:10:03 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Oct  1 09:10:03 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct  1 09:10:03 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Oct  1 09:10:03 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct  1 09:10:03 np0005464214 ceph-mgr[75103]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Oct  1 09:10:03 np0005464214 ceph-mgr[75103]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct  1 09:10:03 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v40: 1 pgs: 1 unknown; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail
Oct  1 09:10:03 np0005464214 ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-osd-2-activate[90304]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg2-ceph_lv2
Oct  1 09:10:03 np0005464214 bash[90289]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg2-ceph_lv2
Oct  1 09:10:03 np0005464214 ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-osd-2-activate[90304]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2
Oct  1 09:10:03 np0005464214 bash[90289]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2
Oct  1 09:10:03 np0005464214 ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-osd-2-activate[90304]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg2-ceph_lv2 /var/lib/ceph/osd/ceph-2/block
Oct  1 09:10:03 np0005464214 bash[90289]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg2-ceph_lv2 /var/lib/ceph/osd/ceph-2/block
Oct  1 09:10:03 np0005464214 ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-osd-2-activate[90304]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Oct  1 09:10:03 np0005464214 bash[90289]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Oct  1 09:10:03 np0005464214 ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-osd-2-activate[90304]: --> ceph-volume raw activate successful for osd ID: 2
Oct  1 09:10:03 np0005464214 bash[90289]: --> ceph-volume raw activate successful for osd ID: 2
Oct  1 09:10:03 np0005464214 systemd[1]: libpod-bff328ae7f358ca132a5294b8147314c7390c0af96045efe80b3baec4a384172.scope: Deactivated successfully.
Oct  1 09:10:03 np0005464214 podman[90289]: 2025-10-01 13:10:03.781077876 +0000 UTC m=+1.465530749 container died bff328ae7f358ca132a5294b8147314c7390c0af96045efe80b3baec4a384172 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-osd-2-activate, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 09:10:03 np0005464214 systemd[1]: libpod-bff328ae7f358ca132a5294b8147314c7390c0af96045efe80b3baec4a384172.scope: Consumed 1.170s CPU time.
Oct  1 09:10:03 np0005464214 ceph-mgr[75103]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/4245549462; not ready for session (expect reconnect)
Oct  1 09:10:03 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Oct  1 09:10:03 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct  1 09:10:03 np0005464214 ceph-mgr[75103]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Oct  1 09:10:03 np0005464214 systemd[1]: var-lib-containers-storage-overlay-0129ae11dfcb3d9369d83e539fc90af356aa3d1fbc5fb640e17b882d1ecc2589-merged.mount: Deactivated successfully.
Oct  1 09:10:03 np0005464214 podman[90289]: 2025-10-01 13:10:03.923920318 +0000 UTC m=+1.608373191 container remove bff328ae7f358ca132a5294b8147314c7390c0af96045efe80b3baec4a384172 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-osd-2-activate, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct  1 09:10:03 np0005464214 ceph-mon[74802]: from='osd.1 [v2:192.168.122.100:6806/4245549462,v1:192.168.122.100:6807/4245549462]' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Oct  1 09:10:03 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished
Oct  1 09:10:03 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]: dispatch
Oct  1 09:10:03 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished
Oct  1 09:10:04 np0005464214 podman[90481]: 2025-10-01 13:10:04.14390752 +0000 UTC m=+0.065915657 container create 1866f3a29a4e666710af5960850cff6901ef9d11df821255375d1e2347c28bac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-osd-2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 09:10:04 np0005464214 podman[90481]: 2025-10-01 13:10:04.113612732 +0000 UTC m=+0.035620849 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 09:10:04 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec0bbd1b7b12faba5ddd31b24d1f30dfa9510d5aefd8e553e29167148083fb5e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 09:10:04 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec0bbd1b7b12faba5ddd31b24d1f30dfa9510d5aefd8e553e29167148083fb5e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 09:10:04 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec0bbd1b7b12faba5ddd31b24d1f30dfa9510d5aefd8e553e29167148083fb5e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 09:10:04 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec0bbd1b7b12faba5ddd31b24d1f30dfa9510d5aefd8e553e29167148083fb5e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 09:10:04 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec0bbd1b7b12faba5ddd31b24d1f30dfa9510d5aefd8e553e29167148083fb5e/merged/var/lib/ceph/osd/ceph-2 supports timestamps until 2038 (0x7fffffff)
Oct  1 09:10:04 np0005464214 podman[90481]: 2025-10-01 13:10:04.272140803 +0000 UTC m=+0.194148940 container init 1866f3a29a4e666710af5960850cff6901ef9d11df821255375d1e2347c28bac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-osd-2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 09:10:04 np0005464214 podman[90481]: 2025-10-01 13:10:04.279180771 +0000 UTC m=+0.201188908 container start 1866f3a29a4e666710af5960850cff6901ef9d11df821255375d1e2347c28bac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-osd-2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct  1 09:10:04 np0005464214 bash[90481]: 1866f3a29a4e666710af5960850cff6901ef9d11df821255375d1e2347c28bac
Oct  1 09:10:04 np0005464214 systemd[1]: Started Ceph osd.2 for eb4b6ead-01d1-53b3-a52a-47dcc600555f.
Oct  1 09:10:04 np0005464214 ceph-osd[90500]: set uid:gid to 167:167 (ceph:ceph)
Oct  1 09:10:04 np0005464214 ceph-osd[90500]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-osd, pid 2
Oct  1 09:10:04 np0005464214 ceph-osd[90500]: pidfile_write: ignore empty --pid-file
Oct  1 09:10:04 np0005464214 ceph-osd[90500]: bdev(0x55b1adb13800 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Oct  1 09:10:04 np0005464214 ceph-osd[90500]: bdev(0x55b1adb13800 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Oct  1 09:10:04 np0005464214 ceph-osd[90500]: bdev(0x55b1adb13800 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct  1 09:10:04 np0005464214 ceph-osd[90500]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Oct  1 09:10:04 np0005464214 ceph-osd[90500]: bdev(0x55b1ae94b800 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Oct  1 09:10:04 np0005464214 ceph-osd[90500]: bdev(0x55b1ae94b800 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Oct  1 09:10:04 np0005464214 ceph-osd[90500]: bdev(0x55b1ae94b800 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct  1 09:10:04 np0005464214 ceph-osd[90500]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-2/block size 20 GiB
Oct  1 09:10:04 np0005464214 ceph-osd[90500]: bdev(0x55b1ae94b800 /var/lib/ceph/osd/ceph-2/block) close
Oct  1 09:10:04 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  1 09:10:04 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:10:04 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  1 09:10:04 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:10:04 np0005464214 ceph-osd[90500]: bdev(0x55b1adb13800 /var/lib/ceph/osd/ceph-2/block) close
Oct  1 09:10:04 np0005464214 python3[90571]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid eb4b6ead-01d1-53b3-a52a-47dcc600555f -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .osdmap.num_up_osds _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  1 09:10:04 np0005464214 podman[90639]: 2025-10-01 13:10:04.799020494 +0000 UTC m=+0.079300202 container create 396acdcc81a9825419b107255b1ea7f13aab4fad1113dfb9c368d217ba0b8c9b (image=quay.io/ceph/ceph:v18, name=naughty_curie, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct  1 09:10:04 np0005464214 ceph-mgr[75103]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/4245549462; not ready for session (expect reconnect)
Oct  1 09:10:04 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Oct  1 09:10:04 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct  1 09:10:04 np0005464214 ceph-mgr[75103]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Oct  1 09:10:04 np0005464214 podman[90639]: 2025-10-01 13:10:04.748076696 +0000 UTC m=+0.028356424 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  1 09:10:04 np0005464214 systemd[1]: Started libpod-conmon-396acdcc81a9825419b107255b1ea7f13aab4fad1113dfb9c368d217ba0b8c9b.scope.
Oct  1 09:10:04 np0005464214 ceph-osd[90500]: starting osd.2 osd_data /var/lib/ceph/osd/ceph-2 /var/lib/ceph/osd/ceph-2/journal
Oct  1 09:10:04 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:10:04 np0005464214 ceph-osd[90500]: load: jerasure load: lrc 
Oct  1 09:10:04 np0005464214 ceph-osd[90500]: bdev(0x55b1ae9ccc00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Oct  1 09:10:04 np0005464214 ceph-osd[90500]: bdev(0x55b1ae9ccc00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Oct  1 09:10:04 np0005464214 ceph-osd[90500]: bdev(0x55b1ae9ccc00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct  1 09:10:04 np0005464214 ceph-osd[90500]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Oct  1 09:10:04 np0005464214 ceph-osd[90500]: bdev(0x55b1ae9ccc00 /var/lib/ceph/osd/ceph-2/block) close
Oct  1 09:10:04 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e7aa098f4096610e5bf1d23f300d9793601dc98130d353e004a320b4d2b16be1/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 09:10:04 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e7aa098f4096610e5bf1d23f300d9793601dc98130d353e004a320b4d2b16be1/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 09:10:04 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e7aa098f4096610e5bf1d23f300d9793601dc98130d353e004a320b4d2b16be1/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct  1 09:10:04 np0005464214 podman[90639]: 2025-10-01 13:10:04.912450312 +0000 UTC m=+0.192730050 container init 396acdcc81a9825419b107255b1ea7f13aab4fad1113dfb9c368d217ba0b8c9b (image=quay.io/ceph/ceph:v18, name=naughty_curie, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct  1 09:10:04 np0005464214 podman[90639]: 2025-10-01 13:10:04.921564057 +0000 UTC m=+0.201843765 container start 396acdcc81a9825419b107255b1ea7f13aab4fad1113dfb9c368d217ba0b8c9b (image=quay.io/ceph/ceph:v18, name=naughty_curie, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct  1 09:10:04 np0005464214 podman[90639]: 2025-10-01 13:10:04.933552673 +0000 UTC m=+0.213832371 container attach 396acdcc81a9825419b107255b1ea7f13aab4fad1113dfb9c368d217ba0b8c9b (image=quay.io/ceph/ceph:v18, name=naughty_curie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 09:10:05 np0005464214 podman[90707]: 2025-10-01 13:10:05.139122552 +0000 UTC m=+0.063231263 container create 73f3f7bd202adf62de61d0023c363f6e19c8fe954fbd9428061ed25446ebc9c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_noyce, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: bdev(0x55b1ae9ccc00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: bdev(0x55b1ae9ccc00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: bdev(0x55b1ae9ccc00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: bdev(0x55b1ae9ccc00 /var/lib/ceph/osd/ceph-2/block) close
Oct  1 09:10:05 np0005464214 systemd[1]: Started libpod-conmon-73f3f7bd202adf62de61d0023c363f6e19c8fe954fbd9428061ed25446ebc9c8.scope.
Oct  1 09:10:05 np0005464214 podman[90707]: 2025-10-01 13:10:05.102116365 +0000 UTC m=+0.026225066 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 09:10:05 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:10:05 np0005464214 podman[90707]: 2025-10-01 13:10:05.237152249 +0000 UTC m=+0.161261000 container init 73f3f7bd202adf62de61d0023c363f6e19c8fe954fbd9428061ed25446ebc9c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_noyce, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 09:10:05 np0005464214 podman[90707]: 2025-10-01 13:10:05.246252633 +0000 UTC m=+0.170361304 container start 73f3f7bd202adf62de61d0023c363f6e19c8fe954fbd9428061ed25446ebc9c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_noyce, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Oct  1 09:10:05 np0005464214 laughing_noyce[90727]: 167 167
Oct  1 09:10:05 np0005464214 systemd[1]: libpod-73f3f7bd202adf62de61d0023c363f6e19c8fe954fbd9428061ed25446ebc9c8.scope: Deactivated successfully.
Oct  1 09:10:05 np0005464214 podman[90707]: 2025-10-01 13:10:05.259060182 +0000 UTC m=+0.183168863 container attach 73f3f7bd202adf62de61d0023c363f6e19c8fe954fbd9428061ed25446ebc9c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_noyce, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Oct  1 09:10:05 np0005464214 podman[90707]: 2025-10-01 13:10:05.259415902 +0000 UTC m=+0.183524573 container died 73f3f7bd202adf62de61d0023c363f6e19c8fe954fbd9428061ed25446ebc9c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_noyce, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Oct  1 09:10:05 np0005464214 systemd[1]: var-lib-containers-storage-overlay-f3186858516f42c3227d00219d0b46e80f3efccf1556d8b8b8711b8a8d942ece-merged.mount: Deactivated successfully.
Oct  1 09:10:05 np0005464214 podman[90707]: 2025-10-01 13:10:05.324721931 +0000 UTC m=+0.248830612 container remove 73f3f7bd202adf62de61d0023c363f6e19c8fe954fbd9428061ed25446ebc9c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_noyce, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 09:10:05 np0005464214 systemd[1]: libpod-conmon-73f3f7bd202adf62de61d0023c363f6e19c8fe954fbd9428061ed25446ebc9c8.scope: Deactivated successfully.
Oct  1 09:10:05 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:10:05 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:10:05 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e11 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: osd.2:0.OSDShard using op scheduler mclock_scheduler, cutoff=196
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: bdev(0x55b1ae9ccc00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: bdev(0x55b1ae9ccc00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: bdev(0x55b1ae9ccc00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: bdev(0x55b1ae9cd400 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: bdev(0x55b1ae9cd400 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: bdev(0x55b1ae9cd400 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-2/block size 20 GiB
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: bluefs mount
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: bluefs mount shared_bdev_used = 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: bluestore(/var/lib/ceph/osd/ceph-2) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: RocksDB version: 7.9.2
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Git sha 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Compile date 2025-05-06 23:30:25
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: DB SUMMARY
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: DB Session ID:  7GQH8GJG7ZFW7CY52MVW
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: CURRENT file:  CURRENT
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: IDENTITY file:  IDENTITY
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                         Options.error_if_exists: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                       Options.create_if_missing: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                         Options.paranoid_checks: 1
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:             Options.flush_verify_memtable_count: 1
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                                     Options.env: 0x55b1ae99dc70
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                                      Options.fs: LegacyFileSystem
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                                Options.info_log: 0x55b1adb9a8a0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                Options.max_file_opening_threads: 16
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                              Options.statistics: (nil)
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                               Options.use_fsync: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                       Options.max_log_file_size: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                   Options.log_file_time_to_roll: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                       Options.keep_log_file_num: 1000
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                    Options.recycle_log_file_num: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                         Options.allow_fallocate: 1
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                        Options.allow_mmap_reads: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                       Options.allow_mmap_writes: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                        Options.use_direct_reads: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:          Options.create_missing_column_families: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                              Options.db_log_dir: 
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                                 Options.wal_dir: db.wal
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                Options.table_cache_numshardbits: 6
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                         Options.WAL_ttl_seconds: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                       Options.WAL_size_limit_MB: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:             Options.manifest_preallocation_size: 4194304
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                     Options.is_fd_close_on_exec: 1
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                   Options.advise_random_on_open: 1
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                    Options.db_write_buffer_size: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                    Options.write_buffer_manager: 0x55b1aeab0460
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:         Options.access_hint_on_compaction_start: 1
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                      Options.use_adaptive_mutex: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                            Options.rate_limiter: (nil)
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                       Options.wal_recovery_mode: 2
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                  Options.enable_thread_tracking: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                  Options.enable_pipelined_write: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                  Options.unordered_write: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:             Options.write_thread_max_yield_usec: 100
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                               Options.row_cache: None
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                              Options.wal_filter: None
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:             Options.avoid_flush_during_recovery: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:             Options.allow_ingest_behind: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:             Options.two_write_queues: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:             Options.manual_wal_flush: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:             Options.wal_compression: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:             Options.atomic_flush: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                 Options.persist_stats_to_disk: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                 Options.write_dbid_to_manifest: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                 Options.log_readahead_size: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                 Options.best_efforts_recovery: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:             Options.allow_data_in_errors: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:             Options.db_host_id: __hostname__
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:             Options.enforce_single_del_contracts: true
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:             Options.max_background_jobs: 4
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:             Options.max_background_compactions: -1
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:             Options.max_subcompactions: 1
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:           Options.writable_file_max_buffer_size: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:             Options.delayed_write_rate : 16777216
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:             Options.max_total_wal_size: 1073741824
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                   Options.stats_dump_period_sec: 600
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                 Options.stats_persist_period_sec: 600
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                          Options.max_open_files: -1
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                          Options.bytes_per_sync: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                      Options.wal_bytes_per_sync: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                   Options.strict_bytes_per_sync: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:       Options.compaction_readahead_size: 2097152
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                  Options.max_background_flushes: -1
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Compression algorithms supported:
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: #011kZSTD supported: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: #011kXpressCompression supported: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: #011kBZip2Compression supported: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: #011kZSTDNotFinalCompression supported: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: #011kLZ4Compression supported: 1
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: #011kZlibCompression supported: 1
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: #011kLZ4HCCompression supported: 1
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: #011kSnappyCompression supported: 1
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Fast CRC32 supported: Supported on x86
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: DMutex implementation: pthread_mutex_t
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:        Options.compaction_filter: None
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:        Options.compaction_filter_factory: None
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:  Options.sst_partitioner_factory: None
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b1adb9a2c0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55b1adb871f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:        Options.write_buffer_size: 16777216
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:  Options.max_write_buffer_number: 64
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:          Options.compression: LZ4
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:       Options.prefix_extractor: nullptr
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:             Options.num_levels: 7
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                  Options.compression_opts.level: 32767
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:               Options.compression_opts.strategy: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                  Options.compression_opts.enabled: false
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                        Options.arena_block_size: 1048576
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                Options.disable_auto_compactions: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                   Options.inplace_update_support: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                           Options.bloom_locality: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                    Options.max_successive_merges: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                Options.paranoid_file_checks: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                Options.force_consistency_checks: 1
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                Options.report_bg_io_stats: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                               Options.ttl: 2592000
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                       Options.enable_blob_files: false
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                           Options.min_blob_size: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                          Options.blob_file_size: 268435456
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                Options.blob_file_starting_level: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:           Options.merge_operator: None
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:        Options.compaction_filter: None
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:        Options.compaction_filter_factory: None
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:  Options.sst_partitioner_factory: None
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b1adb9a2c0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55b1adb871f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:        Options.write_buffer_size: 16777216
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:  Options.max_write_buffer_number: 64
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:          Options.compression: LZ4
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:       Options.prefix_extractor: nullptr
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:             Options.num_levels: 7
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                  Options.compression_opts.level: 32767
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:               Options.compression_opts.strategy: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                  Options.compression_opts.enabled: false
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                        Options.arena_block_size: 1048576
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                Options.disable_auto_compactions: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                   Options.inplace_update_support: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                           Options.bloom_locality: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                    Options.max_successive_merges: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                Options.paranoid_file_checks: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                Options.force_consistency_checks: 1
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                Options.report_bg_io_stats: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                               Options.ttl: 2592000
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                       Options.enable_blob_files: false
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                           Options.min_blob_size: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                          Options.blob_file_size: 268435456
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                Options.blob_file_starting_level: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:           Options.merge_operator: None
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:        Options.compaction_filter: None
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:        Options.compaction_filter_factory: None
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:  Options.sst_partitioner_factory: None
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b1adb9a2c0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55b1adb871f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:        Options.write_buffer_size: 16777216
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:  Options.max_write_buffer_number: 64
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:          Options.compression: LZ4
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:       Options.prefix_extractor: nullptr
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:             Options.num_levels: 7
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                  Options.compression_opts.level: 32767
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:               Options.compression_opts.strategy: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                  Options.compression_opts.enabled: false
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                        Options.arena_block_size: 1048576
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                Options.disable_auto_compactions: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                   Options.inplace_update_support: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                           Options.bloom_locality: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                    Options.max_successive_merges: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                Options.paranoid_file_checks: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                Options.force_consistency_checks: 1
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                Options.report_bg_io_stats: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                               Options.ttl: 2592000
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                       Options.enable_blob_files: false
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                           Options.min_blob_size: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                          Options.blob_file_size: 268435456
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                Options.blob_file_starting_level: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:           Options.merge_operator: None
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:        Options.compaction_filter: None
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:        Options.compaction_filter_factory: None
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:  Options.sst_partitioner_factory: None
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b1adb9a2c0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55b1adb871f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:        Options.write_buffer_size: 16777216
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:  Options.max_write_buffer_number: 64
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:          Options.compression: LZ4
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:       Options.prefix_extractor: nullptr
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:             Options.num_levels: 7
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                  Options.compression_opts.level: 32767
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:               Options.compression_opts.strategy: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                  Options.compression_opts.enabled: false
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                        Options.arena_block_size: 1048576
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                Options.disable_auto_compactions: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                   Options.inplace_update_support: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                           Options.bloom_locality: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                    Options.max_successive_merges: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                Options.paranoid_file_checks: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                Options.force_consistency_checks: 1
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                Options.report_bg_io_stats: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                               Options.ttl: 2592000
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                       Options.enable_blob_files: false
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                           Options.min_blob_size: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                          Options.blob_file_size: 268435456
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                Options.blob_file_starting_level: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:           Options.merge_operator: None
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:        Options.compaction_filter: None
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:        Options.compaction_filter_factory: None
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:  Options.sst_partitioner_factory: None
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b1adb9a2c0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55b1adb871f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:        Options.write_buffer_size: 16777216
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:  Options.max_write_buffer_number: 64
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:          Options.compression: LZ4
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:       Options.prefix_extractor: nullptr
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:             Options.num_levels: 7
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                  Options.compression_opts.level: 32767
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:               Options.compression_opts.strategy: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                  Options.compression_opts.enabled: false
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                        Options.arena_block_size: 1048576
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                Options.disable_auto_compactions: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                   Options.inplace_update_support: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                           Options.bloom_locality: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                    Options.max_successive_merges: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                Options.paranoid_file_checks: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                Options.force_consistency_checks: 1
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                Options.report_bg_io_stats: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                               Options.ttl: 2592000
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                       Options.enable_blob_files: false
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                           Options.min_blob_size: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                          Options.blob_file_size: 268435456
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                Options.blob_file_starting_level: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:           Options.merge_operator: None
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:        Options.compaction_filter: None
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:        Options.compaction_filter_factory: None
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:  Options.sst_partitioner_factory: None
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b1adb9a2c0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55b1adb871f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:        Options.write_buffer_size: 16777216
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:  Options.max_write_buffer_number: 64
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:          Options.compression: LZ4
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:       Options.prefix_extractor: nullptr
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:             Options.num_levels: 7
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                  Options.compression_opts.level: 32767
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:               Options.compression_opts.strategy: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                  Options.compression_opts.enabled: false
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                        Options.arena_block_size: 1048576
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                Options.disable_auto_compactions: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                   Options.inplace_update_support: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                           Options.bloom_locality: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                    Options.max_successive_merges: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                Options.paranoid_file_checks: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                Options.force_consistency_checks: 1
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                Options.report_bg_io_stats: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                               Options.ttl: 2592000
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                       Options.enable_blob_files: false
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                           Options.min_blob_size: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                          Options.blob_file_size: 268435456
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                Options.blob_file_starting_level: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:           Options.merge_operator: None
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:        Options.compaction_filter: None
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:        Options.compaction_filter_factory: None
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:  Options.sst_partitioner_factory: None
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b1adb9a2c0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55b1adb871f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:        Options.write_buffer_size: 16777216
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:  Options.max_write_buffer_number: 64
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:          Options.compression: LZ4
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:       Options.prefix_extractor: nullptr
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:             Options.num_levels: 7
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                  Options.compression_opts.level: 32767
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:               Options.compression_opts.strategy: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                  Options.compression_opts.enabled: false
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                        Options.arena_block_size: 1048576
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                Options.disable_auto_compactions: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                   Options.inplace_update_support: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                           Options.bloom_locality: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                    Options.max_successive_merges: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                Options.paranoid_file_checks: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                Options.force_consistency_checks: 1
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                Options.report_bg_io_stats: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                               Options.ttl: 2592000
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                       Options.enable_blob_files: false
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                           Options.min_blob_size: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                          Options.blob_file_size: 268435456
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                Options.blob_file_starting_level: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:           Options.merge_operator: None
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:        Options.compaction_filter: None
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:        Options.compaction_filter_factory: None
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:  Options.sst_partitioner_factory: None
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b1adb9a240)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55b1adb87090#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:        Options.write_buffer_size: 16777216
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:  Options.max_write_buffer_number: 64
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:          Options.compression: LZ4
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:       Options.prefix_extractor: nullptr
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:             Options.num_levels: 7
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                  Options.compression_opts.level: 32767
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:               Options.compression_opts.strategy: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                  Options.compression_opts.enabled: false
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                        Options.arena_block_size: 1048576
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                Options.disable_auto_compactions: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                   Options.inplace_update_support: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                           Options.bloom_locality: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                    Options.max_successive_merges: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                Options.paranoid_file_checks: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                Options.force_consistency_checks: 1
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                Options.report_bg_io_stats: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                               Options.ttl: 2592000
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                       Options.enable_blob_files: false
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                           Options.min_blob_size: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                          Options.blob_file_size: 268435456
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                Options.blob_file_starting_level: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:           Options.merge_operator: None
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:        Options.compaction_filter: None
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:        Options.compaction_filter_factory: None
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:  Options.sst_partitioner_factory: None
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b1adb9a240)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55b1adb87090#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:        Options.write_buffer_size: 16777216
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:  Options.max_write_buffer_number: 64
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:          Options.compression: LZ4
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:       Options.prefix_extractor: nullptr
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:             Options.num_levels: 7
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                  Options.compression_opts.level: 32767
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:               Options.compression_opts.strategy: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                  Options.compression_opts.enabled: false
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                        Options.arena_block_size: 1048576
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                Options.disable_auto_compactions: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                   Options.inplace_update_support: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                           Options.bloom_locality: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                    Options.max_successive_merges: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                Options.paranoid_file_checks: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                Options.force_consistency_checks: 1
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                Options.report_bg_io_stats: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                               Options.ttl: 2592000
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                       Options.enable_blob_files: false
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                           Options.min_blob_size: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                          Options.blob_file_size: 268435456
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                Options.blob_file_starting_level: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:           Options.merge_operator: None
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:        Options.compaction_filter: None
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:        Options.compaction_filter_factory: None
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:  Options.sst_partitioner_factory: None
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b1adb9a240)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55b1adb87090#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:        Options.write_buffer_size: 16777216
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:  Options.max_write_buffer_number: 64
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:          Options.compression: LZ4
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:       Options.prefix_extractor: nullptr
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:             Options.num_levels: 7
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                  Options.compression_opts.level: 32767
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:               Options.compression_opts.strategy: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                  Options.compression_opts.enabled: false
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                        Options.arena_block_size: 1048576
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                Options.disable_auto_compactions: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                   Options.inplace_update_support: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                           Options.bloom_locality: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                    Options.max_successive_merges: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                Options.paranoid_file_checks: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                Options.force_consistency_checks: 1
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                Options.report_bg_io_stats: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                               Options.ttl: 2592000
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                       Options.enable_blob_files: false
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                           Options.min_blob_size: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                          Options.blob_file_size: 268435456
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                Options.blob_file_starting_level: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: d2133678-e23b-4ce6-a6b1-f49e8e1c0754
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759324205479113, "job": 1, "event": "recovery_started", "wal_files": [31]}
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759324205479390, "job": 1, "event": "recovery_finished"}
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: bluestore(/var/lib/ceph/osd/ceph-2) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta old nid_max 1025
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta old blobid_max 10240
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta ondisk_format 4 compat_ondisk_format 3
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta min_alloc_size 0x1000
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: freelist init
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: freelist _read_cfg
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: bluestore(/var/lib/ceph/osd/ceph-2) _init_alloc loaded 20 GiB in 2 extents, allocator type hybrid, capacity 0x4ffc00000, block size 0x1000, free 0x4ffbfd000, fragmentation 1.9e-07
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: bluefs umount
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: bdev(0x55b1ae9cd400 /var/lib/ceph/osd/ceph-2/block) close
Oct  1 09:10:05 np0005464214 podman[90780]: 2025-10-01 13:10:05.515702922 +0000 UTC m=+0.044293612 container create 8c6b9364d5448768401dfb5f199634d0b0c114776ca0e57dc74e13415b5904ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_edison, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Oct  1 09:10:05 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0) v1
Oct  1 09:10:05 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2715268222' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Oct  1 09:10:05 np0005464214 naughty_curie[90661]: 
Oct  1 09:10:05 np0005464214 naughty_curie[90661]: {"fsid":"eb4b6ead-01d1-53b3-a52a-47dcc600555f","health":{"status":"HEALTH_OK","checks":{},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":125,"monmap":{"epoch":1,"min_mon_release_name":"reef","num_mons":1},"osdmap":{"epoch":11,"num_osds":3,"num_up_osds":1,"osd_up_since":1759324201,"num_in_osds":3,"osd_in_since":1759324184,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"unknown","count":1}],"num_pgs":1,"num_pools":1,"num_objects":0,"data_bytes":0,"bytes_used":446984192,"bytes_avail":21023657984,"bytes_total":21470642176,"unknown_pgs_ratio":1},"fsmap":{"epoch":1,"by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":2,"modified":"2025-10-01T13:09:49.717098+0000","services":{}},"progress_events":{}}
Oct  1 09:10:05 np0005464214 podman[90639]: 2025-10-01 13:10:05.561531436 +0000 UTC m=+0.841811124 container died 396acdcc81a9825419b107255b1ea7f13aab4fad1113dfb9c368d217ba0b8c9b (image=quay.io/ceph/ceph:v18, name=naughty_curie, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Oct  1 09:10:05 np0005464214 systemd[1]: Started libpod-conmon-8c6b9364d5448768401dfb5f199634d0b0c114776ca0e57dc74e13415b5904ba.scope.
Oct  1 09:10:05 np0005464214 systemd[1]: libpod-396acdcc81a9825419b107255b1ea7f13aab4fad1113dfb9c368d217ba0b8c9b.scope: Deactivated successfully.
Oct  1 09:10:05 np0005464214 podman[90780]: 2025-10-01 13:10:05.501177095 +0000 UTC m=+0.029767765 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 09:10:05 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:10:05 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a23ea140ff85bd61a136608858fab236392475f0b38325e883e423e265081fe9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 09:10:05 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a23ea140ff85bd61a136608858fab236392475f0b38325e883e423e265081fe9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 09:10:05 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a23ea140ff85bd61a136608858fab236392475f0b38325e883e423e265081fe9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 09:10:05 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a23ea140ff85bd61a136608858fab236392475f0b38325e883e423e265081fe9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 09:10:05 np0005464214 systemd[1]: var-lib-containers-storage-overlay-e7aa098f4096610e5bf1d23f300d9793601dc98130d353e004a320b4d2b16be1-merged.mount: Deactivated successfully.
Oct  1 09:10:05 np0005464214 podman[90780]: 2025-10-01 13:10:05.642544206 +0000 UTC m=+0.171134876 container init 8c6b9364d5448768401dfb5f199634d0b0c114776ca0e57dc74e13415b5904ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_edison, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True)
Oct  1 09:10:05 np0005464214 podman[90780]: 2025-10-01 13:10:05.651400384 +0000 UTC m=+0.179991054 container start 8c6b9364d5448768401dfb5f199634d0b0c114776ca0e57dc74e13415b5904ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_edison, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 09:10:05 np0005464214 podman[90780]: 2025-10-01 13:10:05.666794445 +0000 UTC m=+0.195385165 container attach 8c6b9364d5448768401dfb5f199634d0b0c114776ca0e57dc74e13415b5904ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_edison, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct  1 09:10:05 np0005464214 podman[90639]: 2025-10-01 13:10:05.674331937 +0000 UTC m=+0.954611645 container remove 396acdcc81a9825419b107255b1ea7f13aab4fad1113dfb9c368d217ba0b8c9b (image=quay.io/ceph/ceph:v18, name=naughty_curie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Oct  1 09:10:05 np0005464214 systemd[1]: libpod-conmon-396acdcc81a9825419b107255b1ea7f13aab4fad1113dfb9c368d217ba0b8c9b.scope: Deactivated successfully.
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: bdev(0x55b1ae9cd400 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: bdev(0x55b1ae9cd400 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: bdev(0x55b1ae9cd400 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-2/block size 20 GiB
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: bluefs mount
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Oct  1 09:10:05 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v41: 1 pgs: 1 unknown; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: bluefs mount shared_bdev_used = 4718592
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: bluestore(/var/lib/ceph/osd/ceph-2) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: RocksDB version: 7.9.2
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Git sha 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Compile date 2025-05-06 23:30:25
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: DB SUMMARY
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: DB Session ID:  7GQH8GJG7ZFW7CY52MVX
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: CURRENT file:  CURRENT
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: IDENTITY file:  IDENTITY
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                         Options.error_if_exists: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                       Options.create_if_missing: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                         Options.paranoid_checks: 1
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:             Options.flush_verify_memtable_count: 1
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                                     Options.env: 0x55b1aeb58460
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                                      Options.fs: LegacyFileSystem
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                                Options.info_log: 0x55b1adb9a600
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                Options.max_file_opening_threads: 16
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                              Options.statistics: (nil)
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                               Options.use_fsync: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                       Options.max_log_file_size: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                   Options.log_file_time_to_roll: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                       Options.keep_log_file_num: 1000
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                    Options.recycle_log_file_num: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                         Options.allow_fallocate: 1
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                        Options.allow_mmap_reads: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                       Options.allow_mmap_writes: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                        Options.use_direct_reads: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:          Options.create_missing_column_families: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                              Options.db_log_dir: 
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                                 Options.wal_dir: db.wal
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                Options.table_cache_numshardbits: 6
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                         Options.WAL_ttl_seconds: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                       Options.WAL_size_limit_MB: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:             Options.manifest_preallocation_size: 4194304
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                     Options.is_fd_close_on_exec: 1
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                   Options.advise_random_on_open: 1
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                    Options.db_write_buffer_size: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                    Options.write_buffer_manager: 0x55b1aeab0460
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:         Options.access_hint_on_compaction_start: 1
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                      Options.use_adaptive_mutex: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                            Options.rate_limiter: (nil)
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                       Options.wal_recovery_mode: 2
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                  Options.enable_thread_tracking: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                  Options.enable_pipelined_write: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                  Options.unordered_write: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:             Options.write_thread_max_yield_usec: 100
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                               Options.row_cache: None
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                              Options.wal_filter: None
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:             Options.avoid_flush_during_recovery: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:             Options.allow_ingest_behind: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:             Options.two_write_queues: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:             Options.manual_wal_flush: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:             Options.wal_compression: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:             Options.atomic_flush: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                 Options.persist_stats_to_disk: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                 Options.write_dbid_to_manifest: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                 Options.log_readahead_size: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                 Options.best_efforts_recovery: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:             Options.allow_data_in_errors: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:             Options.db_host_id: __hostname__
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:             Options.enforce_single_del_contracts: true
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:             Options.max_background_jobs: 4
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:             Options.max_background_compactions: -1
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:             Options.max_subcompactions: 1
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:           Options.writable_file_max_buffer_size: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:             Options.delayed_write_rate : 16777216
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:             Options.max_total_wal_size: 1073741824
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                   Options.stats_dump_period_sec: 600
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                 Options.stats_persist_period_sec: 600
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                          Options.max_open_files: -1
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                          Options.bytes_per_sync: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                      Options.wal_bytes_per_sync: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                   Options.strict_bytes_per_sync: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:       Options.compaction_readahead_size: 2097152
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                  Options.max_background_flushes: -1
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Compression algorithms supported:
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: #011kZSTD supported: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: #011kXpressCompression supported: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: #011kBZip2Compression supported: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: #011kZSTDNotFinalCompression supported: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: #011kLZ4Compression supported: 1
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: #011kZlibCompression supported: 1
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: #011kLZ4HCCompression supported: 1
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: #011kSnappyCompression supported: 1
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Fast CRC32 supported: Supported on x86
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: DMutex implementation: pthread_mutex_t
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:        Options.compaction_filter: None
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:        Options.compaction_filter_factory: None
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:  Options.sst_partitioner_factory: None
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b1adb9aa20)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55b1adb871f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:        Options.write_buffer_size: 16777216
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:  Options.max_write_buffer_number: 64
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:          Options.compression: LZ4
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:       Options.prefix_extractor: nullptr
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:             Options.num_levels: 7
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                  Options.compression_opts.level: 32767
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:               Options.compression_opts.strategy: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                  Options.compression_opts.enabled: false
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                        Options.arena_block_size: 1048576
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                Options.disable_auto_compactions: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                   Options.inplace_update_support: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                           Options.bloom_locality: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                    Options.max_successive_merges: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                Options.paranoid_file_checks: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                Options.force_consistency_checks: 1
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                Options.report_bg_io_stats: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                               Options.ttl: 2592000
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                       Options.enable_blob_files: false
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                           Options.min_blob_size: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                          Options.blob_file_size: 268435456
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                Options.blob_file_starting_level: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:           Options.merge_operator: None
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:        Options.compaction_filter: None
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:        Options.compaction_filter_factory: None
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:  Options.sst_partitioner_factory: None
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b1adb9aa20)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55b1adb871f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:        Options.write_buffer_size: 16777216
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:  Options.max_write_buffer_number: 64
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:          Options.compression: LZ4
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:       Options.prefix_extractor: nullptr
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:             Options.num_levels: 7
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                  Options.compression_opts.level: 32767
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:               Options.compression_opts.strategy: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                  Options.compression_opts.enabled: false
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                        Options.arena_block_size: 1048576
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                Options.disable_auto_compactions: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                   Options.inplace_update_support: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                           Options.bloom_locality: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                    Options.max_successive_merges: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                Options.paranoid_file_checks: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                Options.force_consistency_checks: 1
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                Options.report_bg_io_stats: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                               Options.ttl: 2592000
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                       Options.enable_blob_files: false
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                           Options.min_blob_size: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                          Options.blob_file_size: 268435456
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                Options.blob_file_starting_level: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:           Options.merge_operator: None
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:        Options.compaction_filter: None
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:        Options.compaction_filter_factory: None
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:  Options.sst_partitioner_factory: None
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b1adb9aa20)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55b1adb871f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:        Options.write_buffer_size: 16777216
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:  Options.max_write_buffer_number: 64
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:          Options.compression: LZ4
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:       Options.prefix_extractor: nullptr
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:             Options.num_levels: 7
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                  Options.compression_opts.level: 32767
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:               Options.compression_opts.strategy: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                  Options.compression_opts.enabled: false
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                        Options.arena_block_size: 1048576
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                Options.disable_auto_compactions: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                   Options.inplace_update_support: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                           Options.bloom_locality: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                    Options.max_successive_merges: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                Options.paranoid_file_checks: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                Options.force_consistency_checks: 1
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                Options.report_bg_io_stats: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                               Options.ttl: 2592000
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                       Options.enable_blob_files: false
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                           Options.min_blob_size: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                          Options.blob_file_size: 268435456
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                Options.blob_file_starting_level: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:           Options.merge_operator: None
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:        Options.compaction_filter: None
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:        Options.compaction_filter_factory: None
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:  Options.sst_partitioner_factory: None
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b1adb9aa20)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55b1adb871f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:        Options.write_buffer_size: 16777216
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:  Options.max_write_buffer_number: 64
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:          Options.compression: LZ4
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:       Options.prefix_extractor: nullptr
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:             Options.num_levels: 7
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                  Options.compression_opts.level: 32767
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:               Options.compression_opts.strategy: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                  Options.compression_opts.enabled: false
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                        Options.arena_block_size: 1048576
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                Options.disable_auto_compactions: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                   Options.inplace_update_support: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                           Options.bloom_locality: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                    Options.max_successive_merges: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                Options.paranoid_file_checks: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                Options.force_consistency_checks: 1
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                Options.report_bg_io_stats: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                               Options.ttl: 2592000
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                       Options.enable_blob_files: false
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                           Options.min_blob_size: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                          Options.blob_file_size: 268435456
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                Options.blob_file_starting_level: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:           Options.merge_operator: None
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:        Options.compaction_filter: None
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:        Options.compaction_filter_factory: None
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:  Options.sst_partitioner_factory: None
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b1adb9aa20)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55b1adb871f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:        Options.write_buffer_size: 16777216
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:  Options.max_write_buffer_number: 64
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:          Options.compression: LZ4
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:       Options.prefix_extractor: nullptr
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:             Options.num_levels: 7
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                  Options.compression_opts.level: 32767
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:               Options.compression_opts.strategy: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                  Options.compression_opts.enabled: false
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                        Options.arena_block_size: 1048576
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                Options.disable_auto_compactions: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                   Options.inplace_update_support: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                           Options.bloom_locality: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                    Options.max_successive_merges: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                Options.paranoid_file_checks: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                Options.force_consistency_checks: 1
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                Options.report_bg_io_stats: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                               Options.ttl: 2592000
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                       Options.enable_blob_files: false
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                           Options.min_blob_size: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                          Options.blob_file_size: 268435456
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                Options.blob_file_starting_level: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:           Options.merge_operator: None
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:        Options.compaction_filter: None
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:        Options.compaction_filter_factory: None
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:  Options.sst_partitioner_factory: None
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b1adb9aa20)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55b1adb871f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:        Options.write_buffer_size: 16777216
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:  Options.max_write_buffer_number: 64
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:          Options.compression: LZ4
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:       Options.prefix_extractor: nullptr
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:             Options.num_levels: 7
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                  Options.compression_opts.level: 32767
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:               Options.compression_opts.strategy: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                  Options.compression_opts.enabled: false
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                        Options.arena_block_size: 1048576
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                Options.disable_auto_compactions: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                   Options.inplace_update_support: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                           Options.bloom_locality: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                    Options.max_successive_merges: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                Options.paranoid_file_checks: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                Options.force_consistency_checks: 1
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                Options.report_bg_io_stats: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                               Options.ttl: 2592000
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                       Options.enable_blob_files: false
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                           Options.min_blob_size: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                          Options.blob_file_size: 268435456
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                Options.blob_file_starting_level: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:           Options.merge_operator: None
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:        Options.compaction_filter: None
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:        Options.compaction_filter_factory: None
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:  Options.sst_partitioner_factory: None
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b1adb9aa20)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55b1adb871f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:        Options.write_buffer_size: 16777216
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:  Options.max_write_buffer_number: 64
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:          Options.compression: LZ4
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:       Options.prefix_extractor: nullptr
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:             Options.num_levels: 7
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                  Options.compression_opts.level: 32767
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:               Options.compression_opts.strategy: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                  Options.compression_opts.enabled: false
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                        Options.arena_block_size: 1048576
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                Options.disable_auto_compactions: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                   Options.inplace_update_support: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                           Options.bloom_locality: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                    Options.max_successive_merges: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                Options.paranoid_file_checks: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                Options.force_consistency_checks: 1
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                Options.report_bg_io_stats: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                               Options.ttl: 2592000
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                       Options.enable_blob_files: false
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                           Options.min_blob_size: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                          Options.blob_file_size: 268435456
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                Options.blob_file_starting_level: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:           Options.merge_operator: None
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:        Options.compaction_filter: None
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:        Options.compaction_filter_factory: None
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:  Options.sst_partitioner_factory: None
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b1adb9a380)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55b1adb87090
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:        Options.write_buffer_size: 16777216
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:  Options.max_write_buffer_number: 64
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:          Options.compression: LZ4
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:       Options.prefix_extractor: nullptr
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:             Options.num_levels: 7
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                  Options.compression_opts.level: 32767
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:               Options.compression_opts.strategy: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                  Options.compression_opts.enabled: false
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                        Options.arena_block_size: 1048576
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                Options.disable_auto_compactions: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                   Options.inplace_update_support: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                           Options.bloom_locality: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                    Options.max_successive_merges: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                Options.paranoid_file_checks: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                Options.force_consistency_checks: 1
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                Options.report_bg_io_stats: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                               Options.ttl: 2592000
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                       Options.enable_blob_files: false
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                           Options.min_blob_size: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                          Options.blob_file_size: 268435456
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                Options.blob_file_starting_level: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:           Options.merge_operator: None
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:        Options.compaction_filter: None
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:        Options.compaction_filter_factory: None
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:  Options.sst_partitioner_factory: None
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b1adb9a380)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55b1adb87090
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:        Options.write_buffer_size: 16777216
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:  Options.max_write_buffer_number: 64
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:          Options.compression: LZ4
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:       Options.prefix_extractor: nullptr
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:             Options.num_levels: 7
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                  Options.compression_opts.level: 32767
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:               Options.compression_opts.strategy: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                  Options.compression_opts.enabled: false
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                        Options.arena_block_size: 1048576
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                Options.disable_auto_compactions: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                   Options.inplace_update_support: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                           Options.bloom_locality: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                    Options.max_successive_merges: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                Options.paranoid_file_checks: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                Options.force_consistency_checks: 1
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                Options.report_bg_io_stats: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                               Options.ttl: 2592000
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                       Options.enable_blob_files: false
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                           Options.min_blob_size: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                          Options.blob_file_size: 268435456
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                Options.blob_file_starting_level: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:           Options.merge_operator: None
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:        Options.compaction_filter: None
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:        Options.compaction_filter_factory: None
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:  Options.sst_partitioner_factory: None
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b1adb9a380)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55b1adb87090
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:        Options.write_buffer_size: 16777216
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:  Options.max_write_buffer_number: 64
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:          Options.compression: LZ4
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:       Options.prefix_extractor: nullptr
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:             Options.num_levels: 7
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                  Options.compression_opts.level: 32767
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:               Options.compression_opts.strategy: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                  Options.compression_opts.enabled: false
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                        Options.arena_block_size: 1048576
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                Options.disable_auto_compactions: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                   Options.inplace_update_support: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                           Options.bloom_locality: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                    Options.max_successive_merges: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                Options.paranoid_file_checks: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                Options.force_consistency_checks: 1
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                Options.report_bg_io_stats: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                               Options.ttl: 2592000
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                       Options.enable_blob_files: false
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                           Options.min_blob_size: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                          Options.blob_file_size: 268435456
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb:                Options.blob_file_starting_level: 0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: [db/column_family.cc:635] 	(skipping printing options)
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: [db/column_family.cc:635] 	(skipping printing options)
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file: db/MANIFEST-000032 succeeded, manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5, prev_log_number is 0, max_column_family is 11, min_log_number_to_keep is 5
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: d2133678-e23b-4ce6-a6b1-f49e8e1c0754
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759324205753956, "job": 1, "event": "recovery_started", "wal_files": [31]}
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759324205759316, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1272, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 128, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759324205, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d2133678-e23b-4ce6-a6b1-f49e8e1c0754", "db_session_id": "7GQH8GJG7ZFW7CY52MVX", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759324205761564, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1594, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 468, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 567, "raw_average_value_size": 283, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759324205, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d2133678-e23b-4ce6-a6b1-f49e8e1c0754", "db_session_id": "7GQH8GJG7ZFW7CY52MVX", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759324205763648, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759324205, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d2133678-e23b-4ce6-a6b1-f49e8e1c0754", "db_session_id": "7GQH8GJG7ZFW7CY52MVX", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759324205768540, "job": 1, "event": "recovery_finished"}
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: [db/version_set.cc:5047] Creating manifest 40
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x55b1adcf4000
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: DB pointer 0x55b1aea8fa00
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: bluestore(/var/lib/ceph/osd/ceph-2) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: bluestore(/var/lib/ceph/osd/ceph-2) _upgrade_super from 4, latest 4
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: bluestore(/var/lib/ceph/osd/ceph-2) _upgrade_super done
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 0.1 total, 0.1 interval#012Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s#012Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.1 total, 0.1 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55b1adb871f0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 1.1e-05 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.1 total, 0.1 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55b1adb871f0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 1.1e-05 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.1 total, 0.1 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55b1adb871f0#2 capacity: 460.80 MB usag
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/hello/cls_hello.cc:316: loading cls_hello
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: _get_class not permitted to load lua
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: _get_class not permitted to load sdk
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: _get_class not permitted to load test_remote_reads
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: osd.2 0 crush map has features 288232575208783872, adjusting msgr requires for clients
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: osd.2 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: osd.2 0 crush map has features 288232575208783872, adjusting msgr requires for osds
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: osd.2 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: osd.2 0 load_pgs
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: osd.2 0 load_pgs opened 0 pgs
Oct  1 09:10:05 np0005464214 ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-osd-2[90496]: 2025-10-01T13:10:05.808+0000 7f715a025740 -1 osd.2 0 log_to_monitors true
Oct  1 09:10:05 np0005464214 ceph-osd[90500]: osd.2 0 log_to_monitors true
Oct  1 09:10:05 np0005464214 ceph-mgr[75103]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/4245549462; not ready for session (expect reconnect)
Oct  1 09:10:05 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Oct  1 09:10:05 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct  1 09:10:05 np0005464214 ceph-mgr[75103]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Oct  1 09:10:05 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]} v 0) v1
Oct  1 09:10:05 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='osd.2 [v2:192.168.122.100:6810/2247178069,v1:192.168.122.100:6811/2247178069]' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
Oct  1 09:10:05 np0005464214 ceph-osd[89484]: osd.1 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 33.124 iops: 8479.669 elapsed_sec: 0.354
Oct  1 09:10:05 np0005464214 ceph-osd[89484]: log_channel(cluster) log [WRN] : OSD bench result of 8479.668600 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.1. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Oct  1 09:10:05 np0005464214 ceph-osd[89484]: osd.1 0 waiting for initial osdmap
Oct  1 09:10:05 np0005464214 ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-osd-1[89480]: 2025-10-01T13:10:05.997+0000 7f5cbd66b640 -1 osd.1 0 waiting for initial osdmap
Oct  1 09:10:06 np0005464214 ceph-osd[89484]: osd.1 11 crush map has features 288514051259236352, adjusting msgr requires for clients
Oct  1 09:10:06 np0005464214 ceph-osd[89484]: osd.1 11 crush map has features 288514051259236352 was 288232575208792577, adjusting msgr requires for mons
Oct  1 09:10:06 np0005464214 ceph-osd[89484]: osd.1 11 crush map has features 3314933000852226048, adjusting msgr requires for osds
Oct  1 09:10:06 np0005464214 ceph-osd[89484]: osd.1 11 check_osdmap_features require_osd_release unknown -> reef
Oct  1 09:10:06 np0005464214 ceph-osd[89484]: osd.1 11 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Oct  1 09:10:06 np0005464214 ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-osd-1[89480]: 2025-10-01T13:10:06.018+0000 7f5cb847c640 -1 osd.1 11 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Oct  1 09:10:06 np0005464214 ceph-osd[89484]: osd.1 11 set_numa_affinity not setting numa affinity
Oct  1 09:10:06 np0005464214 ceph-osd[89484]: osd.1 11 _collect_metadata loop4:  no unique device id for loop4: fallback method has no model nor serial
Oct  1 09:10:06 np0005464214 python3[91239]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid eb4b6ead-01d1-53b3-a52a-47dcc600555f -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create vms  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  1 09:10:06 np0005464214 podman[91240]: 2025-10-01 13:10:06.20705709 +0000 UTC m=+0.049868377 container create 56e9adabe4135b054da5feb2e70b1e0aa19102fe6e230bc4f58b2b59f3716223 (image=quay.io/ceph/ceph:v18, name=pedantic_edison, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct  1 09:10:06 np0005464214 systemd[1]: Started libpod-conmon-56e9adabe4135b054da5feb2e70b1e0aa19102fe6e230bc4f58b2b59f3716223.scope.
Oct  1 09:10:06 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:10:06 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4705cdcec5a9f1a9b3949a67ae92f5a2472982adf67814bbf6539386925d217c/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 09:10:06 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4705cdcec5a9f1a9b3949a67ae92f5a2472982adf67814bbf6539386925d217c/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 09:10:06 np0005464214 podman[91240]: 2025-10-01 13:10:06.183600814 +0000 UTC m=+0.026412131 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  1 09:10:06 np0005464214 podman[91240]: 2025-10-01 13:10:06.291110415 +0000 UTC m=+0.133921702 container init 56e9adabe4135b054da5feb2e70b1e0aa19102fe6e230bc4f58b2b59f3716223 (image=quay.io/ceph/ceph:v18, name=pedantic_edison, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Oct  1 09:10:06 np0005464214 podman[91240]: 2025-10-01 13:10:06.302136094 +0000 UTC m=+0.144947361 container start 56e9adabe4135b054da5feb2e70b1e0aa19102fe6e230bc4f58b2b59f3716223 (image=quay.io/ceph/ceph:v18, name=pedantic_edison, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 09:10:06 np0005464214 podman[91240]: 2025-10-01 13:10:06.305835488 +0000 UTC m=+0.148646765 container attach 56e9adabe4135b054da5feb2e70b1e0aa19102fe6e230bc4f58b2b59f3716223 (image=quay.io/ceph/ceph:v18, name=pedantic_edison, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct  1 09:10:06 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e11 do_prune osdmap full prune enabled
Oct  1 09:10:06 np0005464214 ceph-mon[74802]: from='osd.2 [v2:192.168.122.100:6810/2247178069,v1:192.168.122.100:6811/2247178069]' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
Oct  1 09:10:06 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='osd.2 [v2:192.168.122.100:6810/2247178069,v1:192.168.122.100:6811/2247178069]' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished
Oct  1 09:10:06 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e12 e12: 3 total, 2 up, 3 in
Oct  1 09:10:06 np0005464214 ceph-mon[74802]: log_channel(cluster) log [INF] : osd.1 [v2:192.168.122.100:6806/4245549462,v1:192.168.122.100:6807/4245549462] boot
Oct  1 09:10:06 np0005464214 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e12: 3 total, 2 up, 3 in
Oct  1 09:10:06 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]} v 0) v1
Oct  1 09:10:06 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='osd.2 [v2:192.168.122.100:6810/2247178069,v1:192.168.122.100:6811/2247178069]' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]: dispatch
Oct  1 09:10:06 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e12 create-or-move crush item name 'osd.2' initial_weight 0.0195 at location {host=compute-0,root=default}
Oct  1 09:10:06 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Oct  1 09:10:06 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct  1 09:10:06 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Oct  1 09:10:06 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct  1 09:10:06 np0005464214 ceph-mgr[75103]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct  1 09:10:06 np0005464214 ceph-osd[89484]: osd.1 12 state: booting -> active
Oct  1 09:10:06 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 12 pg[1.0( empty local-lis/les=0/0 n=0 ec=10/10 lis/c=0/0 les/c/f=0/0/0 sis=12) [1] r=0 lpr=12 pi=[10,12)/0 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:10:06 np0005464214 practical_edison[90982]: {
Oct  1 09:10:06 np0005464214 practical_edison[90982]:    "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982": {
Oct  1 09:10:06 np0005464214 practical_edison[90982]:        "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 09:10:06 np0005464214 practical_edison[90982]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  1 09:10:06 np0005464214 practical_edison[90982]:        "osd_id": 0,
Oct  1 09:10:06 np0005464214 practical_edison[90982]:        "osd_uuid": "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982",
Oct  1 09:10:06 np0005464214 practical_edison[90982]:        "type": "bluestore"
Oct  1 09:10:06 np0005464214 practical_edison[90982]:    },
Oct  1 09:10:06 np0005464214 practical_edison[90982]:    "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9": {
Oct  1 09:10:06 np0005464214 practical_edison[90982]:        "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 09:10:06 np0005464214 practical_edison[90982]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  1 09:10:06 np0005464214 practical_edison[90982]:        "osd_id": 2,
Oct  1 09:10:06 np0005464214 practical_edison[90982]:        "osd_uuid": "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9",
Oct  1 09:10:06 np0005464214 practical_edison[90982]:        "type": "bluestore"
Oct  1 09:10:06 np0005464214 practical_edison[90982]:    },
Oct  1 09:10:06 np0005464214 practical_edison[90982]:    "f5852bc7-e830-489a-b8a9-42dfbbe71426": {
Oct  1 09:10:06 np0005464214 practical_edison[90982]:        "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 09:10:06 np0005464214 practical_edison[90982]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  1 09:10:06 np0005464214 practical_edison[90982]:        "osd_id": 1,
Oct  1 09:10:06 np0005464214 practical_edison[90982]:        "osd_uuid": "f5852bc7-e830-489a-b8a9-42dfbbe71426",
Oct  1 09:10:06 np0005464214 practical_edison[90982]:        "type": "bluestore"
Oct  1 09:10:06 np0005464214 practical_edison[90982]:    }
Oct  1 09:10:06 np0005464214 practical_edison[90982]: }
Oct  1 09:10:06 np0005464214 systemd[1]: libpod-8c6b9364d5448768401dfb5f199634d0b0c114776ca0e57dc74e13415b5904ba.scope: Deactivated successfully.
Oct  1 09:10:06 np0005464214 podman[90780]: 2025-10-01 13:10:06.678821448 +0000 UTC m=+1.207412118 container died 8c6b9364d5448768401dfb5f199634d0b0c114776ca0e57dc74e13415b5904ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_edison, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct  1 09:10:06 np0005464214 systemd[1]: libpod-8c6b9364d5448768401dfb5f199634d0b0c114776ca0e57dc74e13415b5904ba.scope: Consumed 1.026s CPU time.
Oct  1 09:10:06 np0005464214 systemd[1]: var-lib-containers-storage-overlay-a23ea140ff85bd61a136608858fab236392475f0b38325e883e423e265081fe9-merged.mount: Deactivated successfully.
Oct  1 09:10:06 np0005464214 podman[90780]: 2025-10-01 13:10:06.738033037 +0000 UTC m=+1.266623687 container remove 8c6b9364d5448768401dfb5f199634d0b0c114776ca0e57dc74e13415b5904ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_edison, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Oct  1 09:10:06 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  1 09:10:06 np0005464214 systemd[1]: libpod-conmon-8c6b9364d5448768401dfb5f199634d0b0c114776ca0e57dc74e13415b5904ba.scope: Deactivated successfully.
Oct  1 09:10:06 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:10:06 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  1 09:10:06 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:10:06 np0005464214 ceph-osd[90500]: log_channel(cluster) log [DBG] : purged_snaps scrub starts
Oct  1 09:10:06 np0005464214 ceph-osd[90500]: log_channel(cluster) log [DBG] : purged_snaps scrub ok
Oct  1 09:10:06 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Oct  1 09:10:06 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2357689514' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct  1 09:10:07 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e12 do_prune osdmap full prune enabled
Oct  1 09:10:07 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='osd.2 [v2:192.168.122.100:6810/2247178069,v1:192.168.122.100:6811/2247178069]' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Oct  1 09:10:07 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2357689514' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct  1 09:10:07 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e13 e13: 3 total, 2 up, 3 in
Oct  1 09:10:07 np0005464214 ceph-osd[90500]: osd.2 0 done with init, starting boot process
Oct  1 09:10:07 np0005464214 pedantic_edison[91255]: pool 'vms' created
Oct  1 09:10:07 np0005464214 ceph-osd[90500]: osd.2 0 start_boot
Oct  1 09:10:07 np0005464214 ceph-osd[90500]: osd.2 0 maybe_override_options_for_qos osd_max_backfills set to 1
Oct  1 09:10:07 np0005464214 ceph-osd[90500]: osd.2 0 maybe_override_options_for_qos osd_recovery_max_active set to 0
Oct  1 09:10:07 np0005464214 ceph-osd[90500]: osd.2 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3
Oct  1 09:10:07 np0005464214 ceph-osd[90500]: osd.2 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10
Oct  1 09:10:07 np0005464214 ceph-osd[90500]: osd.2 0  bench count 12288000 bsize 4 KiB
Oct  1 09:10:07 np0005464214 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e13: 3 total, 2 up, 3 in
Oct  1 09:10:07 np0005464214 ceph-mgr[75103]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct  1 09:10:07 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Oct  1 09:10:07 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct  1 09:10:07 np0005464214 ceph-mon[74802]: OSD bench result of 8479.668600 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.1. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Oct  1 09:10:07 np0005464214 ceph-mon[74802]: from='osd.2 [v2:192.168.122.100:6810/2247178069,v1:192.168.122.100:6811/2247178069]' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished
Oct  1 09:10:07 np0005464214 ceph-mon[74802]: osd.1 [v2:192.168.122.100:6806/4245549462,v1:192.168.122.100:6807/4245549462] boot
Oct  1 09:10:07 np0005464214 ceph-mon[74802]: from='osd.2 [v2:192.168.122.100:6810/2247178069,v1:192.168.122.100:6811/2247178069]' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]: dispatch
Oct  1 09:10:07 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:10:07 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:10:07 np0005464214 ceph-mon[74802]: from='client.? 192.168.122.100:0/2357689514' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct  1 09:10:07 np0005464214 ceph-mgr[75103]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/2247178069; not ready for session (expect reconnect)
Oct  1 09:10:07 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Oct  1 09:10:07 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct  1 09:10:07 np0005464214 systemd[1]: libpod-56e9adabe4135b054da5feb2e70b1e0aa19102fe6e230bc4f58b2b59f3716223.scope: Deactivated successfully.
Oct  1 09:10:07 np0005464214 ceph-mgr[75103]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct  1 09:10:07 np0005464214 podman[91240]: 2025-10-01 13:10:07.408421127 +0000 UTC m=+1.251232404 container died 56e9adabe4135b054da5feb2e70b1e0aa19102fe6e230bc4f58b2b59f3716223 (image=quay.io/ceph/ceph:v18, name=pedantic_edison, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 09:10:07 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 13 pg[1.0( empty local-lis/les=12/13 n=0 ec=10/10 lis/c=0/0 les/c/f=0/0/0 sis=12) [1] r=0 lpr=12 pi=[10,12)/0 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:10:07 np0005464214 systemd[1]: var-lib-containers-storage-overlay-4705cdcec5a9f1a9b3949a67ae92f5a2472982adf67814bbf6539386925d217c-merged.mount: Deactivated successfully.
Oct  1 09:10:07 np0005464214 podman[91240]: 2025-10-01 13:10:07.518825311 +0000 UTC m=+1.361636578 container remove 56e9adabe4135b054da5feb2e70b1e0aa19102fe6e230bc4f58b2b59f3716223 (image=quay.io/ceph/ceph:v18, name=pedantic_edison, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct  1 09:10:07 np0005464214 ceph-mgr[75103]: [devicehealth INFO root] creating main.db for devicehealth
Oct  1 09:10:07 np0005464214 systemd[1]: libpod-conmon-56e9adabe4135b054da5feb2e70b1e0aa19102fe6e230bc4f58b2b59f3716223.scope: Deactivated successfully.
Oct  1 09:10:07 np0005464214 ceph-mgr[75103]: [devicehealth INFO root] Check health
Oct  1 09:10:07 np0005464214 ceph-mgr[75103]: [devicehealth ERROR root] Fail to parse JSON result from daemon osd.2 ()
Oct  1 09:10:07 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
Oct  1 09:10:07 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v44: 2 pgs: 1 unknown, 1 creating+peering; 0 B data, 853 MiB used, 39 GiB / 40 GiB avail
Oct  1 09:10:07 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
Oct  1 09:10:07 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0) v1
Oct  1 09:10:07 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Oct  1 09:10:07 np0005464214 podman[91598]: 2025-10-01 13:10:07.826646285 +0000 UTC m=+0.058949103 container exec dfadbb96d7d51fa7c2ed721145616ced603621ae12bae5125feaf16532ef7320 (image=quay.io/ceph/ceph:v18, name=ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-mon-compute-0, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 09:10:07 np0005464214 python3[91581]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid eb4b6ead-01d1-53b3-a52a-47dcc600555f -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create volumes  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  1 09:10:07 np0005464214 podman[91598]: 2025-10-01 13:10:07.93605822 +0000 UTC m=+0.168361018 container exec_died dfadbb96d7d51fa7c2ed721145616ced603621ae12bae5125feaf16532ef7320 (image=quay.io/ceph/ceph:v18, name=ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-mon-compute-0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct  1 09:10:07 np0005464214 podman[91618]: 2025-10-01 13:10:07.995250137 +0000 UTC m=+0.083652914 container create a62c8c9ed7cc38037709384ecb529282d77f89be524bc183210c9516e430e721 (image=quay.io/ceph/ceph:v18, name=blissful_engelbart, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Oct  1 09:10:08 np0005464214 systemd[1]: Started libpod-conmon-a62c8c9ed7cc38037709384ecb529282d77f89be524bc183210c9516e430e721.scope.
Oct  1 09:10:08 np0005464214 podman[91618]: 2025-10-01 13:10:07.95644893 +0000 UTC m=+0.044851707 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  1 09:10:08 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:10:08 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/483cd5114d3229f0d5103a7b8ed10145d96047c9fe97b447629a05b6dc545aff/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 09:10:08 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/483cd5114d3229f0d5103a7b8ed10145d96047c9fe97b447629a05b6dc545aff/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 09:10:08 np0005464214 podman[91618]: 2025-10-01 13:10:08.085133225 +0000 UTC m=+0.173536012 container init a62c8c9ed7cc38037709384ecb529282d77f89be524bc183210c9516e430e721 (image=quay.io/ceph/ceph:v18, name=blissful_engelbart, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Oct  1 09:10:08 np0005464214 podman[91618]: 2025-10-01 13:10:08.093455119 +0000 UTC m=+0.181857896 container start a62c8c9ed7cc38037709384ecb529282d77f89be524bc183210c9516e430e721 (image=quay.io/ceph/ceph:v18, name=blissful_engelbart, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 09:10:08 np0005464214 podman[91618]: 2025-10-01 13:10:08.10027215 +0000 UTC m=+0.188674927 container attach a62c8c9ed7cc38037709384ecb529282d77f89be524bc183210c9516e430e721 (image=quay.io/ceph/ceph:v18, name=blissful_engelbart, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Oct  1 09:10:08 np0005464214 ceph-mgr[75103]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/2247178069; not ready for session (expect reconnect)
Oct  1 09:10:08 np0005464214 ceph-mon[74802]: log_channel(cluster) log [WRN] : Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Oct  1 09:10:08 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e13 do_prune osdmap full prune enabled
Oct  1 09:10:08 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Oct  1 09:10:08 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct  1 09:10:08 np0005464214 ceph-mgr[75103]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct  1 09:10:08 np0005464214 ceph-mon[74802]: log_channel(cluster) log [DBG] : mgrmap e9: compute-0.puxjpb(active, since 80s)
Oct  1 09:10:08 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  1 09:10:08 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e14 e14: 3 total, 2 up, 3 in
Oct  1 09:10:08 np0005464214 ceph-mon[74802]: from='osd.2 [v2:192.168.122.100:6810/2247178069,v1:192.168.122.100:6811/2247178069]' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Oct  1 09:10:08 np0005464214 ceph-mon[74802]: from='client.? 192.168.122.100:0/2357689514' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct  1 09:10:08 np0005464214 ceph-mon[74802]: from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
Oct  1 09:10:08 np0005464214 ceph-mon[74802]: from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
Oct  1 09:10:08 np0005464214 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e14: 3 total, 2 up, 3 in
Oct  1 09:10:08 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Oct  1 09:10:08 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct  1 09:10:08 np0005464214 ceph-mgr[75103]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct  1 09:10:08 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:10:08 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  1 09:10:08 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:10:08 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Oct  1 09:10:08 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1742323114' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct  1 09:10:09 np0005464214 podman[91896]: 2025-10-01 13:10:09.100670467 +0000 UTC m=+0.055568999 container create 028f6d88b2503981de972514d00cc307e124c8fca16f33f8cd4a06132336a83e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_feynman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Oct  1 09:10:09 np0005464214 systemd[1]: Started libpod-conmon-028f6d88b2503981de972514d00cc307e124c8fca16f33f8cd4a06132336a83e.scope.
Oct  1 09:10:09 np0005464214 podman[91896]: 2025-10-01 13:10:09.069835073 +0000 UTC m=+0.024733705 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 09:10:09 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:10:09 np0005464214 podman[91896]: 2025-10-01 13:10:09.211042469 +0000 UTC m=+0.165941091 container init 028f6d88b2503981de972514d00cc307e124c8fca16f33f8cd4a06132336a83e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_feynman, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct  1 09:10:09 np0005464214 podman[91896]: 2025-10-01 13:10:09.218433236 +0000 UTC m=+0.173331768 container start 028f6d88b2503981de972514d00cc307e124c8fca16f33f8cd4a06132336a83e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_feynman, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct  1 09:10:09 np0005464214 pedantic_feynman[91912]: 167 167
Oct  1 09:10:09 np0005464214 systemd[1]: libpod-028f6d88b2503981de972514d00cc307e124c8fca16f33f8cd4a06132336a83e.scope: Deactivated successfully.
Oct  1 09:10:09 np0005464214 podman[91896]: 2025-10-01 13:10:09.236124411 +0000 UTC m=+0.191022973 container attach 028f6d88b2503981de972514d00cc307e124c8fca16f33f8cd4a06132336a83e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_feynman, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 09:10:09 np0005464214 podman[91896]: 2025-10-01 13:10:09.237071998 +0000 UTC m=+0.191970540 container died 028f6d88b2503981de972514d00cc307e124c8fca16f33f8cd4a06132336a83e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_feynman, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct  1 09:10:09 np0005464214 systemd[1]: var-lib-containers-storage-overlay-c5e7d07730e6bf319a0cf69b32c0a26116b06e6edea2a3e4f6427fb3d1413814-merged.mount: Deactivated successfully.
Oct  1 09:10:09 np0005464214 podman[91896]: 2025-10-01 13:10:09.340411883 +0000 UTC m=+0.295310425 container remove 028f6d88b2503981de972514d00cc307e124c8fca16f33f8cd4a06132336a83e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_feynman, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 09:10:09 np0005464214 systemd[1]: libpod-conmon-028f6d88b2503981de972514d00cc307e124c8fca16f33f8cd4a06132336a83e.scope: Deactivated successfully.
Oct  1 09:10:09 np0005464214 ceph-mgr[75103]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/2247178069; not ready for session (expect reconnect)
Oct  1 09:10:09 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Oct  1 09:10:09 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct  1 09:10:09 np0005464214 ceph-mgr[75103]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct  1 09:10:09 np0005464214 ceph-mon[74802]: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Oct  1 09:10:09 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:10:09 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:10:09 np0005464214 ceph-mon[74802]: from='client.? 192.168.122.100:0/1742323114' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct  1 09:10:09 np0005464214 podman[91938]: 2025-10-01 13:10:09.520348034 +0000 UTC m=+0.065110425 container create 72c7b2afc413a70675ddf22df630138cb360d701b9a1e6d5f48d76ccb19b455c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_matsumoto, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct  1 09:10:09 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e14 do_prune osdmap full prune enabled
Oct  1 09:10:09 np0005464214 podman[91938]: 2025-10-01 13:10:09.481971099 +0000 UTC m=+0.026733540 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 09:10:09 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1742323114' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct  1 09:10:09 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e15 e15: 3 total, 2 up, 3 in
Oct  1 09:10:09 np0005464214 blissful_engelbart[91661]: pool 'volumes' created
Oct  1 09:10:09 np0005464214 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e15: 3 total, 2 up, 3 in
Oct  1 09:10:09 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Oct  1 09:10:09 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct  1 09:10:09 np0005464214 ceph-mgr[75103]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct  1 09:10:09 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 15 pg[3.0( empty local-lis/les=0/0 n=0 ec=15/15 lis/c=0/0 les/c/f=0/0/0 sis=15) [1] r=0 lpr=15 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:10:09 np0005464214 systemd[1]: Started libpod-conmon-72c7b2afc413a70675ddf22df630138cb360d701b9a1e6d5f48d76ccb19b455c.scope.
Oct  1 09:10:09 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:10:09 np0005464214 systemd[1]: libpod-a62c8c9ed7cc38037709384ecb529282d77f89be524bc183210c9516e430e721.scope: Deactivated successfully.
Oct  1 09:10:09 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4caea5725ff33c612d22ee8df310bfc61017e6193c8634f98a8bb336f5f55529/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 09:10:09 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4caea5725ff33c612d22ee8df310bfc61017e6193c8634f98a8bb336f5f55529/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 09:10:09 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4caea5725ff33c612d22ee8df310bfc61017e6193c8634f98a8bb336f5f55529/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 09:10:09 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4caea5725ff33c612d22ee8df310bfc61017e6193c8634f98a8bb336f5f55529/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 09:10:09 np0005464214 podman[91938]: 2025-10-01 13:10:09.653752611 +0000 UTC m=+0.198514962 container init 72c7b2afc413a70675ddf22df630138cb360d701b9a1e6d5f48d76ccb19b455c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_matsumoto, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct  1 09:10:09 np0005464214 podman[91938]: 2025-10-01 13:10:09.663523876 +0000 UTC m=+0.208286247 container start 72c7b2afc413a70675ddf22df630138cb360d701b9a1e6d5f48d76ccb19b455c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_matsumoto, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 09:10:09 np0005464214 podman[91938]: 2025-10-01 13:10:09.667662591 +0000 UTC m=+0.212424932 container attach 72c7b2afc413a70675ddf22df630138cb360d701b9a1e6d5f48d76ccb19b455c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_matsumoto, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 09:10:09 np0005464214 podman[91958]: 2025-10-01 13:10:09.671753746 +0000 UTC m=+0.035582158 container died a62c8c9ed7cc38037709384ecb529282d77f89be524bc183210c9516e430e721 (image=quay.io/ceph/ceph:v18, name=blissful_engelbart, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct  1 09:10:09 np0005464214 systemd[1]: var-lib-containers-storage-overlay-483cd5114d3229f0d5103a7b8ed10145d96047c9fe97b447629a05b6dc545aff-merged.mount: Deactivated successfully.
Oct  1 09:10:09 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v47: 3 pgs: 2 unknown, 1 creating+peering; 0 B data, 853 MiB used, 39 GiB / 40 GiB avail
Oct  1 09:10:09 np0005464214 podman[91958]: 2025-10-01 13:10:09.759923016 +0000 UTC m=+0.123751428 container remove a62c8c9ed7cc38037709384ecb529282d77f89be524bc183210c9516e430e721 (image=quay.io/ceph/ceph:v18, name=blissful_engelbart, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 09:10:09 np0005464214 systemd[1]: libpod-conmon-a62c8c9ed7cc38037709384ecb529282d77f89be524bc183210c9516e430e721.scope: Deactivated successfully.
Oct  1 09:10:10 np0005464214 python3[92000]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid eb4b6ead-01d1-53b3-a52a-47dcc600555f -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create backups  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  1 09:10:10 np0005464214 podman[92001]: 2025-10-01 13:10:10.187431642 +0000 UTC m=+0.051481382 container create 057c3aeb4605bbdc2c6776416256b63336f661d0f8551c7e1b113fb041abc86d (image=quay.io/ceph/ceph:v18, name=serene_benz, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct  1 09:10:10 np0005464214 systemd[1]: Started libpod-conmon-057c3aeb4605bbdc2c6776416256b63336f661d0f8551c7e1b113fb041abc86d.scope.
Oct  1 09:10:10 np0005464214 podman[92001]: 2025-10-01 13:10:10.156325882 +0000 UTC m=+0.020375622 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  1 09:10:10 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:10:10 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8580ba48e516ebb60537667de8afc821dfc7165a1fb407c15e38fda2ac9a6043/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 09:10:10 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8580ba48e516ebb60537667de8afc821dfc7165a1fb407c15e38fda2ac9a6043/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 09:10:10 np0005464214 podman[92001]: 2025-10-01 13:10:10.279465632 +0000 UTC m=+0.143515362 container init 057c3aeb4605bbdc2c6776416256b63336f661d0f8551c7e1b113fb041abc86d (image=quay.io/ceph/ceph:v18, name=serene_benz, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 09:10:10 np0005464214 podman[92001]: 2025-10-01 13:10:10.289997936 +0000 UTC m=+0.154047686 container start 057c3aeb4605bbdc2c6776416256b63336f661d0f8551c7e1b113fb041abc86d (image=quay.io/ceph/ceph:v18, name=serene_benz, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 09:10:10 np0005464214 podman[92001]: 2025-10-01 13:10:10.299814361 +0000 UTC m=+0.163864101 container attach 057c3aeb4605bbdc2c6776416256b63336f661d0f8551c7e1b113fb041abc86d (image=quay.io/ceph/ceph:v18, name=serene_benz, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct  1 09:10:10 np0005464214 ceph-mgr[75103]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/2247178069; not ready for session (expect reconnect)
Oct  1 09:10:10 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Oct  1 09:10:10 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct  1 09:10:10 np0005464214 ceph-mgr[75103]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct  1 09:10:10 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e15 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:10:10 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e15 do_prune osdmap full prune enabled
Oct  1 09:10:10 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e16 e16: 3 total, 2 up, 3 in
Oct  1 09:10:10 np0005464214 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e16: 3 total, 2 up, 3 in
Oct  1 09:10:10 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Oct  1 09:10:10 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct  1 09:10:10 np0005464214 ceph-mgr[75103]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct  1 09:10:10 np0005464214 ceph-mon[74802]: from='client.? 192.168.122.100:0/1742323114' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct  1 09:10:10 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 16 pg[3.0( empty local-lis/les=15/16 n=0 ec=15/15 lis/c=0/0 les/c/f=0/0/0 sis=15) [1] r=0 lpr=15 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:10:10 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Oct  1 09:10:10 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2805017429' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct  1 09:10:10 np0005464214 ceph-osd[90500]: osd.2 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 31.384 iops: 8034.385 elapsed_sec: 0.373
Oct  1 09:10:10 np0005464214 ceph-osd[90500]: log_channel(cluster) log [WRN] : OSD bench result of 8034.385004 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.2. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Oct  1 09:10:10 np0005464214 ceph-osd[90500]: osd.2 0 waiting for initial osdmap
Oct  1 09:10:10 np0005464214 ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-osd-2[90496]: 2025-10-01T13:10:10.868+0000 7f7155fa5640 -1 osd.2 0 waiting for initial osdmap
Oct  1 09:10:10 np0005464214 ceph-osd[90500]: osd.2 16 crush map has features 288514051259236352, adjusting msgr requires for clients
Oct  1 09:10:10 np0005464214 ceph-osd[90500]: osd.2 16 crush map has features 288514051259236352 was 288232575208792577, adjusting msgr requires for mons
Oct  1 09:10:10 np0005464214 ceph-osd[90500]: osd.2 16 crush map has features 3314933000852226048, adjusting msgr requires for osds
Oct  1 09:10:10 np0005464214 ceph-osd[90500]: osd.2 16 check_osdmap_features require_osd_release unknown -> reef
Oct  1 09:10:10 np0005464214 ceph-osd[90500]: osd.2 16 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Oct  1 09:10:10 np0005464214 ceph-osd[90500]: osd.2 16 set_numa_affinity not setting numa affinity
Oct  1 09:10:10 np0005464214 ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-osd-2[90496]: 2025-10-01T13:10:10.895+0000 7f71515cd640 -1 osd.2 16 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Oct  1 09:10:10 np0005464214 ceph-osd[90500]: osd.2 16 _collect_metadata loop5:  no unique device id for loop5: fallback method has no model nor serial
Oct  1 09:10:11 np0005464214 exciting_matsumoto[91955]: [
Oct  1 09:10:11 np0005464214 exciting_matsumoto[91955]:    {
Oct  1 09:10:11 np0005464214 exciting_matsumoto[91955]:        "available": false,
Oct  1 09:10:11 np0005464214 exciting_matsumoto[91955]:        "ceph_device": false,
Oct  1 09:10:11 np0005464214 exciting_matsumoto[91955]:        "device_id": "QEMU_DVD-ROM_QM00001",
Oct  1 09:10:11 np0005464214 exciting_matsumoto[91955]:        "lsm_data": {},
Oct  1 09:10:11 np0005464214 exciting_matsumoto[91955]:        "lvs": [],
Oct  1 09:10:11 np0005464214 exciting_matsumoto[91955]:        "path": "/dev/sr0",
Oct  1 09:10:11 np0005464214 exciting_matsumoto[91955]:        "rejected_reasons": [
Oct  1 09:10:11 np0005464214 exciting_matsumoto[91955]:            "Has a FileSystem",
Oct  1 09:10:11 np0005464214 exciting_matsumoto[91955]:            "Insufficient space (<5GB)"
Oct  1 09:10:11 np0005464214 exciting_matsumoto[91955]:        ],
Oct  1 09:10:11 np0005464214 exciting_matsumoto[91955]:        "sys_api": {
Oct  1 09:10:11 np0005464214 exciting_matsumoto[91955]:            "actuators": null,
Oct  1 09:10:11 np0005464214 exciting_matsumoto[91955]:            "device_nodes": "sr0",
Oct  1 09:10:11 np0005464214 exciting_matsumoto[91955]:            "devname": "sr0",
Oct  1 09:10:11 np0005464214 exciting_matsumoto[91955]:            "human_readable_size": "482.00 KB",
Oct  1 09:10:11 np0005464214 exciting_matsumoto[91955]:            "id_bus": "ata",
Oct  1 09:10:11 np0005464214 exciting_matsumoto[91955]:            "model": "QEMU DVD-ROM",
Oct  1 09:10:11 np0005464214 exciting_matsumoto[91955]:            "nr_requests": "2",
Oct  1 09:10:11 np0005464214 exciting_matsumoto[91955]:            "parent": "/dev/sr0",
Oct  1 09:10:11 np0005464214 exciting_matsumoto[91955]:            "partitions": {},
Oct  1 09:10:11 np0005464214 exciting_matsumoto[91955]:            "path": "/dev/sr0",
Oct  1 09:10:11 np0005464214 exciting_matsumoto[91955]:            "removable": "1",
Oct  1 09:10:11 np0005464214 exciting_matsumoto[91955]:            "rev": "2.5+",
Oct  1 09:10:11 np0005464214 exciting_matsumoto[91955]:            "ro": "0",
Oct  1 09:10:11 np0005464214 exciting_matsumoto[91955]:            "rotational": "0",
Oct  1 09:10:11 np0005464214 exciting_matsumoto[91955]:            "sas_address": "",
Oct  1 09:10:11 np0005464214 exciting_matsumoto[91955]:            "sas_device_handle": "",
Oct  1 09:10:11 np0005464214 exciting_matsumoto[91955]:            "scheduler_mode": "mq-deadline",
Oct  1 09:10:11 np0005464214 exciting_matsumoto[91955]:            "sectors": 0,
Oct  1 09:10:11 np0005464214 exciting_matsumoto[91955]:            "sectorsize": "2048",
Oct  1 09:10:11 np0005464214 exciting_matsumoto[91955]:            "size": 493568.0,
Oct  1 09:10:11 np0005464214 exciting_matsumoto[91955]:            "support_discard": "2048",
Oct  1 09:10:11 np0005464214 exciting_matsumoto[91955]:            "type": "disk",
Oct  1 09:10:11 np0005464214 exciting_matsumoto[91955]:            "vendor": "QEMU"
Oct  1 09:10:11 np0005464214 exciting_matsumoto[91955]:        }
Oct  1 09:10:11 np0005464214 exciting_matsumoto[91955]:    }
Oct  1 09:10:11 np0005464214 exciting_matsumoto[91955]: ]
Oct  1 09:10:11 np0005464214 systemd[1]: libpod-72c7b2afc413a70675ddf22df630138cb360d701b9a1e6d5f48d76ccb19b455c.scope: Deactivated successfully.
Oct  1 09:10:11 np0005464214 systemd[1]: libpod-72c7b2afc413a70675ddf22df630138cb360d701b9a1e6d5f48d76ccb19b455c.scope: Consumed 1.347s CPU time.
Oct  1 09:10:11 np0005464214 podman[91938]: 2025-10-01 13:10:11.030317286 +0000 UTC m=+1.575079637 container died 72c7b2afc413a70675ddf22df630138cb360d701b9a1e6d5f48d76ccb19b455c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_matsumoto, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 09:10:11 np0005464214 systemd[1]: var-lib-containers-storage-overlay-4caea5725ff33c612d22ee8df310bfc61017e6193c8634f98a8bb336f5f55529-merged.mount: Deactivated successfully.
Oct  1 09:10:11 np0005464214 podman[91938]: 2025-10-01 13:10:11.094071973 +0000 UTC m=+1.638834324 container remove 72c7b2afc413a70675ddf22df630138cb360d701b9a1e6d5f48d76ccb19b455c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_matsumoto, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2)
Oct  1 09:10:11 np0005464214 systemd[1]: libpod-conmon-72c7b2afc413a70675ddf22df630138cb360d701b9a1e6d5f48d76ccb19b455c.scope: Deactivated successfully.
Oct  1 09:10:11 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  1 09:10:11 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:10:11 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  1 09:10:11 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:10:11 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} v 0) v1
Oct  1 09:10:11 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch
Oct  1 09:10:11 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} v 0) v1
Oct  1 09:10:11 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"}]: dispatch
Oct  1 09:10:11 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} v 0) v1
Oct  1 09:10:11 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"}]: dispatch
Oct  1 09:10:11 np0005464214 ceph-mgr[75103]: [cephadm INFO root] Adjusting osd_memory_target on compute-0 to 43640k
Oct  1 09:10:11 np0005464214 ceph-mgr[75103]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on compute-0 to 43640k
Oct  1 09:10:11 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0) v1
Oct  1 09:10:11 np0005464214 ceph-mgr[75103]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on compute-0 to 44687633: error parsing value: Value '44687633' is below minimum 939524096
Oct  1 09:10:11 np0005464214 ceph-mgr[75103]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on compute-0 to 44687633: error parsing value: Value '44687633' is below minimum 939524096
Oct  1 09:10:11 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  1 09:10:11 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  1 09:10:11 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  1 09:10:11 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  1 09:10:11 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  1 09:10:11 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:10:11 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev b1923273-3f9b-4fc1-89ce-482262f6440e does not exist
Oct  1 09:10:11 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev 89b5ae69-71f3-4f81-ab44-72cf218a25cb does not exist
Oct  1 09:10:11 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev f9371bef-2542-4079-9f16-3b23f3469d42 does not exist
Oct  1 09:10:11 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  1 09:10:11 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  1 09:10:11 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  1 09:10:11 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  1 09:10:11 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  1 09:10:11 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  1 09:10:11 np0005464214 ceph-mgr[75103]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/2247178069; not ready for session (expect reconnect)
Oct  1 09:10:11 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Oct  1 09:10:11 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct  1 09:10:11 np0005464214 ceph-mgr[75103]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct  1 09:10:11 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e16 do_prune osdmap full prune enabled
Oct  1 09:10:11 np0005464214 ceph-mon[74802]: from='client.? 192.168.122.100:0/2805017429' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct  1 09:10:11 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:10:11 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:10:11 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch
Oct  1 09:10:11 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"}]: dispatch
Oct  1 09:10:11 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"}]: dispatch
Oct  1 09:10:11 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  1 09:10:11 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:10:11 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  1 09:10:11 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2805017429' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct  1 09:10:11 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e17 e17: 3 total, 3 up, 3 in
Oct  1 09:10:11 np0005464214 serene_benz[92017]: pool 'backups' created
Oct  1 09:10:11 np0005464214 ceph-mon[74802]: log_channel(cluster) log [INF] : osd.2 [v2:192.168.122.100:6810/2247178069,v1:192.168.122.100:6811/2247178069] boot
Oct  1 09:10:11 np0005464214 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e17: 3 total, 3 up, 3 in
Oct  1 09:10:11 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Oct  1 09:10:11 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct  1 09:10:11 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 17 pg[4.0( empty local-lis/les=0/0 n=0 ec=17/17 lis/c=0/0 les/c/f=0/0/0 sis=17) [0] r=0 lpr=17 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:10:11 np0005464214 ceph-osd[90500]: osd.2 17 state: booting -> active
Oct  1 09:10:11 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 17 pg[2.0( empty local-lis/les=0/0 n=0 ec=13/13 lis/c=0/0 les/c/f=0/0/0 sis=17) [2] r=0 lpr=17 pi=[13,17)/0 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:10:11 np0005464214 systemd[1]: libpod-057c3aeb4605bbdc2c6776416256b63336f661d0f8551c7e1b113fb041abc86d.scope: Deactivated successfully.
Oct  1 09:10:11 np0005464214 podman[92001]: 2025-10-01 13:10:11.658753043 +0000 UTC m=+1.522802773 container died 057c3aeb4605bbdc2c6776416256b63336f661d0f8551c7e1b113fb041abc86d (image=quay.io/ceph/ceph:v18, name=serene_benz, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default)
Oct  1 09:10:11 np0005464214 systemd[1]: var-lib-containers-storage-overlay-8580ba48e516ebb60537667de8afc821dfc7165a1fb407c15e38fda2ac9a6043-merged.mount: Deactivated successfully.
Oct  1 09:10:11 np0005464214 podman[92001]: 2025-10-01 13:10:11.704835003 +0000 UTC m=+1.568884723 container remove 057c3aeb4605bbdc2c6776416256b63336f661d0f8551c7e1b113fb041abc86d (image=quay.io/ceph/ceph:v18, name=serene_benz, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct  1 09:10:11 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v50: 4 pgs: 3 unknown, 1 creating+peering; 0 B data, 853 MiB used, 39 GiB / 40 GiB avail
Oct  1 09:10:11 np0005464214 systemd[1]: libpod-conmon-057c3aeb4605bbdc2c6776416256b63336f661d0f8551c7e1b113fb041abc86d.scope: Deactivated successfully.
Oct  1 09:10:11 np0005464214 podman[93832]: 2025-10-01 13:10:11.877793789 +0000 UTC m=+0.070102675 container create c9d8cde048c68be4c5ee7c77899d4eb09bc5c138ed7a36d4c7ba1354dfcaa58e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_shaw, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 09:10:11 np0005464214 systemd[1]: Started libpod-conmon-c9d8cde048c68be4c5ee7c77899d4eb09bc5c138ed7a36d4c7ba1354dfcaa58e.scope.
Oct  1 09:10:11 np0005464214 podman[93832]: 2025-10-01 13:10:11.845802073 +0000 UTC m=+0.038111009 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 09:10:11 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:10:11 np0005464214 podman[93832]: 2025-10-01 13:10:11.961885735 +0000 UTC m=+0.154194621 container init c9d8cde048c68be4c5ee7c77899d4eb09bc5c138ed7a36d4c7ba1354dfcaa58e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_shaw, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct  1 09:10:11 np0005464214 podman[93832]: 2025-10-01 13:10:11.974027415 +0000 UTC m=+0.166336301 container start c9d8cde048c68be4c5ee7c77899d4eb09bc5c138ed7a36d4c7ba1354dfcaa58e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_shaw, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Oct  1 09:10:11 np0005464214 podman[93832]: 2025-10-01 13:10:11.978023427 +0000 UTC m=+0.170332353 container attach c9d8cde048c68be4c5ee7c77899d4eb09bc5c138ed7a36d4c7ba1354dfcaa58e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_shaw, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct  1 09:10:11 np0005464214 musing_shaw[93873]: 167 167
Oct  1 09:10:11 np0005464214 systemd[1]: libpod-c9d8cde048c68be4c5ee7c77899d4eb09bc5c138ed7a36d4c7ba1354dfcaa58e.scope: Deactivated successfully.
Oct  1 09:10:11 np0005464214 podman[93832]: 2025-10-01 13:10:11.9824296 +0000 UTC m=+0.174738496 container died c9d8cde048c68be4c5ee7c77899d4eb09bc5c138ed7a36d4c7ba1354dfcaa58e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_shaw, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default)
Oct  1 09:10:12 np0005464214 systemd[1]: var-lib-containers-storage-overlay-0f948d4136eb64b500db2d7d33024bf16659353a6a70f835f5b9ad668e580b65-merged.mount: Deactivated successfully.
Oct  1 09:10:12 np0005464214 podman[93832]: 2025-10-01 13:10:12.028394278 +0000 UTC m=+0.220703164 container remove c9d8cde048c68be4c5ee7c77899d4eb09bc5c138ed7a36d4c7ba1354dfcaa58e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_shaw, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 09:10:12 np0005464214 systemd[1]: libpod-conmon-c9d8cde048c68be4c5ee7c77899d4eb09bc5c138ed7a36d4c7ba1354dfcaa58e.scope: Deactivated successfully.
Oct  1 09:10:12 np0005464214 python3[93875]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid eb4b6ead-01d1-53b3-a52a-47dcc600555f -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create images  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  1 09:10:12 np0005464214 podman[93898]: 2025-10-01 13:10:12.18013046 +0000 UTC m=+0.041072843 container create cf0f36ad9b0d130fa096691b826fc78ac549d5bed84e8d4ba8640a68f82df667 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_nash, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct  1 09:10:12 np0005464214 podman[93899]: 2025-10-01 13:10:12.18479691 +0000 UTC m=+0.044699793 container create 6fd61786b7ce54357614287aaf664252c57887f4afd2d5a93925c4daaf1ecf45 (image=quay.io/ceph/ceph:v18, name=quirky_panini, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 09:10:12 np0005464214 systemd[1]: Started libpod-conmon-6fd61786b7ce54357614287aaf664252c57887f4afd2d5a93925c4daaf1ecf45.scope.
Oct  1 09:10:12 np0005464214 systemd[1]: Started libpod-conmon-cf0f36ad9b0d130fa096691b826fc78ac549d5bed84e8d4ba8640a68f82df667.scope.
Oct  1 09:10:12 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:10:12 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d5a47ef7c19deb3dd36d2fbd387a35ce1794264830eeab44d03f790e5a13f301/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 09:10:12 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d5a47ef7c19deb3dd36d2fbd387a35ce1794264830eeab44d03f790e5a13f301/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 09:10:12 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:10:12 np0005464214 podman[93898]: 2025-10-01 13:10:12.161941459 +0000 UTC m=+0.022883872 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 09:10:12 np0005464214 podman[93899]: 2025-10-01 13:10:12.161709843 +0000 UTC m=+0.021612746 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  1 09:10:12 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bac3d3f71811e5368fe0f6b10c6cfa8b4261069e2f38147eba7733b7942d5f32/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 09:10:12 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bac3d3f71811e5368fe0f6b10c6cfa8b4261069e2f38147eba7733b7942d5f32/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 09:10:12 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bac3d3f71811e5368fe0f6b10c6cfa8b4261069e2f38147eba7733b7942d5f32/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 09:10:12 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bac3d3f71811e5368fe0f6b10c6cfa8b4261069e2f38147eba7733b7942d5f32/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 09:10:12 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bac3d3f71811e5368fe0f6b10c6cfa8b4261069e2f38147eba7733b7942d5f32/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  1 09:10:12 np0005464214 podman[93899]: 2025-10-01 13:10:12.276051656 +0000 UTC m=+0.135954589 container init 6fd61786b7ce54357614287aaf664252c57887f4afd2d5a93925c4daaf1ecf45 (image=quay.io/ceph/ceph:v18, name=quirky_panini, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct  1 09:10:12 np0005464214 podman[93898]: 2025-10-01 13:10:12.280262425 +0000 UTC m=+0.141204818 container init cf0f36ad9b0d130fa096691b826fc78ac549d5bed84e8d4ba8640a68f82df667 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_nash, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 09:10:12 np0005464214 podman[93899]: 2025-10-01 13:10:12.295845341 +0000 UTC m=+0.155748224 container start 6fd61786b7ce54357614287aaf664252c57887f4afd2d5a93925c4daaf1ecf45 (image=quay.io/ceph/ceph:v18, name=quirky_panini, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS)
Oct  1 09:10:12 np0005464214 podman[93899]: 2025-10-01 13:10:12.300494172 +0000 UTC m=+0.160397075 container attach 6fd61786b7ce54357614287aaf664252c57887f4afd2d5a93925c4daaf1ecf45 (image=quay.io/ceph/ceph:v18, name=quirky_panini, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct  1 09:10:12 np0005464214 podman[93898]: 2025-10-01 13:10:12.303629839 +0000 UTC m=+0.164572262 container start cf0f36ad9b0d130fa096691b826fc78ac549d5bed84e8d4ba8640a68f82df667 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_nash, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Oct  1 09:10:12 np0005464214 podman[93898]: 2025-10-01 13:10:12.309159714 +0000 UTC m=+0.170102117 container attach cf0f36ad9b0d130fa096691b826fc78ac549d5bed84e8d4ba8640a68f82df667 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_nash, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 09:10:12 np0005464214 ceph-mon[74802]: OSD bench result of 8034.385004 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.2. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Oct  1 09:10:12 np0005464214 ceph-mon[74802]: Adjusting osd_memory_target on compute-0 to 43640k
Oct  1 09:10:12 np0005464214 ceph-mon[74802]: Unable to set osd_memory_target on compute-0 to 44687633: error parsing value: Value '44687633' is below minimum 939524096
Oct  1 09:10:12 np0005464214 ceph-mon[74802]: from='client.? 192.168.122.100:0/2805017429' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct  1 09:10:12 np0005464214 ceph-mon[74802]: osd.2 [v2:192.168.122.100:6810/2247178069,v1:192.168.122.100:6811/2247178069] boot
Oct  1 09:10:12 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e17 do_prune osdmap full prune enabled
Oct  1 09:10:12 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e18 e18: 3 total, 3 up, 3 in
Oct  1 09:10:12 np0005464214 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e18: 3 total, 3 up, 3 in
Oct  1 09:10:12 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 18 pg[4.0( empty local-lis/les=17/18 n=0 ec=17/17 lis/c=0/0 les/c/f=0/0/0 sis=17) [0] r=0 lpr=17 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:10:12 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 18 pg[2.0( empty local-lis/les=17/18 n=0 ec=13/13 lis/c=0/0 les/c/f=0/0/0 sis=17) [2] r=0 lpr=17 pi=[13,17)/0 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:10:12 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Oct  1 09:10:12 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/797650243' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct  1 09:10:13 np0005464214 exciting_nash[93931]: --> passed data devices: 0 physical, 3 LVM
Oct  1 09:10:13 np0005464214 exciting_nash[93931]: --> relative data size: 1.0
Oct  1 09:10:13 np0005464214 exciting_nash[93931]: --> All data devices are unavailable
Oct  1 09:10:13 np0005464214 systemd[1]: libpod-cf0f36ad9b0d130fa096691b826fc78ac549d5bed84e8d4ba8640a68f82df667.scope: Deactivated successfully.
Oct  1 09:10:13 np0005464214 podman[93898]: 2025-10-01 13:10:13.363682997 +0000 UTC m=+1.224625380 container died cf0f36ad9b0d130fa096691b826fc78ac549d5bed84e8d4ba8640a68f82df667 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_nash, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct  1 09:10:13 np0005464214 systemd[1]: var-lib-containers-storage-overlay-bac3d3f71811e5368fe0f6b10c6cfa8b4261069e2f38147eba7733b7942d5f32-merged.mount: Deactivated successfully.
Oct  1 09:10:13 np0005464214 podman[93898]: 2025-10-01 13:10:13.422592738 +0000 UTC m=+1.283535141 container remove cf0f36ad9b0d130fa096691b826fc78ac549d5bed84e8d4ba8640a68f82df667 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_nash, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Oct  1 09:10:13 np0005464214 systemd[1]: libpod-conmon-cf0f36ad9b0d130fa096691b826fc78ac549d5bed84e8d4ba8640a68f82df667.scope: Deactivated successfully.
Oct  1 09:10:13 np0005464214 ceph-mon[74802]: from='client.? 192.168.122.100:0/797650243' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct  1 09:10:13 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e18 do_prune osdmap full prune enabled
Oct  1 09:10:13 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/797650243' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct  1 09:10:13 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e19 e19: 3 total, 3 up, 3 in
Oct  1 09:10:13 np0005464214 quirky_panini[93929]: pool 'images' created
Oct  1 09:10:13 np0005464214 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e19: 3 total, 3 up, 3 in
Oct  1 09:10:13 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 19 pg[5.0( empty local-lis/les=0/0 n=0 ec=19/19 lis/c=0/0 les/c/f=0/0/0 sis=19) [2] r=0 lpr=19 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:10:13 np0005464214 systemd[1]: libpod-6fd61786b7ce54357614287aaf664252c57887f4afd2d5a93925c4daaf1ecf45.scope: Deactivated successfully.
Oct  1 09:10:13 np0005464214 podman[93899]: 2025-10-01 13:10:13.674655269 +0000 UTC m=+1.534558192 container died 6fd61786b7ce54357614287aaf664252c57887f4afd2d5a93925c4daaf1ecf45 (image=quay.io/ceph/ceph:v18, name=quirky_panini, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Oct  1 09:10:13 np0005464214 systemd[1]: var-lib-containers-storage-overlay-d5a47ef7c19deb3dd36d2fbd387a35ce1794264830eeab44d03f790e5a13f301-merged.mount: Deactivated successfully.
Oct  1 09:10:13 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v53: 5 pgs: 1 unknown, 2 creating+peering, 2 active+clean; 449 KiB data, 479 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:10:13 np0005464214 podman[93899]: 2025-10-01 13:10:13.726656976 +0000 UTC m=+1.586559859 container remove 6fd61786b7ce54357614287aaf664252c57887f4afd2d5a93925c4daaf1ecf45 (image=quay.io/ceph/ceph:v18, name=quirky_panini, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS)
Oct  1 09:10:13 np0005464214 systemd[1]: libpod-conmon-6fd61786b7ce54357614287aaf664252c57887f4afd2d5a93925c4daaf1ecf45.scope: Deactivated successfully.
Oct  1 09:10:14 np0005464214 python3[94147]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid eb4b6ead-01d1-53b3-a52a-47dcc600555f -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create cephfs.cephfs.meta  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  1 09:10:14 np0005464214 podman[94175]: 2025-10-01 13:10:14.054881371 +0000 UTC m=+0.036513614 container create d850af3dd8089d5fef474b9e28bbcdd5dc2d25ab5b3815f023cb9a119e3f624f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_northcutt, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct  1 09:10:14 np0005464214 systemd[1]: Started libpod-conmon-d850af3dd8089d5fef474b9e28bbcdd5dc2d25ab5b3815f023cb9a119e3f624f.scope.
Oct  1 09:10:14 np0005464214 podman[94189]: 2025-10-01 13:10:14.097382762 +0000 UTC m=+0.043600203 container create 97765aa51b06185119fc92ca94aabc8139edfd8fbf4b5c165f59ab36597947d4 (image=quay.io/ceph/ceph:v18, name=fervent_napier, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 09:10:14 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:10:14 np0005464214 systemd[1]: Started libpod-conmon-97765aa51b06185119fc92ca94aabc8139edfd8fbf4b5c165f59ab36597947d4.scope.
Oct  1 09:10:14 np0005464214 podman[94175]: 2025-10-01 13:10:14.038646837 +0000 UTC m=+0.020279090 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 09:10:14 np0005464214 podman[94175]: 2025-10-01 13:10:14.136075406 +0000 UTC m=+0.117707659 container init d850af3dd8089d5fef474b9e28bbcdd5dc2d25ab5b3815f023cb9a119e3f624f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_northcutt, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 09:10:14 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:10:14 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/16043874e8b148866b76e79a2771d026406cf5d9aa0a3d1405616265a4f945db/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 09:10:14 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/16043874e8b148866b76e79a2771d026406cf5d9aa0a3d1405616265a4f945db/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 09:10:14 np0005464214 podman[94175]: 2025-10-01 13:10:14.144039179 +0000 UTC m=+0.125671402 container start d850af3dd8089d5fef474b9e28bbcdd5dc2d25ab5b3815f023cb9a119e3f624f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_northcutt, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 09:10:14 np0005464214 podman[94175]: 2025-10-01 13:10:14.148053982 +0000 UTC m=+0.129686215 container attach d850af3dd8089d5fef474b9e28bbcdd5dc2d25ab5b3815f023cb9a119e3f624f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_northcutt, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True)
Oct  1 09:10:14 np0005464214 reverent_northcutt[94204]: 167 167
Oct  1 09:10:14 np0005464214 systemd[1]: libpod-d850af3dd8089d5fef474b9e28bbcdd5dc2d25ab5b3815f023cb9a119e3f624f.scope: Deactivated successfully.
Oct  1 09:10:14 np0005464214 podman[94189]: 2025-10-01 13:10:14.151274392 +0000 UTC m=+0.097491853 container init 97765aa51b06185119fc92ca94aabc8139edfd8fbf4b5c165f59ab36597947d4 (image=quay.io/ceph/ceph:v18, name=fervent_napier, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 09:10:14 np0005464214 podman[94175]: 2025-10-01 13:10:14.151836867 +0000 UTC m=+0.133469100 container died d850af3dd8089d5fef474b9e28bbcdd5dc2d25ab5b3815f023cb9a119e3f624f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_northcutt, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 09:10:14 np0005464214 podman[94189]: 2025-10-01 13:10:14.156861969 +0000 UTC m=+0.103079400 container start 97765aa51b06185119fc92ca94aabc8139edfd8fbf4b5c165f59ab36597947d4 (image=quay.io/ceph/ceph:v18, name=fervent_napier, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Oct  1 09:10:14 np0005464214 podman[94189]: 2025-10-01 13:10:14.162669041 +0000 UTC m=+0.108886472 container attach 97765aa51b06185119fc92ca94aabc8139edfd8fbf4b5c165f59ab36597947d4 (image=quay.io/ceph/ceph:v18, name=fervent_napier, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Oct  1 09:10:14 np0005464214 podman[94189]: 2025-10-01 13:10:14.080315724 +0000 UTC m=+0.026533185 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  1 09:10:14 np0005464214 systemd[1]: var-lib-containers-storage-overlay-947e2e6d9da180b650ab9b5acb7e4887d413bce61b5bacd978b1deccbbe894a4-merged.mount: Deactivated successfully.
Oct  1 09:10:14 np0005464214 podman[94175]: 2025-10-01 13:10:14.191039525 +0000 UTC m=+0.172671758 container remove d850af3dd8089d5fef474b9e28bbcdd5dc2d25ab5b3815f023cb9a119e3f624f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_northcutt, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Oct  1 09:10:14 np0005464214 systemd[1]: libpod-conmon-d850af3dd8089d5fef474b9e28bbcdd5dc2d25ab5b3815f023cb9a119e3f624f.scope: Deactivated successfully.
Oct  1 09:10:14 np0005464214 podman[94234]: 2025-10-01 13:10:14.342959312 +0000 UTC m=+0.055458215 container create df09d676e2ac60a569739ca834c274745e5606dead18ca98fd29701532ef457e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_banzai, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 09:10:14 np0005464214 systemd[1]: Started libpod-conmon-df09d676e2ac60a569739ca834c274745e5606dead18ca98fd29701532ef457e.scope.
Oct  1 09:10:14 np0005464214 podman[94234]: 2025-10-01 13:10:14.308891597 +0000 UTC m=+0.021390550 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 09:10:14 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:10:14 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eec945a91a4a2f1796a9b2b1d2a7d80c2eac515c2cc9239287d14241bf2ed54a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 09:10:14 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eec945a91a4a2f1796a9b2b1d2a7d80c2eac515c2cc9239287d14241bf2ed54a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 09:10:14 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eec945a91a4a2f1796a9b2b1d2a7d80c2eac515c2cc9239287d14241bf2ed54a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 09:10:14 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eec945a91a4a2f1796a9b2b1d2a7d80c2eac515c2cc9239287d14241bf2ed54a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 09:10:14 np0005464214 podman[94234]: 2025-10-01 13:10:14.424001732 +0000 UTC m=+0.136500655 container init df09d676e2ac60a569739ca834c274745e5606dead18ca98fd29701532ef457e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_banzai, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Oct  1 09:10:14 np0005464214 podman[94234]: 2025-10-01 13:10:14.429713652 +0000 UTC m=+0.142212565 container start df09d676e2ac60a569739ca834c274745e5606dead18ca98fd29701532ef457e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_banzai, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 09:10:14 np0005464214 podman[94234]: 2025-10-01 13:10:14.433075347 +0000 UTC m=+0.145574270 container attach df09d676e2ac60a569739ca834c274745e5606dead18ca98fd29701532ef457e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_banzai, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 09:10:14 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Oct  1 09:10:14 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1651900628' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct  1 09:10:14 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e19 do_prune osdmap full prune enabled
Oct  1 09:10:14 np0005464214 ceph-mon[74802]: log_channel(cluster) log [WRN] : Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Oct  1 09:10:14 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1651900628' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct  1 09:10:14 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e20 e20: 3 total, 3 up, 3 in
Oct  1 09:10:14 np0005464214 fervent_napier[94209]: pool 'cephfs.cephfs.meta' created
Oct  1 09:10:14 np0005464214 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e20: 3 total, 3 up, 3 in
Oct  1 09:10:14 np0005464214 systemd[1]: libpod-97765aa51b06185119fc92ca94aabc8139edfd8fbf4b5c165f59ab36597947d4.scope: Deactivated successfully.
Oct  1 09:10:14 np0005464214 podman[94189]: 2025-10-01 13:10:14.685670023 +0000 UTC m=+0.631887454 container died 97765aa51b06185119fc92ca94aabc8139edfd8fbf4b5c165f59ab36597947d4 (image=quay.io/ceph/ceph:v18, name=fervent_napier, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Oct  1 09:10:14 np0005464214 ceph-mon[74802]: from='client.? 192.168.122.100:0/797650243' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct  1 09:10:14 np0005464214 ceph-mon[74802]: from='client.? 192.168.122.100:0/1651900628' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct  1 09:10:14 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 20 pg[5.0( empty local-lis/les=19/20 n=0 ec=19/19 lis/c=0/0 les/c/f=0/0/0 sis=19) [2] r=0 lpr=19 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:10:14 np0005464214 systemd[1]: var-lib-containers-storage-overlay-16043874e8b148866b76e79a2771d026406cf5d9aa0a3d1405616265a4f945db-merged.mount: Deactivated successfully.
Oct  1 09:10:14 np0005464214 podman[94189]: 2025-10-01 13:10:14.726232809 +0000 UTC m=+0.672450240 container remove 97765aa51b06185119fc92ca94aabc8139edfd8fbf4b5c165f59ab36597947d4 (image=quay.io/ceph/ceph:v18, name=fervent_napier, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct  1 09:10:14 np0005464214 systemd[1]: libpod-conmon-97765aa51b06185119fc92ca94aabc8139edfd8fbf4b5c165f59ab36597947d4.scope: Deactivated successfully.
Oct  1 09:10:14 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 20 pg[6.0( empty local-lis/les=0/0 n=0 ec=20/20 lis/c=0/0 les/c/f=0/0/0 sis=20) [0] r=0 lpr=20 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:10:15 np0005464214 python3[94315]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid eb4b6ead-01d1-53b3-a52a-47dcc600555f -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create cephfs.cephfs.data  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  1 09:10:15 np0005464214 podman[94316]: 2025-10-01 13:10:15.083659603 +0000 UTC m=+0.045811054 container create a128323eb42790b6a7dd8243c8b6716ab15655195043608f5b0a64519f006296 (image=quay.io/ceph/ceph:v18, name=blissful_johnson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct  1 09:10:15 np0005464214 systemd[1]: Started libpod-conmon-a128323eb42790b6a7dd8243c8b6716ab15655195043608f5b0a64519f006296.scope.
Oct  1 09:10:15 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:10:15 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fae9acd5d3501a1d8943a5c0d522a00416aefdd2f607303d52e390a2da93cd02/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 09:10:15 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fae9acd5d3501a1d8943a5c0d522a00416aefdd2f607303d52e390a2da93cd02/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 09:10:15 np0005464214 podman[94316]: 2025-10-01 13:10:15.064046494 +0000 UTC m=+0.026197995 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  1 09:10:15 np0005464214 ecstatic_banzai[94250]: {
Oct  1 09:10:15 np0005464214 ecstatic_banzai[94250]:    "0": [
Oct  1 09:10:15 np0005464214 ecstatic_banzai[94250]:        {
Oct  1 09:10:15 np0005464214 ecstatic_banzai[94250]:            "devices": [
Oct  1 09:10:15 np0005464214 ecstatic_banzai[94250]:                "/dev/loop3"
Oct  1 09:10:15 np0005464214 ecstatic_banzai[94250]:            ],
Oct  1 09:10:15 np0005464214 ecstatic_banzai[94250]:            "lv_name": "ceph_lv0",
Oct  1 09:10:15 np0005464214 ecstatic_banzai[94250]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  1 09:10:15 np0005464214 ecstatic_banzai[94250]:            "lv_size": "21470642176",
Oct  1 09:10:15 np0005464214 ecstatic_banzai[94250]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 09:10:15 np0005464214 ecstatic_banzai[94250]:            "lv_uuid": "wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS",
Oct  1 09:10:15 np0005464214 ecstatic_banzai[94250]:            "name": "ceph_lv0",
Oct  1 09:10:15 np0005464214 ecstatic_banzai[94250]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  1 09:10:15 np0005464214 ecstatic_banzai[94250]:            "tags": {
Oct  1 09:10:15 np0005464214 ecstatic_banzai[94250]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  1 09:10:15 np0005464214 ecstatic_banzai[94250]:                "ceph.block_uuid": "wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS",
Oct  1 09:10:15 np0005464214 ecstatic_banzai[94250]:                "ceph.cephx_lockbox_secret": "",
Oct  1 09:10:15 np0005464214 ecstatic_banzai[94250]:                "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 09:10:15 np0005464214 ecstatic_banzai[94250]:                "ceph.cluster_name": "ceph",
Oct  1 09:10:15 np0005464214 ecstatic_banzai[94250]:                "ceph.crush_device_class": "",
Oct  1 09:10:15 np0005464214 ecstatic_banzai[94250]:                "ceph.encrypted": "0",
Oct  1 09:10:15 np0005464214 ecstatic_banzai[94250]:                "ceph.osd_fsid": "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982",
Oct  1 09:10:15 np0005464214 ecstatic_banzai[94250]:                "ceph.osd_id": "0",
Oct  1 09:10:15 np0005464214 ecstatic_banzai[94250]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 09:10:15 np0005464214 ecstatic_banzai[94250]:                "ceph.type": "block",
Oct  1 09:10:15 np0005464214 ecstatic_banzai[94250]:                "ceph.vdo": "0"
Oct  1 09:10:15 np0005464214 ecstatic_banzai[94250]:            },
Oct  1 09:10:15 np0005464214 ecstatic_banzai[94250]:            "type": "block",
Oct  1 09:10:15 np0005464214 ecstatic_banzai[94250]:            "vg_name": "ceph_vg0"
Oct  1 09:10:15 np0005464214 ecstatic_banzai[94250]:        }
Oct  1 09:10:15 np0005464214 ecstatic_banzai[94250]:    ],
Oct  1 09:10:15 np0005464214 ecstatic_banzai[94250]:    "1": [
Oct  1 09:10:15 np0005464214 ecstatic_banzai[94250]:        {
Oct  1 09:10:15 np0005464214 ecstatic_banzai[94250]:            "devices": [
Oct  1 09:10:15 np0005464214 ecstatic_banzai[94250]:                "/dev/loop4"
Oct  1 09:10:15 np0005464214 ecstatic_banzai[94250]:            ],
Oct  1 09:10:15 np0005464214 ecstatic_banzai[94250]:            "lv_name": "ceph_lv1",
Oct  1 09:10:15 np0005464214 ecstatic_banzai[94250]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  1 09:10:15 np0005464214 ecstatic_banzai[94250]:            "lv_size": "21470642176",
Oct  1 09:10:15 np0005464214 ecstatic_banzai[94250]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f5852bc7-e830-489a-b8a9-42dfbbe71426,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 09:10:15 np0005464214 ecstatic_banzai[94250]:            "lv_uuid": "BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY",
Oct  1 09:10:15 np0005464214 ecstatic_banzai[94250]:            "name": "ceph_lv1",
Oct  1 09:10:15 np0005464214 ecstatic_banzai[94250]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  1 09:10:15 np0005464214 ecstatic_banzai[94250]:            "tags": {
Oct  1 09:10:15 np0005464214 ecstatic_banzai[94250]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  1 09:10:15 np0005464214 ecstatic_banzai[94250]:                "ceph.block_uuid": "BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY",
Oct  1 09:10:15 np0005464214 ecstatic_banzai[94250]:                "ceph.cephx_lockbox_secret": "",
Oct  1 09:10:15 np0005464214 ecstatic_banzai[94250]:                "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 09:10:15 np0005464214 ecstatic_banzai[94250]:                "ceph.cluster_name": "ceph",
Oct  1 09:10:15 np0005464214 ecstatic_banzai[94250]:                "ceph.crush_device_class": "",
Oct  1 09:10:15 np0005464214 ecstatic_banzai[94250]:                "ceph.encrypted": "0",
Oct  1 09:10:15 np0005464214 ecstatic_banzai[94250]:                "ceph.osd_fsid": "f5852bc7-e830-489a-b8a9-42dfbbe71426",
Oct  1 09:10:15 np0005464214 ecstatic_banzai[94250]:                "ceph.osd_id": "1",
Oct  1 09:10:15 np0005464214 ecstatic_banzai[94250]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 09:10:15 np0005464214 ecstatic_banzai[94250]:                "ceph.type": "block",
Oct  1 09:10:15 np0005464214 ecstatic_banzai[94250]:                "ceph.vdo": "0"
Oct  1 09:10:15 np0005464214 ecstatic_banzai[94250]:            },
Oct  1 09:10:15 np0005464214 ecstatic_banzai[94250]:            "type": "block",
Oct  1 09:10:15 np0005464214 ecstatic_banzai[94250]:            "vg_name": "ceph_vg1"
Oct  1 09:10:15 np0005464214 ecstatic_banzai[94250]:        }
Oct  1 09:10:15 np0005464214 ecstatic_banzai[94250]:    ],
Oct  1 09:10:15 np0005464214 ecstatic_banzai[94250]:    "2": [
Oct  1 09:10:15 np0005464214 ecstatic_banzai[94250]:        {
Oct  1 09:10:15 np0005464214 ecstatic_banzai[94250]:            "devices": [
Oct  1 09:10:15 np0005464214 ecstatic_banzai[94250]:                "/dev/loop5"
Oct  1 09:10:15 np0005464214 ecstatic_banzai[94250]:            ],
Oct  1 09:10:15 np0005464214 ecstatic_banzai[94250]:            "lv_name": "ceph_lv2",
Oct  1 09:10:15 np0005464214 ecstatic_banzai[94250]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  1 09:10:15 np0005464214 ecstatic_banzai[94250]:            "lv_size": "21470642176",
Oct  1 09:10:15 np0005464214 ecstatic_banzai[94250]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c4c937e2-a8a8-47c3-af37-fdedb6fff1f9,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 09:10:15 np0005464214 ecstatic_banzai[94250]:            "lv_uuid": "mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK",
Oct  1 09:10:15 np0005464214 ecstatic_banzai[94250]:            "name": "ceph_lv2",
Oct  1 09:10:15 np0005464214 ecstatic_banzai[94250]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  1 09:10:15 np0005464214 ecstatic_banzai[94250]:            "tags": {
Oct  1 09:10:15 np0005464214 ecstatic_banzai[94250]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  1 09:10:15 np0005464214 ecstatic_banzai[94250]:                "ceph.block_uuid": "mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK",
Oct  1 09:10:15 np0005464214 ecstatic_banzai[94250]:                "ceph.cephx_lockbox_secret": "",
Oct  1 09:10:15 np0005464214 ecstatic_banzai[94250]:                "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 09:10:15 np0005464214 ecstatic_banzai[94250]:                "ceph.cluster_name": "ceph",
Oct  1 09:10:15 np0005464214 ecstatic_banzai[94250]:                "ceph.crush_device_class": "",
Oct  1 09:10:15 np0005464214 ecstatic_banzai[94250]:                "ceph.encrypted": "0",
Oct  1 09:10:15 np0005464214 ecstatic_banzai[94250]:                "ceph.osd_fsid": "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9",
Oct  1 09:10:15 np0005464214 ecstatic_banzai[94250]:                "ceph.osd_id": "2",
Oct  1 09:10:15 np0005464214 ecstatic_banzai[94250]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 09:10:15 np0005464214 ecstatic_banzai[94250]:                "ceph.type": "block",
Oct  1 09:10:15 np0005464214 ecstatic_banzai[94250]:                "ceph.vdo": "0"
Oct  1 09:10:15 np0005464214 ecstatic_banzai[94250]:            },
Oct  1 09:10:15 np0005464214 ecstatic_banzai[94250]:            "type": "block",
Oct  1 09:10:15 np0005464214 ecstatic_banzai[94250]:            "vg_name": "ceph_vg2"
Oct  1 09:10:15 np0005464214 ecstatic_banzai[94250]:        }
Oct  1 09:10:15 np0005464214 ecstatic_banzai[94250]:    ]
Oct  1 09:10:15 np0005464214 ecstatic_banzai[94250]: }
Oct  1 09:10:15 np0005464214 podman[94316]: 2025-10-01 13:10:15.163179081 +0000 UTC m=+0.125330582 container init a128323eb42790b6a7dd8243c8b6716ab15655195043608f5b0a64519f006296 (image=quay.io/ceph/ceph:v18, name=blissful_johnson, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 09:10:15 np0005464214 podman[94316]: 2025-10-01 13:10:15.170687811 +0000 UTC m=+0.132839272 container start a128323eb42790b6a7dd8243c8b6716ab15655195043608f5b0a64519f006296 (image=quay.io/ceph/ceph:v18, name=blissful_johnson, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Oct  1 09:10:15 np0005464214 podman[94316]: 2025-10-01 13:10:15.174535659 +0000 UTC m=+0.136687130 container attach a128323eb42790b6a7dd8243c8b6716ab15655195043608f5b0a64519f006296 (image=quay.io/ceph/ceph:v18, name=blissful_johnson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default)
Oct  1 09:10:15 np0005464214 systemd[1]: libpod-df09d676e2ac60a569739ca834c274745e5606dead18ca98fd29701532ef457e.scope: Deactivated successfully.
Oct  1 09:10:15 np0005464214 podman[94234]: 2025-10-01 13:10:15.18419229 +0000 UTC m=+0.896691203 container died df09d676e2ac60a569739ca834c274745e5606dead18ca98fd29701532ef457e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_banzai, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 09:10:15 np0005464214 systemd[1]: var-lib-containers-storage-overlay-eec945a91a4a2f1796a9b2b1d2a7d80c2eac515c2cc9239287d14241bf2ed54a-merged.mount: Deactivated successfully.
Oct  1 09:10:15 np0005464214 podman[94234]: 2025-10-01 13:10:15.24881397 +0000 UTC m=+0.961312883 container remove df09d676e2ac60a569739ca834c274745e5606dead18ca98fd29701532ef457e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_banzai, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 09:10:15 np0005464214 systemd[1]: libpod-conmon-df09d676e2ac60a569739ca834c274745e5606dead18ca98fd29701532ef457e.scope: Deactivated successfully.
Oct  1 09:10:15 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e20 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:10:15 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e20 do_prune osdmap full prune enabled
Oct  1 09:10:15 np0005464214 ceph-mon[74802]: Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Oct  1 09:10:15 np0005464214 ceph-mon[74802]: from='client.? 192.168.122.100:0/1651900628' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct  1 09:10:15 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e21 e21: 3 total, 3 up, 3 in
Oct  1 09:10:15 np0005464214 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e21: 3 total, 3 up, 3 in
Oct  1 09:10:15 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Oct  1 09:10:15 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1231343553' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct  1 09:10:15 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 21 pg[6.0( empty local-lis/les=20/21 n=0 ec=20/20 lis/c=0/0 les/c/f=0/0/0 sis=20) [0] r=0 lpr=20 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:10:15 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v56: 6 pgs: 2 unknown, 2 creating+peering, 2 active+clean; 449 KiB data, 479 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:10:15 np0005464214 podman[94514]: 2025-10-01 13:10:15.850476876 +0000 UTC m=+0.047287566 container create f539716568eb1e47eaad31de480da34dea6d94a68219927ac59d85d6bbdb976a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_dubinsky, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Oct  1 09:10:15 np0005464214 systemd[1]: Started libpod-conmon-f539716568eb1e47eaad31de480da34dea6d94a68219927ac59d85d6bbdb976a.scope.
Oct  1 09:10:15 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:10:15 np0005464214 podman[94514]: 2025-10-01 13:10:15.91916441 +0000 UTC m=+0.115975180 container init f539716568eb1e47eaad31de480da34dea6d94a68219927ac59d85d6bbdb976a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_dubinsky, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Oct  1 09:10:15 np0005464214 podman[94514]: 2025-10-01 13:10:15.823418727 +0000 UTC m=+0.020229507 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 09:10:15 np0005464214 podman[94514]: 2025-10-01 13:10:15.926988119 +0000 UTC m=+0.123798859 container start f539716568eb1e47eaad31de480da34dea6d94a68219927ac59d85d6bbdb976a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_dubinsky, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 09:10:15 np0005464214 jovial_dubinsky[94530]: 167 167
Oct  1 09:10:15 np0005464214 systemd[1]: libpod-f539716568eb1e47eaad31de480da34dea6d94a68219927ac59d85d6bbdb976a.scope: Deactivated successfully.
Oct  1 09:10:15 np0005464214 podman[94514]: 2025-10-01 13:10:15.933965855 +0000 UTC m=+0.130776545 container attach f539716568eb1e47eaad31de480da34dea6d94a68219927ac59d85d6bbdb976a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_dubinsky, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Oct  1 09:10:15 np0005464214 podman[94514]: 2025-10-01 13:10:15.934347256 +0000 UTC m=+0.131157946 container died f539716568eb1e47eaad31de480da34dea6d94a68219927ac59d85d6bbdb976a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_dubinsky, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Oct  1 09:10:15 np0005464214 systemd[1]: var-lib-containers-storage-overlay-f2b48295ced68d863e3d8dcfaffa10abb236cb2e428c5fcd98698d40fe90ad70-merged.mount: Deactivated successfully.
Oct  1 09:10:15 np0005464214 podman[94514]: 2025-10-01 13:10:15.976810025 +0000 UTC m=+0.173620715 container remove f539716568eb1e47eaad31de480da34dea6d94a68219927ac59d85d6bbdb976a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_dubinsky, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 09:10:15 np0005464214 systemd[1]: libpod-conmon-f539716568eb1e47eaad31de480da34dea6d94a68219927ac59d85d6bbdb976a.scope: Deactivated successfully.
Oct  1 09:10:16 np0005464214 podman[94553]: 2025-10-01 13:10:16.151819358 +0000 UTC m=+0.038488159 container create b982f4955fe9952f5c7cf1c538843ba820c1704b760ce9cb289397b970962f7b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_shirley, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 09:10:16 np0005464214 systemd[1]: Started libpod-conmon-b982f4955fe9952f5c7cf1c538843ba820c1704b760ce9cb289397b970962f7b.scope.
Oct  1 09:10:16 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:10:16 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/970d2791ef6d3810437fea926c8a580938b4e8e17a5437b2b593e10f386dab6e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 09:10:16 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/970d2791ef6d3810437fea926c8a580938b4e8e17a5437b2b593e10f386dab6e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 09:10:16 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/970d2791ef6d3810437fea926c8a580938b4e8e17a5437b2b593e10f386dab6e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 09:10:16 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/970d2791ef6d3810437fea926c8a580938b4e8e17a5437b2b593e10f386dab6e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 09:10:16 np0005464214 podman[94553]: 2025-10-01 13:10:16.133258918 +0000 UTC m=+0.019927739 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 09:10:16 np0005464214 podman[94553]: 2025-10-01 13:10:16.234006781 +0000 UTC m=+0.120675632 container init b982f4955fe9952f5c7cf1c538843ba820c1704b760ce9cb289397b970962f7b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_shirley, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0)
Oct  1 09:10:16 np0005464214 podman[94553]: 2025-10-01 13:10:16.243117305 +0000 UTC m=+0.129786096 container start b982f4955fe9952f5c7cf1c538843ba820c1704b760ce9cb289397b970962f7b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_shirley, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 09:10:16 np0005464214 podman[94553]: 2025-10-01 13:10:16.248841636 +0000 UTC m=+0.135510437 container attach b982f4955fe9952f5c7cf1c538843ba820c1704b760ce9cb289397b970962f7b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_shirley, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 09:10:16 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e21 do_prune osdmap full prune enabled
Oct  1 09:10:16 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1231343553' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct  1 09:10:16 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e22 e22: 3 total, 3 up, 3 in
Oct  1 09:10:16 np0005464214 ceph-mon[74802]: from='client.? 192.168.122.100:0/1231343553' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct  1 09:10:16 np0005464214 blissful_johnson[94335]: pool 'cephfs.cephfs.data' created
Oct  1 09:10:16 np0005464214 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e22: 3 total, 3 up, 3 in
Oct  1 09:10:16 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 22 pg[7.0( empty local-lis/les=0/0 n=0 ec=22/22 lis/c=0/0 les/c/f=0/0/0 sis=22) [1] r=0 lpr=22 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:10:16 np0005464214 systemd[1]: libpod-a128323eb42790b6a7dd8243c8b6716ab15655195043608f5b0a64519f006296.scope: Deactivated successfully.
Oct  1 09:10:16 np0005464214 podman[94316]: 2025-10-01 13:10:16.739992096 +0000 UTC m=+1.702143577 container died a128323eb42790b6a7dd8243c8b6716ab15655195043608f5b0a64519f006296 (image=quay.io/ceph/ceph:v18, name=blissful_johnson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 09:10:16 np0005464214 systemd[1]: var-lib-containers-storage-overlay-fae9acd5d3501a1d8943a5c0d522a00416aefdd2f607303d52e390a2da93cd02-merged.mount: Deactivated successfully.
Oct  1 09:10:16 np0005464214 podman[94316]: 2025-10-01 13:10:16.784569775 +0000 UTC m=+1.746721226 container remove a128323eb42790b6a7dd8243c8b6716ab15655195043608f5b0a64519f006296 (image=quay.io/ceph/ceph:v18, name=blissful_johnson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 09:10:16 np0005464214 systemd[1]: libpod-conmon-a128323eb42790b6a7dd8243c8b6716ab15655195043608f5b0a64519f006296.scope: Deactivated successfully.
Oct  1 09:10:17 np0005464214 python3[94616]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid eb4b6ead-01d1-53b3-a52a-47dcc600555f -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable vms rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  1 09:10:17 np0005464214 podman[94626]: 2025-10-01 13:10:17.16489882 +0000 UTC m=+0.050496576 container create fe221d977a14efab4edd226c83bee69c03e9ea0a9c9be6c216e0ef16ddd7de21 (image=quay.io/ceph/ceph:v18, name=unruffled_bardeen, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Oct  1 09:10:17 np0005464214 systemd[1]: Started libpod-conmon-fe221d977a14efab4edd226c83bee69c03e9ea0a9c9be6c216e0ef16ddd7de21.scope.
Oct  1 09:10:17 np0005464214 podman[94626]: 2025-10-01 13:10:17.140936809 +0000 UTC m=+0.026534555 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  1 09:10:17 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:10:17 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/383ed642f3137dbcdc107e8cf21eb05f8ca1ce7fa63f87c013b2babe277d349e/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 09:10:17 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/383ed642f3137dbcdc107e8cf21eb05f8ca1ce7fa63f87c013b2babe277d349e/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 09:10:17 np0005464214 podman[94626]: 2025-10-01 13:10:17.257172075 +0000 UTC m=+0.142769801 container init fe221d977a14efab4edd226c83bee69c03e9ea0a9c9be6c216e0ef16ddd7de21 (image=quay.io/ceph/ceph:v18, name=unruffled_bardeen, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 09:10:17 np0005464214 podman[94626]: 2025-10-01 13:10:17.264293205 +0000 UTC m=+0.149890921 container start fe221d977a14efab4edd226c83bee69c03e9ea0a9c9be6c216e0ef16ddd7de21 (image=quay.io/ceph/ceph:v18, name=unruffled_bardeen, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct  1 09:10:17 np0005464214 podman[94626]: 2025-10-01 13:10:17.268132442 +0000 UTC m=+0.153730208 container attach fe221d977a14efab4edd226c83bee69c03e9ea0a9c9be6c216e0ef16ddd7de21 (image=quay.io/ceph/ceph:v18, name=unruffled_bardeen, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 09:10:17 np0005464214 beautiful_shirley[94569]: {
Oct  1 09:10:17 np0005464214 beautiful_shirley[94569]:    "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982": {
Oct  1 09:10:17 np0005464214 beautiful_shirley[94569]:        "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 09:10:17 np0005464214 beautiful_shirley[94569]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  1 09:10:17 np0005464214 beautiful_shirley[94569]:        "osd_id": 0,
Oct  1 09:10:17 np0005464214 beautiful_shirley[94569]:        "osd_uuid": "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982",
Oct  1 09:10:17 np0005464214 beautiful_shirley[94569]:        "type": "bluestore"
Oct  1 09:10:17 np0005464214 beautiful_shirley[94569]:    },
Oct  1 09:10:17 np0005464214 beautiful_shirley[94569]:    "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9": {
Oct  1 09:10:17 np0005464214 beautiful_shirley[94569]:        "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 09:10:17 np0005464214 beautiful_shirley[94569]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  1 09:10:17 np0005464214 beautiful_shirley[94569]:        "osd_id": 2,
Oct  1 09:10:17 np0005464214 beautiful_shirley[94569]:        "osd_uuid": "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9",
Oct  1 09:10:17 np0005464214 beautiful_shirley[94569]:        "type": "bluestore"
Oct  1 09:10:17 np0005464214 beautiful_shirley[94569]:    },
Oct  1 09:10:17 np0005464214 beautiful_shirley[94569]:    "f5852bc7-e830-489a-b8a9-42dfbbe71426": {
Oct  1 09:10:17 np0005464214 beautiful_shirley[94569]:        "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 09:10:17 np0005464214 beautiful_shirley[94569]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  1 09:10:17 np0005464214 beautiful_shirley[94569]:        "osd_id": 1,
Oct  1 09:10:17 np0005464214 beautiful_shirley[94569]:        "osd_uuid": "f5852bc7-e830-489a-b8a9-42dfbbe71426",
Oct  1 09:10:17 np0005464214 beautiful_shirley[94569]:        "type": "bluestore"
Oct  1 09:10:17 np0005464214 beautiful_shirley[94569]:    }
Oct  1 09:10:17 np0005464214 beautiful_shirley[94569]: }
Oct  1 09:10:17 np0005464214 systemd[1]: libpod-b982f4955fe9952f5c7cf1c538843ba820c1704b760ce9cb289397b970962f7b.scope: Deactivated successfully.
Oct  1 09:10:17 np0005464214 podman[94553]: 2025-10-01 13:10:17.322814254 +0000 UTC m=+1.209483075 container died b982f4955fe9952f5c7cf1c538843ba820c1704b760ce9cb289397b970962f7b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_shirley, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507)
Oct  1 09:10:17 np0005464214 systemd[1]: libpod-b982f4955fe9952f5c7cf1c538843ba820c1704b760ce9cb289397b970962f7b.scope: Consumed 1.074s CPU time.
Oct  1 09:10:17 np0005464214 systemd[1]: var-lib-containers-storage-overlay-970d2791ef6d3810437fea926c8a580938b4e8e17a5437b2b593e10f386dab6e-merged.mount: Deactivated successfully.
Oct  1 09:10:17 np0005464214 podman[94553]: 2025-10-01 13:10:17.400296175 +0000 UTC m=+1.286965006 container remove b982f4955fe9952f5c7cf1c538843ba820c1704b760ce9cb289397b970962f7b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_shirley, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 09:10:17 np0005464214 systemd[1]: libpod-conmon-b982f4955fe9952f5c7cf1c538843ba820c1704b760ce9cb289397b970962f7b.scope: Deactivated successfully.
Oct  1 09:10:17 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  1 09:10:17 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:10:17 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  1 09:10:17 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:10:17 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e22 do_prune osdmap full prune enabled
Oct  1 09:10:17 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v58: 7 pgs: 1 creating+peering, 6 active+clean; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:10:17 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e23 e23: 3 total, 3 up, 3 in
Oct  1 09:10:17 np0005464214 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e23: 3 total, 3 up, 3 in
Oct  1 09:10:17 np0005464214 ceph-mon[74802]: from='client.? 192.168.122.100:0/1231343553' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct  1 09:10:17 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:10:17 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:10:17 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 23 pg[7.0( empty local-lis/les=22/23 n=0 ec=22/22 lis/c=0/0 les/c/f=0/0/0 sis=22) [1] r=0 lpr=22 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:10:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 09:10:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 09:10:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 09:10:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 09:10:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 09:10:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 09:10:17 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"} v 0) v1
Oct  1 09:10:17 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3649487019' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]: dispatch
Oct  1 09:10:18 np0005464214 podman[94916]: 2025-10-01 13:10:18.523525013 +0000 UTC m=+0.087811692 container exec dfadbb96d7d51fa7c2ed721145616ced603621ae12bae5125feaf16532ef7320 (image=quay.io/ceph/ceph:v18, name=ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-mon-compute-0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Oct  1 09:10:18 np0005464214 podman[94916]: 2025-10-01 13:10:18.640102388 +0000 UTC m=+0.204388997 container exec_died dfadbb96d7d51fa7c2ed721145616ced603621ae12bae5125feaf16532ef7320 (image=quay.io/ceph/ceph:v18, name=ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-mon-compute-0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 09:10:18 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e23 do_prune osdmap full prune enabled
Oct  1 09:10:18 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3649487019' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]': finished
Oct  1 09:10:18 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e24 e24: 3 total, 3 up, 3 in
Oct  1 09:10:18 np0005464214 unruffled_bardeen[94649]: enabled application 'rbd' on pool 'vms'
Oct  1 09:10:18 np0005464214 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e24: 3 total, 3 up, 3 in
Oct  1 09:10:18 np0005464214 ceph-mon[74802]: from='client.? 192.168.122.100:0/3649487019' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]: dispatch
Oct  1 09:10:18 np0005464214 systemd[1]: libpod-fe221d977a14efab4edd226c83bee69c03e9ea0a9c9be6c216e0ef16ddd7de21.scope: Deactivated successfully.
Oct  1 09:10:18 np0005464214 podman[94626]: 2025-10-01 13:10:18.767449606 +0000 UTC m=+1.653047322 container died fe221d977a14efab4edd226c83bee69c03e9ea0a9c9be6c216e0ef16ddd7de21 (image=quay.io/ceph/ceph:v18, name=unruffled_bardeen, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Oct  1 09:10:18 np0005464214 systemd[1]: var-lib-containers-storage-overlay-383ed642f3137dbcdc107e8cf21eb05f8ca1ce7fa63f87c013b2babe277d349e-merged.mount: Deactivated successfully.
Oct  1 09:10:18 np0005464214 podman[94626]: 2025-10-01 13:10:18.814859384 +0000 UTC m=+1.700457100 container remove fe221d977a14efab4edd226c83bee69c03e9ea0a9c9be6c216e0ef16ddd7de21 (image=quay.io/ceph/ceph:v18, name=unruffled_bardeen, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 09:10:18 np0005464214 systemd[1]: libpod-conmon-fe221d977a14efab4edd226c83bee69c03e9ea0a9c9be6c216e0ef16ddd7de21.scope: Deactivated successfully.
Oct  1 09:10:19 np0005464214 python3[95043]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid eb4b6ead-01d1-53b3-a52a-47dcc600555f -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable volumes rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  1 09:10:19 np0005464214 podman[95065]: 2025-10-01 13:10:19.206515607 +0000 UTC m=+0.057566854 container create ee02a3de24f01fa603615be977c89929be8c211d3c821ca008e5bb59391a544a (image=quay.io/ceph/ceph:v18, name=relaxed_volhard, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 09:10:19 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  1 09:10:19 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:10:19 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  1 09:10:19 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:10:19 np0005464214 systemd[1]: Started libpod-conmon-ee02a3de24f01fa603615be977c89929be8c211d3c821ca008e5bb59391a544a.scope.
Oct  1 09:10:19 np0005464214 podman[95065]: 2025-10-01 13:10:19.178103181 +0000 UTC m=+0.029154468 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  1 09:10:19 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:10:19 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ede26fc9a0804c8ebc090aa045b55e049428b729189073d45ff04a3689d1b338/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 09:10:19 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ede26fc9a0804c8ebc090aa045b55e049428b729189073d45ff04a3689d1b338/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 09:10:19 np0005464214 podman[95065]: 2025-10-01 13:10:19.30194531 +0000 UTC m=+0.152996587 container init ee02a3de24f01fa603615be977c89929be8c211d3c821ca008e5bb59391a544a (image=quay.io/ceph/ceph:v18, name=relaxed_volhard, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 09:10:19 np0005464214 podman[95065]: 2025-10-01 13:10:19.311619691 +0000 UTC m=+0.162670928 container start ee02a3de24f01fa603615be977c89929be8c211d3c821ca008e5bb59391a544a (image=quay.io/ceph/ceph:v18, name=relaxed_volhard, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 09:10:19 np0005464214 podman[95065]: 2025-10-01 13:10:19.315782048 +0000 UTC m=+0.166833305 container attach ee02a3de24f01fa603615be977c89929be8c211d3c821ca008e5bb59391a544a (image=quay.io/ceph/ceph:v18, name=relaxed_volhard, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 09:10:19 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v61: 7 pgs: 1 creating+peering, 6 active+clean; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:10:19 np0005464214 ceph-mon[74802]: from='client.? 192.168.122.100:0/3649487019' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]': finished
Oct  1 09:10:19 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:10:19 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:10:19 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"} v 0) v1
Oct  1 09:10:19 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/328615138' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]: dispatch
Oct  1 09:10:19 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  1 09:10:19 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  1 09:10:19 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  1 09:10:19 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  1 09:10:19 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  1 09:10:19 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:10:19 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev 87983113-12b6-4c0a-829f-e0939610e618 does not exist
Oct  1 09:10:19 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev f5823776-cfe4-4ef1-9732-8ed65589213f does not exist
Oct  1 09:10:19 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev fd3765b9-64d2-4d47-8b90-d3fdd0320e1b does not exist
Oct  1 09:10:19 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  1 09:10:19 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  1 09:10:19 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  1 09:10:19 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  1 09:10:19 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  1 09:10:19 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  1 09:10:20 np0005464214 ceph-mon[74802]: log_channel(cluster) log [WRN] : Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Oct  1 09:10:20 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e24 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:10:20 np0005464214 podman[95386]: 2025-10-01 13:10:20.601420886 +0000 UTC m=+0.046852124 container create 4d97afa6e8f8ecb71e42accf0f835a52edff00864d30a4111e47efb5a1b96b3f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_ride, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef)
Oct  1 09:10:20 np0005464214 systemd[1]: Started libpod-conmon-4d97afa6e8f8ecb71e42accf0f835a52edff00864d30a4111e47efb5a1b96b3f.scope.
Oct  1 09:10:20 np0005464214 podman[95386]: 2025-10-01 13:10:20.580331485 +0000 UTC m=+0.025762723 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 09:10:20 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:10:20 np0005464214 podman[95386]: 2025-10-01 13:10:20.692993571 +0000 UTC m=+0.138424809 container init 4d97afa6e8f8ecb71e42accf0f835a52edff00864d30a4111e47efb5a1b96b3f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_ride, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Oct  1 09:10:20 np0005464214 podman[95386]: 2025-10-01 13:10:20.700580783 +0000 UTC m=+0.146011981 container start 4d97afa6e8f8ecb71e42accf0f835a52edff00864d30a4111e47efb5a1b96b3f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_ride, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct  1 09:10:20 np0005464214 hungry_ride[95402]: 167 167
Oct  1 09:10:20 np0005464214 systemd[1]: libpod-4d97afa6e8f8ecb71e42accf0f835a52edff00864d30a4111e47efb5a1b96b3f.scope: Deactivated successfully.
Oct  1 09:10:20 np0005464214 podman[95386]: 2025-10-01 13:10:20.704606297 +0000 UTC m=+0.150037555 container attach 4d97afa6e8f8ecb71e42accf0f835a52edff00864d30a4111e47efb5a1b96b3f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_ride, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 09:10:20 np0005464214 podman[95386]: 2025-10-01 13:10:20.705690426 +0000 UTC m=+0.151121634 container died 4d97afa6e8f8ecb71e42accf0f835a52edff00864d30a4111e47efb5a1b96b3f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_ride, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 09:10:20 np0005464214 systemd[1]: var-lib-containers-storage-overlay-409fbfc8faca09b15e36063b7a65c36634c84600a44616f236b5e471a9967712-merged.mount: Deactivated successfully.
Oct  1 09:10:20 np0005464214 podman[95386]: 2025-10-01 13:10:20.750183873 +0000 UTC m=+0.195615081 container remove 4d97afa6e8f8ecb71e42accf0f835a52edff00864d30a4111e47efb5a1b96b3f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_ride, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 09:10:20 np0005464214 systemd[1]: libpod-conmon-4d97afa6e8f8ecb71e42accf0f835a52edff00864d30a4111e47efb5a1b96b3f.scope: Deactivated successfully.
Oct  1 09:10:20 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e24 do_prune osdmap full prune enabled
Oct  1 09:10:20 np0005464214 ceph-mon[74802]: from='client.? 192.168.122.100:0/328615138' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]: dispatch
Oct  1 09:10:20 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  1 09:10:20 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:10:20 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  1 09:10:20 np0005464214 ceph-mon[74802]: Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Oct  1 09:10:20 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/328615138' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]': finished
Oct  1 09:10:20 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e25 e25: 3 total, 3 up, 3 in
Oct  1 09:10:20 np0005464214 relaxed_volhard[95092]: enabled application 'rbd' on pool 'volumes'
Oct  1 09:10:20 np0005464214 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e25: 3 total, 3 up, 3 in
Oct  1 09:10:20 np0005464214 systemd[1]: libpod-ee02a3de24f01fa603615be977c89929be8c211d3c821ca008e5bb59391a544a.scope: Deactivated successfully.
Oct  1 09:10:20 np0005464214 podman[95065]: 2025-10-01 13:10:20.804582887 +0000 UTC m=+1.655634164 container died ee02a3de24f01fa603615be977c89929be8c211d3c821ca008e5bb59391a544a (image=quay.io/ceph/ceph:v18, name=relaxed_volhard, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Oct  1 09:10:20 np0005464214 systemd[1]: var-lib-containers-storage-overlay-ede26fc9a0804c8ebc090aa045b55e049428b729189073d45ff04a3689d1b338-merged.mount: Deactivated successfully.
Oct  1 09:10:20 np0005464214 podman[95065]: 2025-10-01 13:10:20.861511603 +0000 UTC m=+1.712562840 container remove ee02a3de24f01fa603615be977c89929be8c211d3c821ca008e5bb59391a544a (image=quay.io/ceph/ceph:v18, name=relaxed_volhard, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct  1 09:10:20 np0005464214 systemd[1]: libpod-conmon-ee02a3de24f01fa603615be977c89929be8c211d3c821ca008e5bb59391a544a.scope: Deactivated successfully.
Oct  1 09:10:20 np0005464214 podman[95437]: 2025-10-01 13:10:20.946707699 +0000 UTC m=+0.056937506 container create 41834164a224c22eb2b26f8e50ad29c98f67ffdb6a54f8b96a0aa7b6dff39e40 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_mendel, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct  1 09:10:20 np0005464214 systemd[1]: Started libpod-conmon-41834164a224c22eb2b26f8e50ad29c98f67ffdb6a54f8b96a0aa7b6dff39e40.scope.
Oct  1 09:10:21 np0005464214 podman[95437]: 2025-10-01 13:10:20.921335238 +0000 UTC m=+0.031565115 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 09:10:21 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:10:21 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bafcd02a64153a94e76658da86b229d7f8b92a9eef3a24c79acfee93e045cd2b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 09:10:21 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bafcd02a64153a94e76658da86b229d7f8b92a9eef3a24c79acfee93e045cd2b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 09:10:21 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bafcd02a64153a94e76658da86b229d7f8b92a9eef3a24c79acfee93e045cd2b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 09:10:21 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bafcd02a64153a94e76658da86b229d7f8b92a9eef3a24c79acfee93e045cd2b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 09:10:21 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bafcd02a64153a94e76658da86b229d7f8b92a9eef3a24c79acfee93e045cd2b/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  1 09:10:21 np0005464214 podman[95437]: 2025-10-01 13:10:21.044593401 +0000 UTC m=+0.154823248 container init 41834164a224c22eb2b26f8e50ad29c98f67ffdb6a54f8b96a0aa7b6dff39e40 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_mendel, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct  1 09:10:21 np0005464214 podman[95437]: 2025-10-01 13:10:21.057534964 +0000 UTC m=+0.167764761 container start 41834164a224c22eb2b26f8e50ad29c98f67ffdb6a54f8b96a0aa7b6dff39e40 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_mendel, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Oct  1 09:10:21 np0005464214 podman[95437]: 2025-10-01 13:10:21.061360891 +0000 UTC m=+0.171590698 container attach 41834164a224c22eb2b26f8e50ad29c98f67ffdb6a54f8b96a0aa7b6dff39e40 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_mendel, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS)
Oct  1 09:10:21 np0005464214 python3[95482]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid eb4b6ead-01d1-53b3-a52a-47dcc600555f -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable backups rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  1 09:10:21 np0005464214 podman[95485]: 2025-10-01 13:10:21.275536262 +0000 UTC m=+0.051415322 container create 40dc41f08767750010b8081fb13e0ebbb9297a6b1fea498fef30478056723a23 (image=quay.io/ceph/ceph:v18, name=lucid_williams, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 09:10:21 np0005464214 systemd[1]: Started libpod-conmon-40dc41f08767750010b8081fb13e0ebbb9297a6b1fea498fef30478056723a23.scope.
Oct  1 09:10:21 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:10:21 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a34e7f57e9b8ba021c373e3e23e870e00a71877898f59ddfbcf9bbdc181a03fe/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 09:10:21 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a34e7f57e9b8ba021c373e3e23e870e00a71877898f59ddfbcf9bbdc181a03fe/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 09:10:21 np0005464214 podman[95485]: 2025-10-01 13:10:21.259184593 +0000 UTC m=+0.035063673 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  1 09:10:21 np0005464214 podman[95485]: 2025-10-01 13:10:21.370547203 +0000 UTC m=+0.146426263 container init 40dc41f08767750010b8081fb13e0ebbb9297a6b1fea498fef30478056723a23 (image=quay.io/ceph/ceph:v18, name=lucid_williams, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Oct  1 09:10:21 np0005464214 podman[95485]: 2025-10-01 13:10:21.375932054 +0000 UTC m=+0.151811124 container start 40dc41f08767750010b8081fb13e0ebbb9297a6b1fea498fef30478056723a23 (image=quay.io/ceph/ceph:v18, name=lucid_williams, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef)
Oct  1 09:10:21 np0005464214 podman[95485]: 2025-10-01 13:10:21.378876797 +0000 UTC m=+0.154755867 container attach 40dc41f08767750010b8081fb13e0ebbb9297a6b1fea498fef30478056723a23 (image=quay.io/ceph/ceph:v18, name=lucid_williams, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct  1 09:10:21 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v63: 7 pgs: 1 creating+peering, 6 active+clean; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:10:21 np0005464214 ceph-mon[74802]: from='client.? 192.168.122.100:0/328615138' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]': finished
Oct  1 09:10:21 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"} v 0) v1
Oct  1 09:10:21 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1460613785' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]: dispatch
Oct  1 09:10:22 np0005464214 wizardly_mendel[95475]: --> passed data devices: 0 physical, 3 LVM
Oct  1 09:10:22 np0005464214 wizardly_mendel[95475]: --> relative data size: 1.0
Oct  1 09:10:22 np0005464214 wizardly_mendel[95475]: --> All data devices are unavailable
Oct  1 09:10:22 np0005464214 systemd[1]: libpod-41834164a224c22eb2b26f8e50ad29c98f67ffdb6a54f8b96a0aa7b6dff39e40.scope: Deactivated successfully.
Oct  1 09:10:22 np0005464214 systemd[1]: libpod-41834164a224c22eb2b26f8e50ad29c98f67ffdb6a54f8b96a0aa7b6dff39e40.scope: Consumed 1.034s CPU time.
Oct  1 09:10:22 np0005464214 podman[95548]: 2025-10-01 13:10:22.177288745 +0000 UTC m=+0.027990036 container died 41834164a224c22eb2b26f8e50ad29c98f67ffdb6a54f8b96a0aa7b6dff39e40 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_mendel, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default)
Oct  1 09:10:22 np0005464214 systemd[1]: var-lib-containers-storage-overlay-bafcd02a64153a94e76658da86b229d7f8b92a9eef3a24c79acfee93e045cd2b-merged.mount: Deactivated successfully.
Oct  1 09:10:22 np0005464214 podman[95548]: 2025-10-01 13:10:22.229758084 +0000 UTC m=+0.080459355 container remove 41834164a224c22eb2b26f8e50ad29c98f67ffdb6a54f8b96a0aa7b6dff39e40 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_mendel, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 09:10:22 np0005464214 systemd[1]: libpod-conmon-41834164a224c22eb2b26f8e50ad29c98f67ffdb6a54f8b96a0aa7b6dff39e40.scope: Deactivated successfully.
Oct  1 09:10:22 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e25 do_prune osdmap full prune enabled
Oct  1 09:10:22 np0005464214 ceph-mon[74802]: from='client.? 192.168.122.100:0/1460613785' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]: dispatch
Oct  1 09:10:22 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1460613785' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]': finished
Oct  1 09:10:22 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e26 e26: 3 total, 3 up, 3 in
Oct  1 09:10:22 np0005464214 lucid_williams[95500]: enabled application 'rbd' on pool 'backups'
Oct  1 09:10:22 np0005464214 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e26: 3 total, 3 up, 3 in
Oct  1 09:10:22 np0005464214 systemd[1]: libpod-40dc41f08767750010b8081fb13e0ebbb9297a6b1fea498fef30478056723a23.scope: Deactivated successfully.
Oct  1 09:10:22 np0005464214 podman[95485]: 2025-10-01 13:10:22.839415894 +0000 UTC m=+1.615294994 container died 40dc41f08767750010b8081fb13e0ebbb9297a6b1fea498fef30478056723a23 (image=quay.io/ceph/ceph:v18, name=lucid_williams, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 09:10:22 np0005464214 systemd[1]: var-lib-containers-storage-overlay-a34e7f57e9b8ba021c373e3e23e870e00a71877898f59ddfbcf9bbdc181a03fe-merged.mount: Deactivated successfully.
Oct  1 09:10:22 np0005464214 podman[95485]: 2025-10-01 13:10:22.910911927 +0000 UTC m=+1.686790997 container remove 40dc41f08767750010b8081fb13e0ebbb9297a6b1fea498fef30478056723a23 (image=quay.io/ceph/ceph:v18, name=lucid_williams, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 09:10:22 np0005464214 systemd[1]: libpod-conmon-40dc41f08767750010b8081fb13e0ebbb9297a6b1fea498fef30478056723a23.scope: Deactivated successfully.
Oct  1 09:10:23 np0005464214 podman[95715]: 2025-10-01 13:10:23.036808454 +0000 UTC m=+0.069712123 container create fa32f77d862dcd56db9d121b8ecb09908750b3e4264838096ea15b806550cf52 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_cori, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 09:10:23 np0005464214 systemd[1]: Started libpod-conmon-fa32f77d862dcd56db9d121b8ecb09908750b3e4264838096ea15b806550cf52.scope.
Oct  1 09:10:23 np0005464214 podman[95715]: 2025-10-01 13:10:23.008169062 +0000 UTC m=+0.041072791 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 09:10:23 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:10:23 np0005464214 podman[95715]: 2025-10-01 13:10:23.129523792 +0000 UTC m=+0.162427461 container init fa32f77d862dcd56db9d121b8ecb09908750b3e4264838096ea15b806550cf52 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_cori, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef)
Oct  1 09:10:23 np0005464214 podman[95715]: 2025-10-01 13:10:23.139591644 +0000 UTC m=+0.172495283 container start fa32f77d862dcd56db9d121b8ecb09908750b3e4264838096ea15b806550cf52 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_cori, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 09:10:23 np0005464214 podman[95715]: 2025-10-01 13:10:23.143336769 +0000 UTC m=+0.176240408 container attach fa32f77d862dcd56db9d121b8ecb09908750b3e4264838096ea15b806550cf52 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_cori, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct  1 09:10:23 np0005464214 peaceful_cori[95751]: 167 167
Oct  1 09:10:23 np0005464214 systemd[1]: libpod-fa32f77d862dcd56db9d121b8ecb09908750b3e4264838096ea15b806550cf52.scope: Deactivated successfully.
Oct  1 09:10:23 np0005464214 podman[95715]: 2025-10-01 13:10:23.147932667 +0000 UTC m=+0.180836306 container died fa32f77d862dcd56db9d121b8ecb09908750b3e4264838096ea15b806550cf52 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_cori, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct  1 09:10:23 np0005464214 systemd[1]: var-lib-containers-storage-overlay-25a77614ccb1a56593d58ff0dbb89205320cc0fffc0f424879ad52ca97696da9-merged.mount: Deactivated successfully.
Oct  1 09:10:23 np0005464214 podman[95715]: 2025-10-01 13:10:23.19048439 +0000 UTC m=+0.223388029 container remove fa32f77d862dcd56db9d121b8ecb09908750b3e4264838096ea15b806550cf52 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_cori, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 09:10:23 np0005464214 systemd[1]: libpod-conmon-fa32f77d862dcd56db9d121b8ecb09908750b3e4264838096ea15b806550cf52.scope: Deactivated successfully.
Oct  1 09:10:23 np0005464214 python3[95759]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid eb4b6ead-01d1-53b3-a52a-47dcc600555f -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable images rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  1 09:10:23 np0005464214 podman[95777]: 2025-10-01 13:10:23.393808605 +0000 UTC m=+0.079707163 container create b883b701717dc84c9ae232199017fdc418305d359c752d792ce35769028373ca (image=quay.io/ceph/ceph:v18, name=zealous_sinoussi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 09:10:23 np0005464214 podman[95777]: 2025-10-01 13:10:23.339691179 +0000 UTC m=+0.025589717 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  1 09:10:23 np0005464214 podman[95785]: 2025-10-01 13:10:23.439049933 +0000 UTC m=+0.111953947 container create af5106334232a5b7b95632fc271ba5aebf2788deeb95a5a1f3e9fd32f1989e4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_lewin, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Oct  1 09:10:23 np0005464214 systemd[1]: Started libpod-conmon-b883b701717dc84c9ae232199017fdc418305d359c752d792ce35769028373ca.scope.
Oct  1 09:10:23 np0005464214 podman[95785]: 2025-10-01 13:10:23.35720052 +0000 UTC m=+0.030104534 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 09:10:23 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:10:23 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/176131b6e05374f4257997c5f5f93b0d63c3890a0ebddbcc24e029ec34140c6d/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 09:10:23 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/176131b6e05374f4257997c5f5f93b0d63c3890a0ebddbcc24e029ec34140c6d/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 09:10:23 np0005464214 systemd[1]: Started libpod-conmon-af5106334232a5b7b95632fc271ba5aebf2788deeb95a5a1f3e9fd32f1989e4a.scope.
Oct  1 09:10:23 np0005464214 podman[95777]: 2025-10-01 13:10:23.504679552 +0000 UTC m=+0.190578090 container init b883b701717dc84c9ae232199017fdc418305d359c752d792ce35769028373ca (image=quay.io/ceph/ceph:v18, name=zealous_sinoussi, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 09:10:23 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:10:23 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/06464dec8ecda13957a9ec6d3b413d3d0e87be9afa10f362bc77912197155a11/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 09:10:23 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/06464dec8ecda13957a9ec6d3b413d3d0e87be9afa10f362bc77912197155a11/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 09:10:23 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/06464dec8ecda13957a9ec6d3b413d3d0e87be9afa10f362bc77912197155a11/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 09:10:23 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/06464dec8ecda13957a9ec6d3b413d3d0e87be9afa10f362bc77912197155a11/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 09:10:23 np0005464214 podman[95777]: 2025-10-01 13:10:23.51745613 +0000 UTC m=+0.203354658 container start b883b701717dc84c9ae232199017fdc418305d359c752d792ce35769028373ca (image=quay.io/ceph/ceph:v18, name=zealous_sinoussi, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 09:10:23 np0005464214 podman[95777]: 2025-10-01 13:10:23.521870713 +0000 UTC m=+0.207769231 container attach b883b701717dc84c9ae232199017fdc418305d359c752d792ce35769028373ca (image=quay.io/ceph/ceph:v18, name=zealous_sinoussi, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Oct  1 09:10:23 np0005464214 podman[95785]: 2025-10-01 13:10:23.532227434 +0000 UTC m=+0.205131478 container init af5106334232a5b7b95632fc271ba5aebf2788deeb95a5a1f3e9fd32f1989e4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_lewin, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 09:10:23 np0005464214 podman[95785]: 2025-10-01 13:10:23.545523436 +0000 UTC m=+0.218427410 container start af5106334232a5b7b95632fc271ba5aebf2788deeb95a5a1f3e9fd32f1989e4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_lewin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0)
Oct  1 09:10:23 np0005464214 podman[95785]: 2025-10-01 13:10:23.548915881 +0000 UTC m=+0.221819895 container attach af5106334232a5b7b95632fc271ba5aebf2788deeb95a5a1f3e9fd32f1989e4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_lewin, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef)
Oct  1 09:10:23 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v65: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:10:23 np0005464214 ceph-mon[74802]: from='client.? 192.168.122.100:0/1460613785' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]': finished
Oct  1 09:10:24 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "images", "app": "rbd"} v 0) v1
Oct  1 09:10:24 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2650268208' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]: dispatch
Oct  1 09:10:24 np0005464214 priceless_lewin[95813]: {
Oct  1 09:10:24 np0005464214 priceless_lewin[95813]:    "0": [
Oct  1 09:10:24 np0005464214 priceless_lewin[95813]:        {
Oct  1 09:10:24 np0005464214 priceless_lewin[95813]:            "devices": [
Oct  1 09:10:24 np0005464214 priceless_lewin[95813]:                "/dev/loop3"
Oct  1 09:10:24 np0005464214 priceless_lewin[95813]:            ],
Oct  1 09:10:24 np0005464214 priceless_lewin[95813]:            "lv_name": "ceph_lv0",
Oct  1 09:10:24 np0005464214 priceless_lewin[95813]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  1 09:10:24 np0005464214 priceless_lewin[95813]:            "lv_size": "21470642176",
Oct  1 09:10:24 np0005464214 priceless_lewin[95813]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 09:10:24 np0005464214 priceless_lewin[95813]:            "lv_uuid": "wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS",
Oct  1 09:10:24 np0005464214 priceless_lewin[95813]:            "name": "ceph_lv0",
Oct  1 09:10:24 np0005464214 priceless_lewin[95813]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  1 09:10:24 np0005464214 priceless_lewin[95813]:            "tags": {
Oct  1 09:10:24 np0005464214 priceless_lewin[95813]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  1 09:10:24 np0005464214 priceless_lewin[95813]:                "ceph.block_uuid": "wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS",
Oct  1 09:10:24 np0005464214 priceless_lewin[95813]:                "ceph.cephx_lockbox_secret": "",
Oct  1 09:10:24 np0005464214 priceless_lewin[95813]:                "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 09:10:24 np0005464214 priceless_lewin[95813]:                "ceph.cluster_name": "ceph",
Oct  1 09:10:24 np0005464214 priceless_lewin[95813]:                "ceph.crush_device_class": "",
Oct  1 09:10:24 np0005464214 priceless_lewin[95813]:                "ceph.encrypted": "0",
Oct  1 09:10:24 np0005464214 priceless_lewin[95813]:                "ceph.osd_fsid": "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982",
Oct  1 09:10:24 np0005464214 priceless_lewin[95813]:                "ceph.osd_id": "0",
Oct  1 09:10:24 np0005464214 priceless_lewin[95813]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 09:10:24 np0005464214 priceless_lewin[95813]:                "ceph.type": "block",
Oct  1 09:10:24 np0005464214 priceless_lewin[95813]:                "ceph.vdo": "0"
Oct  1 09:10:24 np0005464214 priceless_lewin[95813]:            },
Oct  1 09:10:24 np0005464214 priceless_lewin[95813]:            "type": "block",
Oct  1 09:10:24 np0005464214 priceless_lewin[95813]:            "vg_name": "ceph_vg0"
Oct  1 09:10:24 np0005464214 priceless_lewin[95813]:        }
Oct  1 09:10:24 np0005464214 priceless_lewin[95813]:    ],
Oct  1 09:10:24 np0005464214 priceless_lewin[95813]:    "1": [
Oct  1 09:10:24 np0005464214 priceless_lewin[95813]:        {
Oct  1 09:10:24 np0005464214 priceless_lewin[95813]:            "devices": [
Oct  1 09:10:24 np0005464214 priceless_lewin[95813]:                "/dev/loop4"
Oct  1 09:10:24 np0005464214 priceless_lewin[95813]:            ],
Oct  1 09:10:24 np0005464214 priceless_lewin[95813]:            "lv_name": "ceph_lv1",
Oct  1 09:10:24 np0005464214 priceless_lewin[95813]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  1 09:10:24 np0005464214 priceless_lewin[95813]:            "lv_size": "21470642176",
Oct  1 09:10:24 np0005464214 priceless_lewin[95813]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f5852bc7-e830-489a-b8a9-42dfbbe71426,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 09:10:24 np0005464214 priceless_lewin[95813]:            "lv_uuid": "BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY",
Oct  1 09:10:24 np0005464214 priceless_lewin[95813]:            "name": "ceph_lv1",
Oct  1 09:10:24 np0005464214 priceless_lewin[95813]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  1 09:10:24 np0005464214 priceless_lewin[95813]:            "tags": {
Oct  1 09:10:24 np0005464214 priceless_lewin[95813]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  1 09:10:24 np0005464214 priceless_lewin[95813]:                "ceph.block_uuid": "BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY",
Oct  1 09:10:24 np0005464214 priceless_lewin[95813]:                "ceph.cephx_lockbox_secret": "",
Oct  1 09:10:24 np0005464214 priceless_lewin[95813]:                "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 09:10:24 np0005464214 priceless_lewin[95813]:                "ceph.cluster_name": "ceph",
Oct  1 09:10:24 np0005464214 priceless_lewin[95813]:                "ceph.crush_device_class": "",
Oct  1 09:10:24 np0005464214 priceless_lewin[95813]:                "ceph.encrypted": "0",
Oct  1 09:10:24 np0005464214 priceless_lewin[95813]:                "ceph.osd_fsid": "f5852bc7-e830-489a-b8a9-42dfbbe71426",
Oct  1 09:10:24 np0005464214 priceless_lewin[95813]:                "ceph.osd_id": "1",
Oct  1 09:10:24 np0005464214 priceless_lewin[95813]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 09:10:24 np0005464214 priceless_lewin[95813]:                "ceph.type": "block",
Oct  1 09:10:24 np0005464214 priceless_lewin[95813]:                "ceph.vdo": "0"
Oct  1 09:10:24 np0005464214 priceless_lewin[95813]:            },
Oct  1 09:10:24 np0005464214 priceless_lewin[95813]:            "type": "block",
Oct  1 09:10:24 np0005464214 priceless_lewin[95813]:            "vg_name": "ceph_vg1"
Oct  1 09:10:24 np0005464214 priceless_lewin[95813]:        }
Oct  1 09:10:24 np0005464214 priceless_lewin[95813]:    ],
Oct  1 09:10:24 np0005464214 priceless_lewin[95813]:    "2": [
Oct  1 09:10:24 np0005464214 priceless_lewin[95813]:        {
Oct  1 09:10:24 np0005464214 priceless_lewin[95813]:            "devices": [
Oct  1 09:10:24 np0005464214 priceless_lewin[95813]:                "/dev/loop5"
Oct  1 09:10:24 np0005464214 priceless_lewin[95813]:            ],
Oct  1 09:10:24 np0005464214 priceless_lewin[95813]:            "lv_name": "ceph_lv2",
Oct  1 09:10:24 np0005464214 priceless_lewin[95813]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  1 09:10:24 np0005464214 priceless_lewin[95813]:            "lv_size": "21470642176",
Oct  1 09:10:24 np0005464214 priceless_lewin[95813]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c4c937e2-a8a8-47c3-af37-fdedb6fff1f9,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 09:10:24 np0005464214 priceless_lewin[95813]:            "lv_uuid": "mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK",
Oct  1 09:10:24 np0005464214 priceless_lewin[95813]:            "name": "ceph_lv2",
Oct  1 09:10:24 np0005464214 priceless_lewin[95813]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  1 09:10:24 np0005464214 priceless_lewin[95813]:            "tags": {
Oct  1 09:10:24 np0005464214 priceless_lewin[95813]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  1 09:10:24 np0005464214 priceless_lewin[95813]:                "ceph.block_uuid": "mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK",
Oct  1 09:10:24 np0005464214 priceless_lewin[95813]:                "ceph.cephx_lockbox_secret": "",
Oct  1 09:10:24 np0005464214 priceless_lewin[95813]:                "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 09:10:24 np0005464214 priceless_lewin[95813]:                "ceph.cluster_name": "ceph",
Oct  1 09:10:24 np0005464214 priceless_lewin[95813]:                "ceph.crush_device_class": "",
Oct  1 09:10:24 np0005464214 priceless_lewin[95813]:                "ceph.encrypted": "0",
Oct  1 09:10:24 np0005464214 priceless_lewin[95813]:                "ceph.osd_fsid": "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9",
Oct  1 09:10:24 np0005464214 priceless_lewin[95813]:                "ceph.osd_id": "2",
Oct  1 09:10:24 np0005464214 priceless_lewin[95813]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 09:10:24 np0005464214 priceless_lewin[95813]:                "ceph.type": "block",
Oct  1 09:10:24 np0005464214 priceless_lewin[95813]:                "ceph.vdo": "0"
Oct  1 09:10:24 np0005464214 priceless_lewin[95813]:            },
Oct  1 09:10:24 np0005464214 priceless_lewin[95813]:            "type": "block",
Oct  1 09:10:24 np0005464214 priceless_lewin[95813]:            "vg_name": "ceph_vg2"
Oct  1 09:10:24 np0005464214 priceless_lewin[95813]:        }
Oct  1 09:10:24 np0005464214 priceless_lewin[95813]:    ]
Oct  1 09:10:24 np0005464214 priceless_lewin[95813]: }
Oct  1 09:10:24 np0005464214 systemd[1]: libpod-af5106334232a5b7b95632fc271ba5aebf2788deeb95a5a1f3e9fd32f1989e4a.scope: Deactivated successfully.
Oct  1 09:10:24 np0005464214 podman[95785]: 2025-10-01 13:10:24.277943065 +0000 UTC m=+0.950847069 container died af5106334232a5b7b95632fc271ba5aebf2788deeb95a5a1f3e9fd32f1989e4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_lewin, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0)
Oct  1 09:10:24 np0005464214 systemd[1]: var-lib-containers-storage-overlay-06464dec8ecda13957a9ec6d3b413d3d0e87be9afa10f362bc77912197155a11-merged.mount: Deactivated successfully.
Oct  1 09:10:24 np0005464214 podman[95785]: 2025-10-01 13:10:24.344247393 +0000 UTC m=+1.017151397 container remove af5106334232a5b7b95632fc271ba5aebf2788deeb95a5a1f3e9fd32f1989e4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_lewin, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 09:10:24 np0005464214 systemd[1]: libpod-conmon-af5106334232a5b7b95632fc271ba5aebf2788deeb95a5a1f3e9fd32f1989e4a.scope: Deactivated successfully.
Oct  1 09:10:24 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e26 do_prune osdmap full prune enabled
Oct  1 09:10:24 np0005464214 ceph-mon[74802]: from='client.? 192.168.122.100:0/2650268208' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]: dispatch
Oct  1 09:10:24 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2650268208' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]': finished
Oct  1 09:10:24 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e27 e27: 3 total, 3 up, 3 in
Oct  1 09:10:24 np0005464214 zealous_sinoussi[95808]: enabled application 'rbd' on pool 'images'
Oct  1 09:10:24 np0005464214 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e27: 3 total, 3 up, 3 in
Oct  1 09:10:25 np0005464214 systemd[1]: libpod-b883b701717dc84c9ae232199017fdc418305d359c752d792ce35769028373ca.scope: Deactivated successfully.
Oct  1 09:10:25 np0005464214 podman[95777]: 2025-10-01 13:10:25.005724284 +0000 UTC m=+1.691622842 container died b883b701717dc84c9ae232199017fdc418305d359c752d792ce35769028373ca (image=quay.io/ceph/ceph:v18, name=zealous_sinoussi, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Oct  1 09:10:25 np0005464214 systemd[1]: var-lib-containers-storage-overlay-176131b6e05374f4257997c5f5f93b0d63c3890a0ebddbcc24e029ec34140c6d-merged.mount: Deactivated successfully.
Oct  1 09:10:25 np0005464214 podman[95777]: 2025-10-01 13:10:25.070354235 +0000 UTC m=+1.756252793 container remove b883b701717dc84c9ae232199017fdc418305d359c752d792ce35769028373ca (image=quay.io/ceph/ceph:v18, name=zealous_sinoussi, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 09:10:25 np0005464214 podman[96001]: 2025-10-01 13:10:25.075879199 +0000 UTC m=+0.082867482 container create 9ba82bcfe60a929203cbd459f460cd44284ab7112e2bcd5205bc01175316bef0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_poitras, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct  1 09:10:25 np0005464214 systemd[1]: libpod-conmon-b883b701717dc84c9ae232199017fdc418305d359c752d792ce35769028373ca.scope: Deactivated successfully.
Oct  1 09:10:25 np0005464214 systemd[1]: Started libpod-conmon-9ba82bcfe60a929203cbd459f460cd44284ab7112e2bcd5205bc01175316bef0.scope.
Oct  1 09:10:25 np0005464214 podman[96001]: 2025-10-01 13:10:25.025663433 +0000 UTC m=+0.032651766 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 09:10:25 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:10:25 np0005464214 podman[96001]: 2025-10-01 13:10:25.243792293 +0000 UTC m=+0.250780576 container init 9ba82bcfe60a929203cbd459f460cd44284ab7112e2bcd5205bc01175316bef0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_poitras, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 09:10:25 np0005464214 podman[96001]: 2025-10-01 13:10:25.254374781 +0000 UTC m=+0.261363034 container start 9ba82bcfe60a929203cbd459f460cd44284ab7112e2bcd5205bc01175316bef0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_poitras, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 09:10:25 np0005464214 affectionate_poitras[96029]: 167 167
Oct  1 09:10:25 np0005464214 systemd[1]: libpod-9ba82bcfe60a929203cbd459f460cd44284ab7112e2bcd5205bc01175316bef0.scope: Deactivated successfully.
Oct  1 09:10:25 np0005464214 podman[96001]: 2025-10-01 13:10:25.340269987 +0000 UTC m=+0.347258270 container attach 9ba82bcfe60a929203cbd459f460cd44284ab7112e2bcd5205bc01175316bef0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_poitras, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 09:10:25 np0005464214 podman[96001]: 2025-10-01 13:10:25.341362087 +0000 UTC m=+0.348350400 container died 9ba82bcfe60a929203cbd459f460cd44284ab7112e2bcd5205bc01175316bef0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_poitras, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 09:10:25 np0005464214 systemd[1]: var-lib-containers-storage-overlay-0491385ffe27c5b81cbfdae6ff9e59f8f0e5133a97fc1c704adf0180782cff2c-merged.mount: Deactivated successfully.
Oct  1 09:10:25 np0005464214 podman[96001]: 2025-10-01 13:10:25.401697848 +0000 UTC m=+0.408686121 container remove 9ba82bcfe60a929203cbd459f460cd44284ab7112e2bcd5205bc01175316bef0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_poitras, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 09:10:25 np0005464214 systemd[1]: libpod-conmon-9ba82bcfe60a929203cbd459f460cd44284ab7112e2bcd5205bc01175316bef0.scope: Deactivated successfully.
Oct  1 09:10:25 np0005464214 ceph-mon[74802]: log_channel(cluster) log [WRN] : Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Oct  1 09:10:25 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e27 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:10:25 np0005464214 python3[96060]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid eb4b6ead-01d1-53b3-a52a-47dcc600555f -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable cephfs.cephfs.meta cephfs _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  1 09:10:25 np0005464214 podman[96076]: 2025-10-01 13:10:25.586902756 +0000 UTC m=+0.098843860 container create a33e89e902bc37a913c446e7456180bca6c354c385d8a217280fc52b4be472bc (image=quay.io/ceph/ceph:v18, name=determined_hoover, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 09:10:25 np0005464214 podman[96076]: 2025-10-01 13:10:25.535883407 +0000 UTC m=+0.047824551 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  1 09:10:25 np0005464214 systemd[1]: Started libpod-conmon-a33e89e902bc37a913c446e7456180bca6c354c385d8a217280fc52b4be472bc.scope.
Oct  1 09:10:25 np0005464214 podman[96094]: 2025-10-01 13:10:25.676218109 +0000 UTC m=+0.097308857 container create 1c0fa0c614292cb4290881f5419bbb587253d79247ddabf0c6d72b38b78b0469 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_mestorf, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct  1 09:10:25 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:10:25 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a805ce5f7208cb547f66bfabf08a26109425c1235bfcb4aef1c3663d096e837d/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 09:10:25 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a805ce5f7208cb547f66bfabf08a26109425c1235bfcb4aef1c3663d096e837d/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 09:10:25 np0005464214 systemd[1]: Started libpod-conmon-1c0fa0c614292cb4290881f5419bbb587253d79247ddabf0c6d72b38b78b0469.scope.
Oct  1 09:10:25 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:10:25 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e1a7245f6628d825c2d7d019b66aa34416de3dce002a1222dd823a77223c7d31/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 09:10:25 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e1a7245f6628d825c2d7d019b66aa34416de3dce002a1222dd823a77223c7d31/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 09:10:25 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e1a7245f6628d825c2d7d019b66aa34416de3dce002a1222dd823a77223c7d31/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 09:10:25 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v67: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:10:25 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e1a7245f6628d825c2d7d019b66aa34416de3dce002a1222dd823a77223c7d31/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 09:10:25 np0005464214 podman[96094]: 2025-10-01 13:10:25.647868204 +0000 UTC m=+0.068959052 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 09:10:25 np0005464214 podman[96094]: 2025-10-01 13:10:25.74052446 +0000 UTC m=+0.161615258 container init 1c0fa0c614292cb4290881f5419bbb587253d79247ddabf0c6d72b38b78b0469 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_mestorf, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Oct  1 09:10:25 np0005464214 podman[96076]: 2025-10-01 13:10:25.743808792 +0000 UTC m=+0.255749876 container init a33e89e902bc37a913c446e7456180bca6c354c385d8a217280fc52b4be472bc (image=quay.io/ceph/ceph:v18, name=determined_hoover, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 09:10:25 np0005464214 podman[96076]: 2025-10-01 13:10:25.750616522 +0000 UTC m=+0.262557596 container start a33e89e902bc37a913c446e7456180bca6c354c385d8a217280fc52b4be472bc (image=quay.io/ceph/ceph:v18, name=determined_hoover, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 09:10:25 np0005464214 podman[96076]: 2025-10-01 13:10:25.754880452 +0000 UTC m=+0.266821546 container attach a33e89e902bc37a913c446e7456180bca6c354c385d8a217280fc52b4be472bc (image=quay.io/ceph/ceph:v18, name=determined_hoover, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct  1 09:10:25 np0005464214 podman[96094]: 2025-10-01 13:10:25.76371086 +0000 UTC m=+0.184801648 container start 1c0fa0c614292cb4290881f5419bbb587253d79247ddabf0c6d72b38b78b0469 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_mestorf, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Oct  1 09:10:25 np0005464214 podman[96094]: 2025-10-01 13:10:25.768497614 +0000 UTC m=+0.189588452 container attach 1c0fa0c614292cb4290881f5419bbb587253d79247ddabf0c6d72b38b78b0469 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_mestorf, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct  1 09:10:25 np0005464214 ceph-mon[74802]: from='client.? 192.168.122.100:0/2650268208' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]': finished
Oct  1 09:10:25 np0005464214 ceph-mon[74802]: Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Oct  1 09:10:26 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"} v 0) v1
Oct  1 09:10:26 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2042548635' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]: dispatch
Oct  1 09:10:26 np0005464214 friendly_mestorf[96115]: {
Oct  1 09:10:26 np0005464214 friendly_mestorf[96115]:    "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982": {
Oct  1 09:10:26 np0005464214 friendly_mestorf[96115]:        "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 09:10:26 np0005464214 friendly_mestorf[96115]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  1 09:10:26 np0005464214 friendly_mestorf[96115]:        "osd_id": 0,
Oct  1 09:10:26 np0005464214 friendly_mestorf[96115]:        "osd_uuid": "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982",
Oct  1 09:10:26 np0005464214 friendly_mestorf[96115]:        "type": "bluestore"
Oct  1 09:10:26 np0005464214 friendly_mestorf[96115]:    },
Oct  1 09:10:26 np0005464214 friendly_mestorf[96115]:    "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9": {
Oct  1 09:10:26 np0005464214 friendly_mestorf[96115]:        "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 09:10:26 np0005464214 friendly_mestorf[96115]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  1 09:10:26 np0005464214 friendly_mestorf[96115]:        "osd_id": 2,
Oct  1 09:10:26 np0005464214 friendly_mestorf[96115]:        "osd_uuid": "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9",
Oct  1 09:10:26 np0005464214 friendly_mestorf[96115]:        "type": "bluestore"
Oct  1 09:10:26 np0005464214 friendly_mestorf[96115]:    },
Oct  1 09:10:26 np0005464214 friendly_mestorf[96115]:    "f5852bc7-e830-489a-b8a9-42dfbbe71426": {
Oct  1 09:10:26 np0005464214 friendly_mestorf[96115]:        "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 09:10:26 np0005464214 friendly_mestorf[96115]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  1 09:10:26 np0005464214 friendly_mestorf[96115]:        "osd_id": 1,
Oct  1 09:10:26 np0005464214 friendly_mestorf[96115]:        "osd_uuid": "f5852bc7-e830-489a-b8a9-42dfbbe71426",
Oct  1 09:10:26 np0005464214 friendly_mestorf[96115]:        "type": "bluestore"
Oct  1 09:10:26 np0005464214 friendly_mestorf[96115]:    }
Oct  1 09:10:26 np0005464214 friendly_mestorf[96115]: }
Oct  1 09:10:26 np0005464214 systemd[1]: libpod-1c0fa0c614292cb4290881f5419bbb587253d79247ddabf0c6d72b38b78b0469.scope: Deactivated successfully.
Oct  1 09:10:26 np0005464214 podman[96094]: 2025-10-01 13:10:26.868143621 +0000 UTC m=+1.289234409 container died 1c0fa0c614292cb4290881f5419bbb587253d79247ddabf0c6d72b38b78b0469 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_mestorf, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 09:10:26 np0005464214 systemd[1]: libpod-1c0fa0c614292cb4290881f5419bbb587253d79247ddabf0c6d72b38b78b0469.scope: Consumed 1.115s CPU time.
Oct  1 09:10:26 np0005464214 systemd[1]: var-lib-containers-storage-overlay-e1a7245f6628d825c2d7d019b66aa34416de3dce002a1222dd823a77223c7d31-merged.mount: Deactivated successfully.
Oct  1 09:10:26 np0005464214 podman[96094]: 2025-10-01 13:10:26.940434396 +0000 UTC m=+1.361525184 container remove 1c0fa0c614292cb4290881f5419bbb587253d79247ddabf0c6d72b38b78b0469 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_mestorf, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Oct  1 09:10:26 np0005464214 systemd[1]: libpod-conmon-1c0fa0c614292cb4290881f5419bbb587253d79247ddabf0c6d72b38b78b0469.scope: Deactivated successfully.
Oct  1 09:10:26 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  1 09:10:26 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:10:26 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  1 09:10:27 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:10:27 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e27 do_prune osdmap full prune enabled
Oct  1 09:10:27 np0005464214 ceph-mon[74802]: from='client.? 192.168.122.100:0/2042548635' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]: dispatch
Oct  1 09:10:27 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:10:27 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2042548635' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]': finished
Oct  1 09:10:27 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e28 e28: 3 total, 3 up, 3 in
Oct  1 09:10:27 np0005464214 determined_hoover[96110]: enabled application 'cephfs' on pool 'cephfs.cephfs.meta'
Oct  1 09:10:27 np0005464214 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e28: 3 total, 3 up, 3 in
Oct  1 09:10:27 np0005464214 systemd[1]: libpod-a33e89e902bc37a913c446e7456180bca6c354c385d8a217280fc52b4be472bc.scope: Deactivated successfully.
Oct  1 09:10:27 np0005464214 podman[96076]: 2025-10-01 13:10:27.048844034 +0000 UTC m=+1.560785128 container died a33e89e902bc37a913c446e7456180bca6c354c385d8a217280fc52b4be472bc (image=quay.io/ceph/ceph:v18, name=determined_hoover, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 09:10:27 np0005464214 systemd[1]: var-lib-containers-storage-overlay-a805ce5f7208cb547f66bfabf08a26109425c1235bfcb4aef1c3663d096e837d-merged.mount: Deactivated successfully.
Oct  1 09:10:27 np0005464214 podman[96076]: 2025-10-01 13:10:27.198184557 +0000 UTC m=+1.710125631 container remove a33e89e902bc37a913c446e7456180bca6c354c385d8a217280fc52b4be472bc (image=quay.io/ceph/ceph:v18, name=determined_hoover, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct  1 09:10:27 np0005464214 systemd[1]: libpod-conmon-a33e89e902bc37a913c446e7456180bca6c354c385d8a217280fc52b4be472bc.scope: Deactivated successfully.
Oct  1 09:10:27 np0005464214 python3[96268]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid eb4b6ead-01d1-53b3-a52a-47dcc600555f -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable cephfs.cephfs.data cephfs _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  1 09:10:27 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v69: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:10:27 np0005464214 podman[96269]: 2025-10-01 13:10:27.640685834 +0000 UTC m=+0.022801040 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  1 09:10:27 np0005464214 podman[96269]: 2025-10-01 13:10:27.750607893 +0000 UTC m=+0.132723139 container create c2587b4608164a5c94da01a1afb93400079694daab26ed1d315ca487ea7949b1 (image=quay.io/ceph/ceph:v18, name=clever_khayyam, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Oct  1 09:10:27 np0005464214 systemd[1]: Started libpod-conmon-c2587b4608164a5c94da01a1afb93400079694daab26ed1d315ca487ea7949b1.scope.
Oct  1 09:10:27 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:10:27 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/954e3846d215df75906e7da133f9234c707583d7529d7fa3167b22dd8c96b5ed/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 09:10:27 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/954e3846d215df75906e7da133f9234c707583d7529d7fa3167b22dd8c96b5ed/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 09:10:27 np0005464214 podman[96269]: 2025-10-01 13:10:27.901683836 +0000 UTC m=+0.283799112 container init c2587b4608164a5c94da01a1afb93400079694daab26ed1d315ca487ea7949b1 (image=quay.io/ceph/ceph:v18, name=clever_khayyam, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct  1 09:10:27 np0005464214 podman[96269]: 2025-10-01 13:10:27.912404586 +0000 UTC m=+0.294519832 container start c2587b4608164a5c94da01a1afb93400079694daab26ed1d315ca487ea7949b1 (image=quay.io/ceph/ceph:v18, name=clever_khayyam, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Oct  1 09:10:27 np0005464214 podman[96269]: 2025-10-01 13:10:27.921151631 +0000 UTC m=+0.303266847 container attach c2587b4608164a5c94da01a1afb93400079694daab26ed1d315ca487ea7949b1 (image=quay.io/ceph/ceph:v18, name=clever_khayyam, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Oct  1 09:10:28 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:10:28 np0005464214 ceph-mon[74802]: from='client.? 192.168.122.100:0/2042548635' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]': finished
Oct  1 09:10:28 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"} v 0) v1
Oct  1 09:10:28 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1497231379' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]: dispatch
Oct  1 09:10:29 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e28 do_prune osdmap full prune enabled
Oct  1 09:10:29 np0005464214 ceph-mon[74802]: from='client.? 192.168.122.100:0/1497231379' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]: dispatch
Oct  1 09:10:29 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1497231379' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]': finished
Oct  1 09:10:29 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e29 e29: 3 total, 3 up, 3 in
Oct  1 09:10:29 np0005464214 clever_khayyam[96285]: enabled application 'cephfs' on pool 'cephfs.cephfs.data'
Oct  1 09:10:29 np0005464214 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e29: 3 total, 3 up, 3 in
Oct  1 09:10:29 np0005464214 systemd[1]: libpod-c2587b4608164a5c94da01a1afb93400079694daab26ed1d315ca487ea7949b1.scope: Deactivated successfully.
Oct  1 09:10:29 np0005464214 podman[96269]: 2025-10-01 13:10:29.300537776 +0000 UTC m=+1.682653012 container died c2587b4608164a5c94da01a1afb93400079694daab26ed1d315ca487ea7949b1 (image=quay.io/ceph/ceph:v18, name=clever_khayyam, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct  1 09:10:29 np0005464214 systemd[1]: var-lib-containers-storage-overlay-954e3846d215df75906e7da133f9234c707583d7529d7fa3167b22dd8c96b5ed-merged.mount: Deactivated successfully.
Oct  1 09:10:29 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v71: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:10:29 np0005464214 podman[96269]: 2025-10-01 13:10:29.875663718 +0000 UTC m=+2.257778964 container remove c2587b4608164a5c94da01a1afb93400079694daab26ed1d315ca487ea7949b1 (image=quay.io/ceph/ceph:v18, name=clever_khayyam, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 09:10:29 np0005464214 systemd[1]: libpod-conmon-c2587b4608164a5c94da01a1afb93400079694daab26ed1d315ca487ea7949b1.scope: Deactivated successfully.
Oct  1 09:10:30 np0005464214 ceph-mon[74802]: log_channel(cluster) log [INF] : Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Oct  1 09:10:30 np0005464214 ceph-mon[74802]: log_channel(cluster) log [INF] : Cluster is now healthy
Oct  1 09:10:30 np0005464214 ceph-mon[74802]: from='client.? 192.168.122.100:0/1497231379' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]': finished
Oct  1 09:10:30 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e29 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:10:30 np0005464214 python3[96401]: ansible-ansible.legacy.stat Invoked with path=/tmp/ceph_rgw.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct  1 09:10:31 np0005464214 python3[96472]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759324230.557655-33861-14683110072787/source dest=/tmp/ceph_rgw.yml mode=0644 force=True follow=False _original_basename=ceph_rgw.yml.j2 checksum=0a1ea65aada399f80274d3cc2047646f2797712b backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 09:10:31 np0005464214 ceph-mon[74802]: Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Oct  1 09:10:31 np0005464214 ceph-mon[74802]: Cluster is now healthy
Oct  1 09:10:31 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v72: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:10:32 np0005464214 python3[96574]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct  1 09:10:32 np0005464214 python3[96649]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759324231.6416728-33875-256645427039545/source dest=/home/ceph-admin/assimilate_ceph.conf owner=167 group=167 mode=0644 follow=False _original_basename=ceph_rgw.conf.j2 checksum=897ffa25907ca0d218e2daaa59ac7825cb09ab42 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 09:10:32 np0005464214 python3[96699]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid eb4b6ead-01d1-53b3-a52a-47dcc600555f -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config assimilate-conf -i /home/assimilate_ceph.conf#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  1 09:10:33 np0005464214 podman[96700]: 2025-10-01 13:10:33.011876911 +0000 UTC m=+0.119773228 container create 042f2e984693266d678296efb8c8b0b2275b7c1bb47d794eb01afe063231bb5e (image=quay.io/ceph/ceph:v18, name=musing_germain, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Oct  1 09:10:33 np0005464214 podman[96700]: 2025-10-01 13:10:32.929866873 +0000 UTC m=+0.037763170 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  1 09:10:33 np0005464214 systemd[1]: Started libpod-conmon-042f2e984693266d678296efb8c8b0b2275b7c1bb47d794eb01afe063231bb5e.scope.
Oct  1 09:10:33 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:10:33 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/864e3d1a8a4fb910e762aecbb9e382f4498e2835aab8c656eaec4c4ca35c6d88/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct  1 09:10:33 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/864e3d1a8a4fb910e762aecbb9e382f4498e2835aab8c656eaec4c4ca35c6d88/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 09:10:33 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/864e3d1a8a4fb910e762aecbb9e382f4498e2835aab8c656eaec4c4ca35c6d88/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 09:10:33 np0005464214 podman[96700]: 2025-10-01 13:10:33.235913237 +0000 UTC m=+0.343809604 container init 042f2e984693266d678296efb8c8b0b2275b7c1bb47d794eb01afe063231bb5e (image=quay.io/ceph/ceph:v18, name=musing_germain, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Oct  1 09:10:33 np0005464214 podman[96700]: 2025-10-01 13:10:33.244686272 +0000 UTC m=+0.352582579 container start 042f2e984693266d678296efb8c8b0b2275b7c1bb47d794eb01afe063231bb5e (image=quay.io/ceph/ceph:v18, name=musing_germain, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 09:10:33 np0005464214 podman[96700]: 2025-10-01 13:10:33.36488708 +0000 UTC m=+0.472783457 container attach 042f2e984693266d678296efb8c8b0b2275b7c1bb47d794eb01afe063231bb5e (image=quay.io/ceph/ceph:v18, name=musing_germain, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 09:10:33 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v73: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:10:33 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config assimilate-conf"} v 0) v1
Oct  1 09:10:33 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1902963374' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Oct  1 09:10:33 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1902963374' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Oct  1 09:10:33 np0005464214 musing_germain[96716]: 
Oct  1 09:10:33 np0005464214 musing_germain[96716]: [global]
Oct  1 09:10:33 np0005464214 musing_germain[96716]: #011fsid = eb4b6ead-01d1-53b3-a52a-47dcc600555f
Oct  1 09:10:33 np0005464214 musing_germain[96716]: #011mon_host = 192.168.122.100
Oct  1 09:10:33 np0005464214 systemd[1]: libpod-042f2e984693266d678296efb8c8b0b2275b7c1bb47d794eb01afe063231bb5e.scope: Deactivated successfully.
Oct  1 09:10:33 np0005464214 podman[96700]: 2025-10-01 13:10:33.881249877 +0000 UTC m=+0.989146184 container died 042f2e984693266d678296efb8c8b0b2275b7c1bb47d794eb01afe063231bb5e (image=quay.io/ceph/ceph:v18, name=musing_germain, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct  1 09:10:34 np0005464214 systemd[1]: var-lib-containers-storage-overlay-864e3d1a8a4fb910e762aecbb9e382f4498e2835aab8c656eaec4c4ca35c6d88-merged.mount: Deactivated successfully.
Oct  1 09:10:34 np0005464214 podman[96700]: 2025-10-01 13:10:34.164130472 +0000 UTC m=+1.272026749 container remove 042f2e984693266d678296efb8c8b0b2275b7c1bb47d794eb01afe063231bb5e (image=quay.io/ceph/ceph:v18, name=musing_germain, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default)
Oct  1 09:10:34 np0005464214 systemd[1]: libpod-conmon-042f2e984693266d678296efb8c8b0b2275b7c1bb47d794eb01afe063231bb5e.scope: Deactivated successfully.
Oct  1 09:10:34 np0005464214 python3[96904]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid eb4b6ead-01d1-53b3-a52a-47dcc600555f -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config-key set ssl_option no_sslv2:sslv3:no_tlsv1:no_tlsv1_1#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  1 09:10:34 np0005464214 ceph-mon[74802]: from='client.? 192.168.122.100:0/1902963374' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Oct  1 09:10:34 np0005464214 ceph-mon[74802]: from='client.? 192.168.122.100:0/1902963374' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Oct  1 09:10:34 np0005464214 podman[96937]: 2025-10-01 13:10:34.658636895 +0000 UTC m=+0.035478345 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  1 09:10:34 np0005464214 podman[96937]: 2025-10-01 13:10:34.834540994 +0000 UTC m=+0.211382384 container create 3510c8869ee636b5d3d5c35f10ba43df4ea7face350445e713ab64d6cf969d80 (image=quay.io/ceph/ceph:v18, name=trusting_buck, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 09:10:35 np0005464214 systemd[1]: Started libpod-conmon-3510c8869ee636b5d3d5c35f10ba43df4ea7face350445e713ab64d6cf969d80.scope.
Oct  1 09:10:35 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:10:35 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/08b4236ebf45e3dd48fc41fc500e4712c2cdabe8850cf88ec18e8ab57dfd2abe/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 09:10:35 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/08b4236ebf45e3dd48fc41fc500e4712c2cdabe8850cf88ec18e8ab57dfd2abe/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 09:10:35 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/08b4236ebf45e3dd48fc41fc500e4712c2cdabe8850cf88ec18e8ab57dfd2abe/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct  1 09:10:35 np0005464214 podman[96963]: 2025-10-01 13:10:35.403236685 +0000 UTC m=+0.686463822 container exec dfadbb96d7d51fa7c2ed721145616ced603621ae12bae5125feaf16532ef7320 (image=quay.io/ceph/ceph:v18, name=ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-mon-compute-0, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Oct  1 09:10:35 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e29 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:10:35 np0005464214 podman[96937]: 2025-10-01 13:10:35.556503529 +0000 UTC m=+0.933344969 container init 3510c8869ee636b5d3d5c35f10ba43df4ea7face350445e713ab64d6cf969d80 (image=quay.io/ceph/ceph:v18, name=trusting_buck, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 09:10:35 np0005464214 podman[96963]: 2025-10-01 13:10:35.564160674 +0000 UTC m=+0.847387771 container exec_died dfadbb96d7d51fa7c2ed721145616ced603621ae12bae5125feaf16532ef7320 (image=quay.io/ceph/ceph:v18, name=ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 09:10:35 np0005464214 podman[96937]: 2025-10-01 13:10:35.570842441 +0000 UTC m=+0.947683831 container start 3510c8869ee636b5d3d5c35f10ba43df4ea7face350445e713ab64d6cf969d80 (image=quay.io/ceph/ceph:v18, name=trusting_buck, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Oct  1 09:10:35 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v74: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:10:35 np0005464214 podman[96937]: 2025-10-01 13:10:35.799315502 +0000 UTC m=+1.176156892 container attach 3510c8869ee636b5d3d5c35f10ba43df4ea7face350445e713ab64d6cf969d80 (image=quay.io/ceph/ceph:v18, name=trusting_buck, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 09:10:36 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=ssl_option}] v 0) v1
Oct  1 09:10:36 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3256476136' entity='client.admin' 
Oct  1 09:10:36 np0005464214 trusting_buck[96982]: set ssl_option
Oct  1 09:10:36 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  1 09:10:36 np0005464214 podman[96937]: 2025-10-01 13:10:36.335718459 +0000 UTC m=+1.712559819 container died 3510c8869ee636b5d3d5c35f10ba43df4ea7face350445e713ab64d6cf969d80 (image=quay.io/ceph/ceph:v18, name=trusting_buck, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 09:10:36 np0005464214 systemd[1]: libpod-3510c8869ee636b5d3d5c35f10ba43df4ea7face350445e713ab64d6cf969d80.scope: Deactivated successfully.
Oct  1 09:10:36 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:10:36 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  1 09:10:36 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:10:36 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  1 09:10:36 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  1 09:10:36 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  1 09:10:36 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  1 09:10:36 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  1 09:10:36 np0005464214 systemd[1]: var-lib-containers-storage-overlay-08b4236ebf45e3dd48fc41fc500e4712c2cdabe8850cf88ec18e8ab57dfd2abe-merged.mount: Deactivated successfully.
Oct  1 09:10:36 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:10:36 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev a037598a-2cc6-437a-a366-dfdb0a649ade does not exist
Oct  1 09:10:36 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev d1f1b12b-645f-4ff5-9f21-c228683686cf does not exist
Oct  1 09:10:36 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev 5c95ccf9-1e6f-493a-ae24-428ecf736505 does not exist
Oct  1 09:10:36 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  1 09:10:36 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  1 09:10:36 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  1 09:10:36 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  1 09:10:36 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  1 09:10:36 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  1 09:10:37 np0005464214 podman[96937]: 2025-10-01 13:10:37.245082259 +0000 UTC m=+2.621923609 container remove 3510c8869ee636b5d3d5c35f10ba43df4ea7face350445e713ab64d6cf969d80 (image=quay.io/ceph/ceph:v18, name=trusting_buck, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Oct  1 09:10:37 np0005464214 systemd[1]: libpod-conmon-3510c8869ee636b5d3d5c35f10ba43df4ea7face350445e713ab64d6cf969d80.scope: Deactivated successfully.
Oct  1 09:10:37 np0005464214 ceph-mon[74802]: from='client.? 192.168.122.100:0/3256476136' entity='client.admin' 
Oct  1 09:10:37 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:10:37 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:10:37 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  1 09:10:37 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:10:37 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  1 09:10:37 np0005464214 podman[97294]: 2025-10-01 13:10:37.574415904 +0000 UTC m=+0.078663158 container create abdb39ebc5a5cd5ec407cd37d8a579a1e2ffc72eff8cebdb7b2636742b84ca60 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_mccarthy, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 09:10:37 np0005464214 podman[97294]: 2025-10-01 13:10:37.517294773 +0000 UTC m=+0.021542107 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 09:10:37 np0005464214 python3[97289]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid eb4b6ead-01d1-53b3-a52a-47dcc600555f -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  1 09:10:37 np0005464214 systemd[1]: Started libpod-conmon-abdb39ebc5a5cd5ec407cd37d8a579a1e2ffc72eff8cebdb7b2636742b84ca60.scope.
Oct  1 09:10:37 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v75: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:10:37 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:10:37 np0005464214 podman[97308]: 2025-10-01 13:10:37.724574319 +0000 UTC m=+0.038948017 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  1 09:10:37 np0005464214 podman[97294]: 2025-10-01 13:10:37.859291775 +0000 UTC m=+0.363539099 container init abdb39ebc5a5cd5ec407cd37d8a579a1e2ffc72eff8cebdb7b2636742b84ca60 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_mccarthy, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 09:10:37 np0005464214 podman[97294]: 2025-10-01 13:10:37.871411824 +0000 UTC m=+0.375659058 container start abdb39ebc5a5cd5ec407cd37d8a579a1e2ffc72eff8cebdb7b2636742b84ca60 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_mccarthy, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 09:10:37 np0005464214 gracious_mccarthy[97322]: 167 167
Oct  1 09:10:37 np0005464214 systemd[1]: libpod-abdb39ebc5a5cd5ec407cd37d8a579a1e2ffc72eff8cebdb7b2636742b84ca60.scope: Deactivated successfully.
Oct  1 09:10:37 np0005464214 podman[97294]: 2025-10-01 13:10:37.972219906 +0000 UTC m=+0.476467170 container attach abdb39ebc5a5cd5ec407cd37d8a579a1e2ffc72eff8cebdb7b2636742b84ca60 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_mccarthy, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct  1 09:10:37 np0005464214 podman[97294]: 2025-10-01 13:10:37.972535586 +0000 UTC m=+0.476782820 container died abdb39ebc5a5cd5ec407cd37d8a579a1e2ffc72eff8cebdb7b2636742b84ca60 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_mccarthy, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 09:10:38 np0005464214 podman[97308]: 2025-10-01 13:10:38.225359709 +0000 UTC m=+0.539733397 container create c751374210ced5d181b4655357e19b32e94208049c2fcf65d5b396a47eeedfd1 (image=quay.io/ceph/ceph:v18, name=ecstatic_stonebraker, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct  1 09:10:38 np0005464214 systemd[1]: Started libpod-conmon-c751374210ced5d181b4655357e19b32e94208049c2fcf65d5b396a47eeedfd1.scope.
Oct  1 09:10:38 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:10:38 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/417afac3aeeec46aa48dbe95efd123c7b9a6c66f3decf76af1e8fb6b5ab8d66e/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 09:10:38 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/417afac3aeeec46aa48dbe95efd123c7b9a6c66f3decf76af1e8fb6b5ab8d66e/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 09:10:38 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/417afac3aeeec46aa48dbe95efd123c7b9a6c66f3decf76af1e8fb6b5ab8d66e/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct  1 09:10:38 np0005464214 systemd[1]: var-lib-containers-storage-overlay-be3c959f10c45a45791b9a8e862a0109016ba0ce91054c95fefd280c7f7da204-merged.mount: Deactivated successfully.
Oct  1 09:10:38 np0005464214 podman[97294]: 2025-10-01 13:10:38.795947066 +0000 UTC m=+1.300194360 container remove abdb39ebc5a5cd5ec407cd37d8a579a1e2ffc72eff8cebdb7b2636742b84ca60 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_mccarthy, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct  1 09:10:39 np0005464214 podman[97308]: 2025-10-01 13:10:39.040609441 +0000 UTC m=+1.354983139 container init c751374210ced5d181b4655357e19b32e94208049c2fcf65d5b396a47eeedfd1 (image=quay.io/ceph/ceph:v18, name=ecstatic_stonebraker, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 09:10:39 np0005464214 podman[97308]: 2025-10-01 13:10:39.051993418 +0000 UTC m=+1.366367106 container start c751374210ced5d181b4655357e19b32e94208049c2fcf65d5b396a47eeedfd1 (image=quay.io/ceph/ceph:v18, name=ecstatic_stonebraker, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Oct  1 09:10:39 np0005464214 podman[97308]: 2025-10-01 13:10:39.087445168 +0000 UTC m=+1.401818836 container attach c751374210ced5d181b4655357e19b32e94208049c2fcf65d5b396a47eeedfd1 (image=quay.io/ceph/ceph:v18, name=ecstatic_stonebraker, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 09:10:39 np0005464214 podman[97351]: 2025-10-01 13:10:39.061447426 +0000 UTC m=+0.092556941 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 09:10:39 np0005464214 podman[97351]: 2025-10-01 13:10:39.174536052 +0000 UTC m=+0.205645587 container create df5cbf9966d008983ec9c497e60da91e7f4abe2d08ca0c8ff71a4b7044f79602 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_leavitt, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3)
Oct  1 09:10:39 np0005464214 systemd[1]: Started libpod-conmon-df5cbf9966d008983ec9c497e60da91e7f4abe2d08ca0c8ff71a4b7044f79602.scope.
Oct  1 09:10:39 np0005464214 systemd[1]: libpod-conmon-abdb39ebc5a5cd5ec407cd37d8a579a1e2ffc72eff8cebdb7b2636742b84ca60.scope: Deactivated successfully.
Oct  1 09:10:39 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:10:39 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c6ecc0111c92ddc719ec2fa5a9d83346cb3e31a350f58d0358a3460b824f2c0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 09:10:39 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c6ecc0111c92ddc719ec2fa5a9d83346cb3e31a350f58d0358a3460b824f2c0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 09:10:39 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c6ecc0111c92ddc719ec2fa5a9d83346cb3e31a350f58d0358a3460b824f2c0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 09:10:39 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c6ecc0111c92ddc719ec2fa5a9d83346cb3e31a350f58d0358a3460b824f2c0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 09:10:39 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c6ecc0111c92ddc719ec2fa5a9d83346cb3e31a350f58d0358a3460b824f2c0/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  1 09:10:39 np0005464214 podman[97351]: 2025-10-01 13:10:39.351063931 +0000 UTC m=+0.382173526 container init df5cbf9966d008983ec9c497e60da91e7f4abe2d08ca0c8ff71a4b7044f79602 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_leavitt, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct  1 09:10:39 np0005464214 podman[97351]: 2025-10-01 13:10:39.361608162 +0000 UTC m=+0.392717667 container start df5cbf9966d008983ec9c497e60da91e7f4abe2d08ca0c8ff71a4b7044f79602 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_leavitt, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct  1 09:10:39 np0005464214 podman[97351]: 2025-10-01 13:10:39.378666012 +0000 UTC m=+0.409775597 container attach df5cbf9966d008983ec9c497e60da91e7f4abe2d08ca0c8ff71a4b7044f79602 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_leavitt, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 09:10:39 np0005464214 ceph-mgr[75103]: log_channel(audit) log [DBG] : from='client.14244 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Oct  1 09:10:39 np0005464214 ceph-mgr[75103]: [cephadm INFO root] Saving service rgw.rgw spec with placement compute-0
Oct  1 09:10:39 np0005464214 ceph-mgr[75103]: log_channel(cephadm) log [INF] : Saving service rgw.rgw spec with placement compute-0
Oct  1 09:10:39 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0) v1
Oct  1 09:10:39 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:10:39 np0005464214 ecstatic_stonebraker[97341]: Scheduled rgw.rgw update...
Oct  1 09:10:39 np0005464214 systemd[1]: libpod-c751374210ced5d181b4655357e19b32e94208049c2fcf65d5b396a47eeedfd1.scope: Deactivated successfully.
Oct  1 09:10:39 np0005464214 podman[97308]: 2025-10-01 13:10:39.685866043 +0000 UTC m=+2.000239721 container died c751374210ced5d181b4655357e19b32e94208049c2fcf65d5b396a47eeedfd1 (image=quay.io/ceph/ceph:v18, name=ecstatic_stonebraker, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 09:10:39 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v76: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:10:39 np0005464214 systemd[1]: var-lib-containers-storage-overlay-417afac3aeeec46aa48dbe95efd123c7b9a6c66f3decf76af1e8fb6b5ab8d66e-merged.mount: Deactivated successfully.
Oct  1 09:10:40 np0005464214 podman[97308]: 2025-10-01 13:10:40.217565315 +0000 UTC m=+2.531939003 container remove c751374210ced5d181b4655357e19b32e94208049c2fcf65d5b396a47eeedfd1 (image=quay.io/ceph/ceph:v18, name=ecstatic_stonebraker, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True)
Oct  1 09:10:40 np0005464214 systemd[1]: libpod-conmon-c751374210ced5d181b4655357e19b32e94208049c2fcf65d5b396a47eeedfd1.scope: Deactivated successfully.
Oct  1 09:10:40 np0005464214 priceless_leavitt[97369]: --> passed data devices: 0 physical, 3 LVM
Oct  1 09:10:40 np0005464214 priceless_leavitt[97369]: --> relative data size: 1.0
Oct  1 09:10:40 np0005464214 priceless_leavitt[97369]: --> All data devices are unavailable
Oct  1 09:10:40 np0005464214 systemd[1]: libpod-df5cbf9966d008983ec9c497e60da91e7f4abe2d08ca0c8ff71a4b7044f79602.scope: Deactivated successfully.
Oct  1 09:10:40 np0005464214 podman[97351]: 2025-10-01 13:10:40.416741524 +0000 UTC m=+1.447851019 container died df5cbf9966d008983ec9c497e60da91e7f4abe2d08ca0c8ff71a4b7044f79602 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_leavitt, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 09:10:40 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e29 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:10:40 np0005464214 ceph-mon[74802]: Saving service rgw.rgw spec with placement compute-0
Oct  1 09:10:40 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:10:40 np0005464214 systemd[1]: var-lib-containers-storage-overlay-4c6ecc0111c92ddc719ec2fa5a9d83346cb3e31a350f58d0358a3460b824f2c0-merged.mount: Deactivated successfully.
Oct  1 09:10:41 np0005464214 podman[97351]: 2025-10-01 13:10:41.068519185 +0000 UTC m=+2.099628690 container remove df5cbf9966d008983ec9c497e60da91e7f4abe2d08ca0c8ff71a4b7044f79602 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_leavitt, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Oct  1 09:10:41 np0005464214 systemd[1]: libpod-conmon-df5cbf9966d008983ec9c497e60da91e7f4abe2d08ca0c8ff71a4b7044f79602.scope: Deactivated successfully.
Oct  1 09:10:41 np0005464214 python3[97518]: ansible-ansible.legacy.stat Invoked with path=/tmp/ceph_mds.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct  1 09:10:41 np0005464214 python3[97689]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759324240.9658973-33916-121971589978421/source dest=/tmp/ceph_mds.yml mode=0644 force=True follow=False _original_basename=ceph_mds.yml.j2 checksum=e359e26d9e42bc107a0de03375144cf8590b6f68 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 09:10:41 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v77: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:10:41 np0005464214 podman[97753]: 2025-10-01 13:10:41.780410726 +0000 UTC m=+0.066778416 container create caaeed2e1c214e0d80e0d10671aa9f1df33c7705180081eb8f8ee20d275b804e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_kowalevski, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 09:10:41 np0005464214 systemd[1]: Started libpod-conmon-caaeed2e1c214e0d80e0d10671aa9f1df33c7705180081eb8f8ee20d275b804e.scope.
Oct  1 09:10:41 np0005464214 podman[97753]: 2025-10-01 13:10:41.752947619 +0000 UTC m=+0.039315349 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 09:10:41 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:10:41 np0005464214 podman[97753]: 2025-10-01 13:10:41.869398018 +0000 UTC m=+0.155765688 container init caaeed2e1c214e0d80e0d10671aa9f1df33c7705180081eb8f8ee20d275b804e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_kowalevski, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct  1 09:10:41 np0005464214 podman[97753]: 2025-10-01 13:10:41.875628167 +0000 UTC m=+0.161995817 container start caaeed2e1c214e0d80e0d10671aa9f1df33c7705180081eb8f8ee20d275b804e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_kowalevski, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 09:10:41 np0005464214 podman[97753]: 2025-10-01 13:10:41.879742303 +0000 UTC m=+0.166109983 container attach caaeed2e1c214e0d80e0d10671aa9f1df33c7705180081eb8f8ee20d275b804e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_kowalevski, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct  1 09:10:41 np0005464214 affectionate_kowalevski[97769]: 167 167
Oct  1 09:10:41 np0005464214 systemd[1]: libpod-caaeed2e1c214e0d80e0d10671aa9f1df33c7705180081eb8f8ee20d275b804e.scope: Deactivated successfully.
Oct  1 09:10:41 np0005464214 podman[97753]: 2025-10-01 13:10:41.880634621 +0000 UTC m=+0.167002271 container died caaeed2e1c214e0d80e0d10671aa9f1df33c7705180081eb8f8ee20d275b804e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_kowalevski, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 09:10:41 np0005464214 systemd[1]: var-lib-containers-storage-overlay-6d77f45d999d3097a957b1249aebb13d063785b0e635a24e8609aa8b46337d40-merged.mount: Deactivated successfully.
Oct  1 09:10:41 np0005464214 podman[97753]: 2025-10-01 13:10:41.910089578 +0000 UTC m=+0.196457218 container remove caaeed2e1c214e0d80e0d10671aa9f1df33c7705180081eb8f8ee20d275b804e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_kowalevski, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  1 09:10:41 np0005464214 systemd[1]: libpod-conmon-caaeed2e1c214e0d80e0d10671aa9f1df33c7705180081eb8f8ee20d275b804e.scope: Deactivated successfully.
Oct  1 09:10:42 np0005464214 python3[97806]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_mds.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid eb4b6ead-01d1-53b3-a52a-47dcc600555f -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   fs volume create cephfs '--placement=compute-0 '#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  1 09:10:42 np0005464214 podman[97817]: 2025-10-01 13:10:42.075671884 +0000 UTC m=+0.032375658 container create 8cd38c93aa8c11551e2113fba56001672306b6284421ed3a527bd5780e897ea2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_galileo, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct  1 09:10:42 np0005464214 systemd[1]: Started libpod-conmon-8cd38c93aa8c11551e2113fba56001672306b6284421ed3a527bd5780e897ea2.scope.
Oct  1 09:10:42 np0005464214 podman[97828]: 2025-10-01 13:10:42.107465323 +0000 UTC m=+0.039503935 container create a88ac38f2f4a43eddcfbeaf2a1b5c869a2e1782ad195758dda608d51edeb14f0 (image=quay.io/ceph/ceph:v18, name=sad_yalow, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 09:10:42 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:10:42 np0005464214 systemd[1]: Started libpod-conmon-a88ac38f2f4a43eddcfbeaf2a1b5c869a2e1782ad195758dda608d51edeb14f0.scope.
Oct  1 09:10:42 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/238f53ca472bf26fb443993cae452bb87ffba259e83697b2241a75148dfa353e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 09:10:42 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/238f53ca472bf26fb443993cae452bb87ffba259e83697b2241a75148dfa353e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 09:10:42 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/238f53ca472bf26fb443993cae452bb87ffba259e83697b2241a75148dfa353e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 09:10:42 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/238f53ca472bf26fb443993cae452bb87ffba259e83697b2241a75148dfa353e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 09:10:42 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:10:42 np0005464214 podman[97817]: 2025-10-01 13:10:42.150526384 +0000 UTC m=+0.107230178 container init 8cd38c93aa8c11551e2113fba56001672306b6284421ed3a527bd5780e897ea2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_galileo, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct  1 09:10:42 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b11b47d940ab1871bd6a3ca96d1e6dba052999085d45d038fbd1c4537dd7a8d/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 09:10:42 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b11b47d940ab1871bd6a3ca96d1e6dba052999085d45d038fbd1c4537dd7a8d/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 09:10:42 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b11b47d940ab1871bd6a3ca96d1e6dba052999085d45d038fbd1c4537dd7a8d/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct  1 09:10:42 np0005464214 podman[97817]: 2025-10-01 13:10:42.062333037 +0000 UTC m=+0.019036841 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 09:10:42 np0005464214 podman[97828]: 2025-10-01 13:10:42.164536781 +0000 UTC m=+0.096575413 container init a88ac38f2f4a43eddcfbeaf2a1b5c869a2e1782ad195758dda608d51edeb14f0 (image=quay.io/ceph/ceph:v18, name=sad_yalow, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct  1 09:10:42 np0005464214 podman[97817]: 2025-10-01 13:10:42.166025837 +0000 UTC m=+0.122729621 container start 8cd38c93aa8c11551e2113fba56001672306b6284421ed3a527bd5780e897ea2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_galileo, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct  1 09:10:42 np0005464214 podman[97817]: 2025-10-01 13:10:42.168943686 +0000 UTC m=+0.125647480 container attach 8cd38c93aa8c11551e2113fba56001672306b6284421ed3a527bd5780e897ea2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_galileo, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 09:10:42 np0005464214 podman[97828]: 2025-10-01 13:10:42.172259037 +0000 UTC m=+0.104297649 container start a88ac38f2f4a43eddcfbeaf2a1b5c869a2e1782ad195758dda608d51edeb14f0 (image=quay.io/ceph/ceph:v18, name=sad_yalow, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 09:10:42 np0005464214 podman[97828]: 2025-10-01 13:10:42.174946849 +0000 UTC m=+0.106985461 container attach a88ac38f2f4a43eddcfbeaf2a1b5c869a2e1782ad195758dda608d51edeb14f0 (image=quay.io/ceph/ceph:v18, name=sad_yalow, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct  1 09:10:42 np0005464214 podman[97828]: 2025-10-01 13:10:42.090143195 +0000 UTC m=+0.022181847 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  1 09:10:42 np0005464214 ceph-mgr[75103]: log_channel(audit) log [DBG] : from='client.14246 -' entity='client.admin' cmd=[{"prefix": "fs volume create", "name": "cephfs", "placement": "compute-0 ", "target": ["mon-mgr", ""]}]: dispatch
Oct  1 09:10:42 np0005464214 ceph-mgr[75103]: [volumes INFO volumes.module] Starting _cmd_fs_volume_create(name:cephfs, placement:compute-0 , prefix:fs volume create, target:['mon-mgr', '']) < ""
Oct  1 09:10:42 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"} v 0) v1
Oct  1 09:10:42 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"}]: dispatch
Oct  1 09:10:42 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"} v 0) v1
Oct  1 09:10:42 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"}]: dispatch
Oct  1 09:10:42 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"} v 0) v1
Oct  1 09:10:42 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]: dispatch
Oct  1 09:10:42 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e29 do_prune osdmap full prune enabled
Oct  1 09:10:42 np0005464214 ceph-mon[74802]: log_channel(cluster) log [ERR] : Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Oct  1 09:10:42 np0005464214 ceph-mon[74802]: log_channel(cluster) log [WRN] : Health check failed: 1 filesystem is online with fewer MDS than max_mds (MDS_UP_LESS_THAN_MAX)
Oct  1 09:10:42 np0005464214 ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-mon-compute-0[74798]: 2025-10-01T13:10:42.679+0000 7fa515793640 -1 log_channel(cluster) log [ERR] : Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Oct  1 09:10:42 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]': finished
Oct  1 09:10:42 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).mds e2 new map
Oct  1 09:10:42 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).mds e2 print_map#012e2#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0112#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112025-10-01T13:10:42.681473+0000#012modified#0112025-10-01T13:10:42.681508+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}#012max_mds#0111#012in#011#012up#011{}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0110#012 #012 
Oct  1 09:10:42 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e30 e30: 3 total, 3 up, 3 in
Oct  1 09:10:42 np0005464214 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e30: 3 total, 3 up, 3 in
Oct  1 09:10:42 np0005464214 ceph-mon[74802]: log_channel(cluster) log [DBG] : fsmap cephfs:0
Oct  1 09:10:42 np0005464214 ceph-mgr[75103]: [cephadm INFO root] Saving service mds.cephfs spec with placement compute-0
Oct  1 09:10:42 np0005464214 ceph-mgr[75103]: log_channel(cephadm) log [INF] : Saving service mds.cephfs spec with placement compute-0
Oct  1 09:10:42 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0) v1
Oct  1 09:10:42 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:10:42 np0005464214 ceph-mgr[75103]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_create(name:cephfs, placement:compute-0 , prefix:fs volume create, target:['mon-mgr', '']) < ""
Oct  1 09:10:42 np0005464214 systemd[1]: libpod-a88ac38f2f4a43eddcfbeaf2a1b5c869a2e1782ad195758dda608d51edeb14f0.scope: Deactivated successfully.
Oct  1 09:10:42 np0005464214 podman[97828]: 2025-10-01 13:10:42.717262234 +0000 UTC m=+0.649300846 container died a88ac38f2f4a43eddcfbeaf2a1b5c869a2e1782ad195758dda608d51edeb14f0 (image=quay.io/ceph/ceph:v18, name=sad_yalow, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct  1 09:10:42 np0005464214 systemd[1]: var-lib-containers-storage-overlay-3b11b47d940ab1871bd6a3ca96d1e6dba052999085d45d038fbd1c4537dd7a8d-merged.mount: Deactivated successfully.
Oct  1 09:10:42 np0005464214 podman[97828]: 2025-10-01 13:10:42.762396489 +0000 UTC m=+0.694435141 container remove a88ac38f2f4a43eddcfbeaf2a1b5c869a2e1782ad195758dda608d51edeb14f0 (image=quay.io/ceph/ceph:v18, name=sad_yalow, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 09:10:42 np0005464214 systemd[1]: libpod-conmon-a88ac38f2f4a43eddcfbeaf2a1b5c869a2e1782ad195758dda608d51edeb14f0.scope: Deactivated successfully.
Oct  1 09:10:42 np0005464214 sweet_galileo[97844]: {
Oct  1 09:10:42 np0005464214 sweet_galileo[97844]:    "0": [
Oct  1 09:10:42 np0005464214 sweet_galileo[97844]:        {
Oct  1 09:10:42 np0005464214 sweet_galileo[97844]:            "devices": [
Oct  1 09:10:42 np0005464214 sweet_galileo[97844]:                "/dev/loop3"
Oct  1 09:10:42 np0005464214 sweet_galileo[97844]:            ],
Oct  1 09:10:42 np0005464214 sweet_galileo[97844]:            "lv_name": "ceph_lv0",
Oct  1 09:10:42 np0005464214 sweet_galileo[97844]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  1 09:10:42 np0005464214 sweet_galileo[97844]:            "lv_size": "21470642176",
Oct  1 09:10:42 np0005464214 sweet_galileo[97844]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 09:10:42 np0005464214 sweet_galileo[97844]:            "lv_uuid": "wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS",
Oct  1 09:10:42 np0005464214 sweet_galileo[97844]:            "name": "ceph_lv0",
Oct  1 09:10:42 np0005464214 sweet_galileo[97844]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  1 09:10:42 np0005464214 sweet_galileo[97844]:            "tags": {
Oct  1 09:10:42 np0005464214 sweet_galileo[97844]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  1 09:10:42 np0005464214 sweet_galileo[97844]:                "ceph.block_uuid": "wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS",
Oct  1 09:10:42 np0005464214 sweet_galileo[97844]:                "ceph.cephx_lockbox_secret": "",
Oct  1 09:10:42 np0005464214 sweet_galileo[97844]:                "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 09:10:42 np0005464214 sweet_galileo[97844]:                "ceph.cluster_name": "ceph",
Oct  1 09:10:42 np0005464214 sweet_galileo[97844]:                "ceph.crush_device_class": "",
Oct  1 09:10:42 np0005464214 sweet_galileo[97844]:                "ceph.encrypted": "0",
Oct  1 09:10:42 np0005464214 sweet_galileo[97844]:                "ceph.osd_fsid": "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982",
Oct  1 09:10:42 np0005464214 sweet_galileo[97844]:                "ceph.osd_id": "0",
Oct  1 09:10:42 np0005464214 sweet_galileo[97844]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 09:10:42 np0005464214 sweet_galileo[97844]:                "ceph.type": "block",
Oct  1 09:10:42 np0005464214 sweet_galileo[97844]:                "ceph.vdo": "0"
Oct  1 09:10:42 np0005464214 sweet_galileo[97844]:            },
Oct  1 09:10:42 np0005464214 sweet_galileo[97844]:            "type": "block",
Oct  1 09:10:42 np0005464214 sweet_galileo[97844]:            "vg_name": "ceph_vg0"
Oct  1 09:10:42 np0005464214 sweet_galileo[97844]:        }
Oct  1 09:10:42 np0005464214 sweet_galileo[97844]:    ],
Oct  1 09:10:42 np0005464214 sweet_galileo[97844]:    "1": [
Oct  1 09:10:42 np0005464214 sweet_galileo[97844]:        {
Oct  1 09:10:42 np0005464214 sweet_galileo[97844]:            "devices": [
Oct  1 09:10:42 np0005464214 sweet_galileo[97844]:                "/dev/loop4"
Oct  1 09:10:42 np0005464214 sweet_galileo[97844]:            ],
Oct  1 09:10:42 np0005464214 sweet_galileo[97844]:            "lv_name": "ceph_lv1",
Oct  1 09:10:42 np0005464214 sweet_galileo[97844]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  1 09:10:42 np0005464214 sweet_galileo[97844]:            "lv_size": "21470642176",
Oct  1 09:10:42 np0005464214 sweet_galileo[97844]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f5852bc7-e830-489a-b8a9-42dfbbe71426,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 09:10:42 np0005464214 sweet_galileo[97844]:            "lv_uuid": "BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY",
Oct  1 09:10:42 np0005464214 sweet_galileo[97844]:            "name": "ceph_lv1",
Oct  1 09:10:42 np0005464214 sweet_galileo[97844]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  1 09:10:42 np0005464214 sweet_galileo[97844]:            "tags": {
Oct  1 09:10:42 np0005464214 sweet_galileo[97844]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  1 09:10:42 np0005464214 sweet_galileo[97844]:                "ceph.block_uuid": "BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY",
Oct  1 09:10:42 np0005464214 sweet_galileo[97844]:                "ceph.cephx_lockbox_secret": "",
Oct  1 09:10:42 np0005464214 sweet_galileo[97844]:                "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 09:10:42 np0005464214 sweet_galileo[97844]:                "ceph.cluster_name": "ceph",
Oct  1 09:10:42 np0005464214 sweet_galileo[97844]:                "ceph.crush_device_class": "",
Oct  1 09:10:42 np0005464214 sweet_galileo[97844]:                "ceph.encrypted": "0",
Oct  1 09:10:42 np0005464214 sweet_galileo[97844]:                "ceph.osd_fsid": "f5852bc7-e830-489a-b8a9-42dfbbe71426",
Oct  1 09:10:42 np0005464214 sweet_galileo[97844]:                "ceph.osd_id": "1",
Oct  1 09:10:42 np0005464214 sweet_galileo[97844]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 09:10:42 np0005464214 sweet_galileo[97844]:                "ceph.type": "block",
Oct  1 09:10:42 np0005464214 sweet_galileo[97844]:                "ceph.vdo": "0"
Oct  1 09:10:42 np0005464214 sweet_galileo[97844]:            },
Oct  1 09:10:42 np0005464214 sweet_galileo[97844]:            "type": "block",
Oct  1 09:10:42 np0005464214 sweet_galileo[97844]:            "vg_name": "ceph_vg1"
Oct  1 09:10:42 np0005464214 sweet_galileo[97844]:        }
Oct  1 09:10:42 np0005464214 sweet_galileo[97844]:    ],
Oct  1 09:10:42 np0005464214 sweet_galileo[97844]:    "2": [
Oct  1 09:10:42 np0005464214 sweet_galileo[97844]:        {
Oct  1 09:10:42 np0005464214 sweet_galileo[97844]:            "devices": [
Oct  1 09:10:42 np0005464214 sweet_galileo[97844]:                "/dev/loop5"
Oct  1 09:10:42 np0005464214 sweet_galileo[97844]:            ],
Oct  1 09:10:42 np0005464214 sweet_galileo[97844]:            "lv_name": "ceph_lv2",
Oct  1 09:10:42 np0005464214 sweet_galileo[97844]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  1 09:10:42 np0005464214 sweet_galileo[97844]:            "lv_size": "21470642176",
Oct  1 09:10:42 np0005464214 sweet_galileo[97844]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c4c937e2-a8a8-47c3-af37-fdedb6fff1f9,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 09:10:42 np0005464214 sweet_galileo[97844]:            "lv_uuid": "mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK",
Oct  1 09:10:42 np0005464214 sweet_galileo[97844]:            "name": "ceph_lv2",
Oct  1 09:10:42 np0005464214 sweet_galileo[97844]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  1 09:10:42 np0005464214 sweet_galileo[97844]:            "tags": {
Oct  1 09:10:42 np0005464214 sweet_galileo[97844]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  1 09:10:42 np0005464214 sweet_galileo[97844]:                "ceph.block_uuid": "mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK",
Oct  1 09:10:42 np0005464214 sweet_galileo[97844]:                "ceph.cephx_lockbox_secret": "",
Oct  1 09:10:42 np0005464214 sweet_galileo[97844]:                "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 09:10:42 np0005464214 sweet_galileo[97844]:                "ceph.cluster_name": "ceph",
Oct  1 09:10:42 np0005464214 sweet_galileo[97844]:                "ceph.crush_device_class": "",
Oct  1 09:10:42 np0005464214 sweet_galileo[97844]:                "ceph.encrypted": "0",
Oct  1 09:10:42 np0005464214 sweet_galileo[97844]:                "ceph.osd_fsid": "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9",
Oct  1 09:10:42 np0005464214 sweet_galileo[97844]:                "ceph.osd_id": "2",
Oct  1 09:10:42 np0005464214 sweet_galileo[97844]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 09:10:42 np0005464214 sweet_galileo[97844]:                "ceph.type": "block",
Oct  1 09:10:42 np0005464214 sweet_galileo[97844]:                "ceph.vdo": "0"
Oct  1 09:10:42 np0005464214 sweet_galileo[97844]:            },
Oct  1 09:10:42 np0005464214 sweet_galileo[97844]:            "type": "block",
Oct  1 09:10:42 np0005464214 sweet_galileo[97844]:            "vg_name": "ceph_vg2"
Oct  1 09:10:42 np0005464214 sweet_galileo[97844]:        }
Oct  1 09:10:42 np0005464214 sweet_galileo[97844]:    ]
Oct  1 09:10:42 np0005464214 sweet_galileo[97844]: }
Oct  1 09:10:43 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"}]: dispatch
Oct  1 09:10:43 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"}]: dispatch
Oct  1 09:10:43 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]: dispatch
Oct  1 09:10:43 np0005464214 ceph-mon[74802]: Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Oct  1 09:10:43 np0005464214 ceph-mon[74802]: Health check failed: 1 filesystem is online with fewer MDS than max_mds (MDS_UP_LESS_THAN_MAX)
Oct  1 09:10:43 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]': finished
Oct  1 09:10:43 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:10:43 np0005464214 systemd[1]: libpod-8cd38c93aa8c11551e2113fba56001672306b6284421ed3a527bd5780e897ea2.scope: Deactivated successfully.
Oct  1 09:10:43 np0005464214 podman[97817]: 2025-10-01 13:10:43.027561739 +0000 UTC m=+0.984265553 container died 8cd38c93aa8c11551e2113fba56001672306b6284421ed3a527bd5780e897ea2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_galileo, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Oct  1 09:10:43 np0005464214 systemd[1]: var-lib-containers-storage-overlay-238f53ca472bf26fb443993cae452bb87ffba259e83697b2241a75148dfa353e-merged.mount: Deactivated successfully.
Oct  1 09:10:43 np0005464214 podman[97817]: 2025-10-01 13:10:43.100515402 +0000 UTC m=+1.057219186 container remove 8cd38c93aa8c11551e2113fba56001672306b6284421ed3a527bd5780e897ea2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_galileo, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 09:10:43 np0005464214 systemd[1]: libpod-conmon-8cd38c93aa8c11551e2113fba56001672306b6284421ed3a527bd5780e897ea2.scope: Deactivated successfully.
Oct  1 09:10:43 np0005464214 python3[97922]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_mds.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid eb4b6ead-01d1-53b3-a52a-47dcc600555f -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  1 09:10:43 np0005464214 podman[97941]: 2025-10-01 13:10:43.222363284 +0000 UTC m=+0.052529591 container create 02cacdaf7ec51fb07f9a8c214104e4747458b5153e294863fd05f6c36efd2ff8 (image=quay.io/ceph/ceph:v18, name=sad_bhaskara, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Oct  1 09:10:43 np0005464214 systemd[1]: Started libpod-conmon-02cacdaf7ec51fb07f9a8c214104e4747458b5153e294863fd05f6c36efd2ff8.scope.
Oct  1 09:10:43 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:10:43 np0005464214 podman[97941]: 2025-10-01 13:10:43.200702685 +0000 UTC m=+0.030868982 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  1 09:10:43 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/179afc2c5d4413f65be465d0d31e03945e8d06591c898347d0f8ba28ac1f35eb/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct  1 09:10:43 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/179afc2c5d4413f65be465d0d31e03945e8d06591c898347d0f8ba28ac1f35eb/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 09:10:43 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/179afc2c5d4413f65be465d0d31e03945e8d06591c898347d0f8ba28ac1f35eb/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 09:10:43 np0005464214 podman[97941]: 2025-10-01 13:10:43.318315438 +0000 UTC m=+0.148481715 container init 02cacdaf7ec51fb07f9a8c214104e4747458b5153e294863fd05f6c36efd2ff8 (image=quay.io/ceph/ceph:v18, name=sad_bhaskara, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct  1 09:10:43 np0005464214 podman[97941]: 2025-10-01 13:10:43.326923821 +0000 UTC m=+0.157090078 container start 02cacdaf7ec51fb07f9a8c214104e4747458b5153e294863fd05f6c36efd2ff8 (image=quay.io/ceph/ceph:v18, name=sad_bhaskara, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef)
Oct  1 09:10:43 np0005464214 podman[97941]: 2025-10-01 13:10:43.33017084 +0000 UTC m=+0.160337117 container attach 02cacdaf7ec51fb07f9a8c214104e4747458b5153e294863fd05f6c36efd2ff8 (image=quay.io/ceph/ceph:v18, name=sad_bhaskara, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Oct  1 09:10:43 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v79: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:10:43 np0005464214 podman[98110]: 2025-10-01 13:10:43.795938363 +0000 UTC m=+0.062263979 container create 45bcf45aecd381122338d62831de271c81d559a0a4778b22bfe776cfc9a7c36c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_haslett, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Oct  1 09:10:43 np0005464214 systemd[1]: Started libpod-conmon-45bcf45aecd381122338d62831de271c81d559a0a4778b22bfe776cfc9a7c36c.scope.
Oct  1 09:10:43 np0005464214 podman[98110]: 2025-10-01 13:10:43.765152895 +0000 UTC m=+0.031478621 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 09:10:43 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:10:43 np0005464214 ceph-mgr[75103]: log_channel(audit) log [DBG] : from='client.14248 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Oct  1 09:10:43 np0005464214 ceph-mgr[75103]: [cephadm INFO root] Saving service mds.cephfs spec with placement compute-0
Oct  1 09:10:43 np0005464214 ceph-mgr[75103]: log_channel(cephadm) log [INF] : Saving service mds.cephfs spec with placement compute-0
Oct  1 09:10:43 np0005464214 podman[98110]: 2025-10-01 13:10:43.886228373 +0000 UTC m=+0.152553999 container init 45bcf45aecd381122338d62831de271c81d559a0a4778b22bfe776cfc9a7c36c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_haslett, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 09:10:43 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0) v1
Oct  1 09:10:43 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:10:43 np0005464214 podman[98110]: 2025-10-01 13:10:43.896537108 +0000 UTC m=+0.162862714 container start 45bcf45aecd381122338d62831de271c81d559a0a4778b22bfe776cfc9a7c36c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_haslett, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 09:10:43 np0005464214 sad_bhaskara[97998]: Scheduled mds.cephfs update...
Oct  1 09:10:43 np0005464214 podman[98110]: 2025-10-01 13:10:43.90022388 +0000 UTC m=+0.166549506 container attach 45bcf45aecd381122338d62831de271c81d559a0a4778b22bfe776cfc9a7c36c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_haslett, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Oct  1 09:10:43 np0005464214 relaxed_haslett[98127]: 167 167
Oct  1 09:10:43 np0005464214 systemd[1]: libpod-45bcf45aecd381122338d62831de271c81d559a0a4778b22bfe776cfc9a7c36c.scope: Deactivated successfully.
Oct  1 09:10:43 np0005464214 podman[98110]: 2025-10-01 13:10:43.90382232 +0000 UTC m=+0.170147956 container died 45bcf45aecd381122338d62831de271c81d559a0a4778b22bfe776cfc9a7c36c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_haslett, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct  1 09:10:43 np0005464214 systemd[1]: libpod-02cacdaf7ec51fb07f9a8c214104e4747458b5153e294863fd05f6c36efd2ff8.scope: Deactivated successfully.
Oct  1 09:10:43 np0005464214 podman[97941]: 2025-10-01 13:10:43.913360991 +0000 UTC m=+0.743527248 container died 02cacdaf7ec51fb07f9a8c214104e4747458b5153e294863fd05f6c36efd2ff8 (image=quay.io/ceph/ceph:v18, name=sad_bhaskara, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 09:10:43 np0005464214 systemd[1]: var-lib-containers-storage-overlay-084ab18a98a0a931192117a82e299d17e845a3e5ba818280d3a8e536d8db2ff3-merged.mount: Deactivated successfully.
Oct  1 09:10:43 np0005464214 systemd[1]: var-lib-containers-storage-overlay-179afc2c5d4413f65be465d0d31e03945e8d06591c898347d0f8ba28ac1f35eb-merged.mount: Deactivated successfully.
Oct  1 09:10:43 np0005464214 podman[98110]: 2025-10-01 13:10:43.964853789 +0000 UTC m=+0.231179435 container remove 45bcf45aecd381122338d62831de271c81d559a0a4778b22bfe776cfc9a7c36c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_haslett, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 09:10:43 np0005464214 podman[97941]: 2025-10-01 13:10:43.976115622 +0000 UTC m=+0.806281879 container remove 02cacdaf7ec51fb07f9a8c214104e4747458b5153e294863fd05f6c36efd2ff8 (image=quay.io/ceph/ceph:v18, name=sad_bhaskara, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS)
Oct  1 09:10:43 np0005464214 systemd[1]: libpod-conmon-02cacdaf7ec51fb07f9a8c214104e4747458b5153e294863fd05f6c36efd2ff8.scope: Deactivated successfully.
Oct  1 09:10:43 np0005464214 systemd[1]: libpod-conmon-45bcf45aecd381122338d62831de271c81d559a0a4778b22bfe776cfc9a7c36c.scope: Deactivated successfully.
Oct  1 09:10:44 np0005464214 ceph-mon[74802]: Saving service mds.cephfs spec with placement compute-0
Oct  1 09:10:44 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:10:44 np0005464214 podman[98166]: 2025-10-01 13:10:44.201903603 +0000 UTC m=+0.061229477 container create f468f3d76725193778b495115855a71a89857ec657fbebc334c7ef3cfb7a629f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_swirles, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3)
Oct  1 09:10:44 np0005464214 systemd[1]: Started libpod-conmon-f468f3d76725193778b495115855a71a89857ec657fbebc334c7ef3cfb7a629f.scope.
Oct  1 09:10:44 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:10:44 np0005464214 podman[98166]: 2025-10-01 13:10:44.183514512 +0000 UTC m=+0.042840356 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 09:10:44 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6dd2f8c21fdacf40850fb5f2f26e2107a3a820089d6c1579afefdd931058cc8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 09:10:44 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6dd2f8c21fdacf40850fb5f2f26e2107a3a820089d6c1579afefdd931058cc8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 09:10:44 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6dd2f8c21fdacf40850fb5f2f26e2107a3a820089d6c1579afefdd931058cc8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 09:10:44 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6dd2f8c21fdacf40850fb5f2f26e2107a3a820089d6c1579afefdd931058cc8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 09:10:44 np0005464214 podman[98166]: 2025-10-01 13:10:44.300327461 +0000 UTC m=+0.159653365 container init f468f3d76725193778b495115855a71a89857ec657fbebc334c7ef3cfb7a629f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_swirles, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 09:10:44 np0005464214 podman[98166]: 2025-10-01 13:10:44.313465852 +0000 UTC m=+0.172791726 container start f468f3d76725193778b495115855a71a89857ec657fbebc334c7ef3cfb7a629f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_swirles, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct  1 09:10:44 np0005464214 podman[98166]: 2025-10-01 13:10:44.318137585 +0000 UTC m=+0.177463479 container attach f468f3d76725193778b495115855a71a89857ec657fbebc334c7ef3cfb7a629f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_swirles, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct  1 09:10:44 np0005464214 python3[98264]: ansible-ansible.legacy.stat Invoked with path=/etc/ceph/ceph.client.openstack.keyring follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct  1 09:10:45 np0005464214 ceph-mon[74802]: Saving service mds.cephfs spec with placement compute-0
Oct  1 09:10:45 np0005464214 hungry_swirles[98182]: {
Oct  1 09:10:45 np0005464214 hungry_swirles[98182]:    "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982": {
Oct  1 09:10:45 np0005464214 hungry_swirles[98182]:        "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 09:10:45 np0005464214 hungry_swirles[98182]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  1 09:10:45 np0005464214 hungry_swirles[98182]:        "osd_id": 0,
Oct  1 09:10:45 np0005464214 hungry_swirles[98182]:        "osd_uuid": "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982",
Oct  1 09:10:45 np0005464214 hungry_swirles[98182]:        "type": "bluestore"
Oct  1 09:10:45 np0005464214 hungry_swirles[98182]:    },
Oct  1 09:10:45 np0005464214 hungry_swirles[98182]:    "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9": {
Oct  1 09:10:45 np0005464214 hungry_swirles[98182]:        "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 09:10:45 np0005464214 hungry_swirles[98182]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  1 09:10:45 np0005464214 hungry_swirles[98182]:        "osd_id": 2,
Oct  1 09:10:45 np0005464214 hungry_swirles[98182]:        "osd_uuid": "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9",
Oct  1 09:10:45 np0005464214 hungry_swirles[98182]:        "type": "bluestore"
Oct  1 09:10:45 np0005464214 hungry_swirles[98182]:    },
Oct  1 09:10:45 np0005464214 hungry_swirles[98182]:    "f5852bc7-e830-489a-b8a9-42dfbbe71426": {
Oct  1 09:10:45 np0005464214 hungry_swirles[98182]:        "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 09:10:45 np0005464214 hungry_swirles[98182]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  1 09:10:45 np0005464214 hungry_swirles[98182]:        "osd_id": 1,
Oct  1 09:10:45 np0005464214 hungry_swirles[98182]:        "osd_uuid": "f5852bc7-e830-489a-b8a9-42dfbbe71426",
Oct  1 09:10:45 np0005464214 hungry_swirles[98182]:        "type": "bluestore"
Oct  1 09:10:45 np0005464214 hungry_swirles[98182]:    }
Oct  1 09:10:45 np0005464214 hungry_swirles[98182]: }
Oct  1 09:10:45 np0005464214 python3[98353]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759324244.4685662-33946-161045783604869/source dest=/etc/ceph/ceph.client.openstack.keyring mode=0644 force=True owner=167 group=167 follow=False _original_basename=ceph_key.j2 checksum=cb7a726d0a2db4bead6fc30d6d9fab3edee0b4fe backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 09:10:45 np0005464214 systemd[1]: libpod-f468f3d76725193778b495115855a71a89857ec657fbebc334c7ef3cfb7a629f.scope: Deactivated successfully.
Oct  1 09:10:45 np0005464214 systemd[1]: libpod-f468f3d76725193778b495115855a71a89857ec657fbebc334c7ef3cfb7a629f.scope: Consumed 1.026s CPU time.
Oct  1 09:10:45 np0005464214 podman[98366]: 2025-10-01 13:10:45.375973338 +0000 UTC m=+0.029320534 container died f468f3d76725193778b495115855a71a89857ec657fbebc334c7ef3cfb7a629f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_swirles, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Oct  1 09:10:45 np0005464214 systemd[1]: var-lib-containers-storage-overlay-a6dd2f8c21fdacf40850fb5f2f26e2107a3a820089d6c1579afefdd931058cc8-merged.mount: Deactivated successfully.
Oct  1 09:10:45 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e30 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:10:45 np0005464214 podman[98366]: 2025-10-01 13:10:45.438150203 +0000 UTC m=+0.091497299 container remove f468f3d76725193778b495115855a71a89857ec657fbebc334c7ef3cfb7a629f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_swirles, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Oct  1 09:10:45 np0005464214 systemd[1]: libpod-conmon-f468f3d76725193778b495115855a71a89857ec657fbebc334c7ef3cfb7a629f.scope: Deactivated successfully.
Oct  1 09:10:45 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  1 09:10:45 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:10:45 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  1 09:10:45 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:10:45 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v80: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:10:45 np0005464214 python3[98501]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid eb4b6ead-01d1-53b3-a52a-47dcc600555f -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   auth import -i /etc/ceph/ceph.client.openstack.keyring _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  1 09:10:45 np0005464214 podman[98537]: 2025-10-01 13:10:45.945291956 +0000 UTC m=+0.059881336 container create 7445fd6b73f03cc1be5df55d54f55bca6864955b694575ab7b78a51a7ed1c234 (image=quay.io/ceph/ceph:v18, name=adoring_panini, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 09:10:45 np0005464214 systemd[1]: Started libpod-conmon-7445fd6b73f03cc1be5df55d54f55bca6864955b694575ab7b78a51a7ed1c234.scope.
Oct  1 09:10:46 np0005464214 podman[98537]: 2025-10-01 13:10:45.922198492 +0000 UTC m=+0.036787842 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  1 09:10:46 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:10:46 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d632ac7034bc66aa6d0b6606b2ecd0f10f77fe1701ee320328c75959fc4d6a0e/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 09:10:46 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d632ac7034bc66aa6d0b6606b2ecd0f10f77fe1701ee320328c75959fc4d6a0e/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 09:10:46 np0005464214 podman[98537]: 2025-10-01 13:10:46.05013196 +0000 UTC m=+0.164721390 container init 7445fd6b73f03cc1be5df55d54f55bca6864955b694575ab7b78a51a7ed1c234 (image=quay.io/ceph/ceph:v18, name=adoring_panini, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Oct  1 09:10:46 np0005464214 podman[98537]: 2025-10-01 13:10:46.063308033 +0000 UTC m=+0.177897413 container start 7445fd6b73f03cc1be5df55d54f55bca6864955b694575ab7b78a51a7ed1c234 (image=quay.io/ceph/ceph:v18, name=adoring_panini, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 09:10:46 np0005464214 podman[98537]: 2025-10-01 13:10:46.068178991 +0000 UTC m=+0.182768451 container attach 7445fd6b73f03cc1be5df55d54f55bca6864955b694575ab7b78a51a7ed1c234 (image=quay.io/ceph/ceph:v18, name=adoring_panini, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 09:10:46 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:10:46 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:10:46 np0005464214 podman[98689]: 2025-10-01 13:10:46.653840437 +0000 UTC m=+0.079224075 container exec dfadbb96d7d51fa7c2ed721145616ced603621ae12bae5125feaf16532ef7320 (image=quay.io/ceph/ceph:v18, name=ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-mon-compute-0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct  1 09:10:46 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth import"} v 0) v1
Oct  1 09:10:46 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3408659514' entity='client.admin' cmd=[{"prefix": "auth import"}]: dispatch
Oct  1 09:10:46 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3408659514' entity='client.admin' cmd='[{"prefix": "auth import"}]': finished
Oct  1 09:10:46 np0005464214 systemd[1]: libpod-7445fd6b73f03cc1be5df55d54f55bca6864955b694575ab7b78a51a7ed1c234.scope: Deactivated successfully.
Oct  1 09:10:46 np0005464214 podman[98537]: 2025-10-01 13:10:46.678514239 +0000 UTC m=+0.793103619 container died 7445fd6b73f03cc1be5df55d54f55bca6864955b694575ab7b78a51a7ed1c234 (image=quay.io/ceph/ceph:v18, name=adoring_panini, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True)
Oct  1 09:10:46 np0005464214 systemd[1]: var-lib-containers-storage-overlay-d632ac7034bc66aa6d0b6606b2ecd0f10f77fe1701ee320328c75959fc4d6a0e-merged.mount: Deactivated successfully.
Oct  1 09:10:46 np0005464214 podman[98537]: 2025-10-01 13:10:46.737101214 +0000 UTC m=+0.851690564 container remove 7445fd6b73f03cc1be5df55d54f55bca6864955b694575ab7b78a51a7ed1c234 (image=quay.io/ceph/ceph:v18, name=adoring_panini, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct  1 09:10:46 np0005464214 systemd[1]: libpod-conmon-7445fd6b73f03cc1be5df55d54f55bca6864955b694575ab7b78a51a7ed1c234.scope: Deactivated successfully.
Oct  1 09:10:46 np0005464214 podman[98689]: 2025-10-01 13:10:46.790077628 +0000 UTC m=+0.215461266 container exec_died dfadbb96d7d51fa7c2ed721145616ced603621ae12bae5125feaf16532ef7320 (image=quay.io/ceph/ceph:v18, name=ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-mon-compute-0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct  1 09:10:47 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  1 09:10:47 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:10:47 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  1 09:10:47 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:10:47 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  1 09:10:47 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  1 09:10:47 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  1 09:10:47 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  1 09:10:47 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  1 09:10:47 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:10:47 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev bc74f53b-49f5-4c31-81bc-9acfbd9306bd does not exist
Oct  1 09:10:47 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev eeff341c-919b-4639-ae9b-8b83bf6db126 does not exist
Oct  1 09:10:47 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev 8a61f3cc-7877-4a8c-b8f7-25702c0380a2 does not exist
Oct  1 09:10:47 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  1 09:10:47 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  1 09:10:47 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  1 09:10:47 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  1 09:10:47 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  1 09:10:47 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  1 09:10:47 np0005464214 python3[98849]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid eb4b6ead-01d1-53b3-a52a-47dcc600555f -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .monmap.num_mons _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  1 09:10:47 np0005464214 ceph-mon[74802]: from='client.? 192.168.122.100:0/3408659514' entity='client.admin' cmd=[{"prefix": "auth import"}]: dispatch
Oct  1 09:10:47 np0005464214 ceph-mon[74802]: from='client.? 192.168.122.100:0/3408659514' entity='client.admin' cmd='[{"prefix": "auth import"}]': finished
Oct  1 09:10:47 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:10:47 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:10:47 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  1 09:10:47 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:10:47 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  1 09:10:47 np0005464214 podman[98876]: 2025-10-01 13:10:47.584923768 +0000 UTC m=+0.059430772 container create e2929352878e8c4afceab79971b04be9f92e7b55b7743b9fc948b4dc5f5d5437 (image=quay.io/ceph/ceph:v18, name=amazing_sanderson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True)
Oct  1 09:10:47 np0005464214 systemd[1]: Started libpod-conmon-e2929352878e8c4afceab79971b04be9f92e7b55b7743b9fc948b4dc5f5d5437.scope.
Oct  1 09:10:47 np0005464214 podman[98876]: 2025-10-01 13:10:47.565908149 +0000 UTC m=+0.040415173 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  1 09:10:47 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:10:47 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af0601b66e012301850648ec50d68764f5684f59cc926b12338750d7d00c6020/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 09:10:47 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af0601b66e012301850648ec50d68764f5684f59cc926b12338750d7d00c6020/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 09:10:47 np0005464214 podman[98876]: 2025-10-01 13:10:47.699146828 +0000 UTC m=+0.173653912 container init e2929352878e8c4afceab79971b04be9f92e7b55b7743b9fc948b4dc5f5d5437 (image=quay.io/ceph/ceph:v18, name=amazing_sanderson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct  1 09:10:47 np0005464214 podman[98876]: 2025-10-01 13:10:47.709536776 +0000 UTC m=+0.184043810 container start e2929352878e8c4afceab79971b04be9f92e7b55b7743b9fc948b4dc5f5d5437 (image=quay.io/ceph/ceph:v18, name=amazing_sanderson, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Oct  1 09:10:47 np0005464214 podman[98876]: 2025-10-01 13:10:47.713470485 +0000 UTC m=+0.187977519 container attach e2929352878e8c4afceab79971b04be9f92e7b55b7743b9fc948b4dc5f5d5437 (image=quay.io/ceph/ceph:v18, name=amazing_sanderson, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Oct  1 09:10:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] Optimize plan auto_2025-10-01_13:10:47
Oct  1 09:10:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  1 09:10:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] do_upmap
Oct  1 09:10:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] pools ['images', 'cephfs.cephfs.data', 'backups', '.mgr', 'vms', 'volumes', 'cephfs.cephfs.meta']
Oct  1 09:10:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] prepared 0/10 changes
Oct  1 09:10:47 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v81: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:10:47 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] _maybe_adjust
Oct  1 09:10:47 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:10:47 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  1 09:10:47 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:10:47 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Oct  1 09:10:47 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:10:47 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Oct  1 09:10:47 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:10:47 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Oct  1 09:10:47 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:10:47 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Oct  1 09:10:47 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:10:47 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 0.0 of space, bias 4.0, pg target 0.0 quantized to 16 (current 1)
Oct  1 09:10:47 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:10:47 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Oct  1 09:10:47 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  1 09:10:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 09:10:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 09:10:47 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"} v 0) v1
Oct  1 09:10:47 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]: dispatch
Oct  1 09:10:47 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  1 09:10:47 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  1 09:10:47 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  1 09:10:47 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  1 09:10:47 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  1 09:10:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 09:10:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 09:10:47 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  1 09:10:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 09:10:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 09:10:47 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  1 09:10:47 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  1 09:10:47 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  1 09:10:48 np0005464214 podman[99032]: 2025-10-01 13:10:48.187467348 +0000 UTC m=+0.051778119 container create 48d02c3ed2e58485735f6c41225bc088374cce3ca9a1e242fa05cb83d5e1a752 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_kapitsa, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 09:10:48 np0005464214 systemd[1]: Started libpod-conmon-48d02c3ed2e58485735f6c41225bc088374cce3ca9a1e242fa05cb83d5e1a752.scope.
Oct  1 09:10:48 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:10:48 np0005464214 podman[99032]: 2025-10-01 13:10:48.170300735 +0000 UTC m=+0.034611526 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 09:10:48 np0005464214 podman[99032]: 2025-10-01 13:10:48.269473568 +0000 UTC m=+0.133784429 container init 48d02c3ed2e58485735f6c41225bc088374cce3ca9a1e242fa05cb83d5e1a752 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_kapitsa, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Oct  1 09:10:48 np0005464214 podman[99032]: 2025-10-01 13:10:48.275041537 +0000 UTC m=+0.139352308 container start 48d02c3ed2e58485735f6c41225bc088374cce3ca9a1e242fa05cb83d5e1a752 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_kapitsa, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Oct  1 09:10:48 np0005464214 quizzical_kapitsa[99048]: 167 167
Oct  1 09:10:48 np0005464214 systemd[1]: libpod-48d02c3ed2e58485735f6c41225bc088374cce3ca9a1e242fa05cb83d5e1a752.scope: Deactivated successfully.
Oct  1 09:10:48 np0005464214 podman[99032]: 2025-10-01 13:10:48.280537475 +0000 UTC m=+0.144848276 container attach 48d02c3ed2e58485735f6c41225bc088374cce3ca9a1e242fa05cb83d5e1a752 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_kapitsa, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 09:10:48 np0005464214 podman[99032]: 2025-10-01 13:10:48.280783362 +0000 UTC m=+0.145094153 container died 48d02c3ed2e58485735f6c41225bc088374cce3ca9a1e242fa05cb83d5e1a752 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_kapitsa, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct  1 09:10:48 np0005464214 systemd[1]: var-lib-containers-storage-overlay-3c0f76f56f0670c64048d8d2975703eb685f42ed1ec26b94ab78cda00fec1563-merged.mount: Deactivated successfully.
Oct  1 09:10:48 np0005464214 podman[99032]: 2025-10-01 13:10:48.31712329 +0000 UTC m=+0.181434071 container remove 48d02c3ed2e58485735f6c41225bc088374cce3ca9a1e242fa05cb83d5e1a752 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_kapitsa, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct  1 09:10:48 np0005464214 systemd[1]: libpod-conmon-48d02c3ed2e58485735f6c41225bc088374cce3ca9a1e242fa05cb83d5e1a752.scope: Deactivated successfully.
Oct  1 09:10:48 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0) v1
Oct  1 09:10:48 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/821532168' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Oct  1 09:10:48 np0005464214 amazing_sanderson[98925]: 
Oct  1 09:10:48 np0005464214 amazing_sanderson[98925]: {"fsid":"eb4b6ead-01d1-53b3-a52a-47dcc600555f","health":{"status":"HEALTH_ERR","checks":{"MDS_ALL_DOWN":{"severity":"HEALTH_ERR","summary":{"message":"1 filesystem is offline","count":1},"muted":false},"MDS_UP_LESS_THAN_MAX":{"severity":"HEALTH_WARN","summary":{"message":"1 filesystem is online with fewer MDS than max_mds","count":1},"muted":false}},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":167,"monmap":{"epoch":1,"min_mon_release_name":"reef","num_mons":1},"osdmap":{"epoch":30,"num_osds":3,"num_up_osds":3,"osd_up_since":1759324211,"num_in_osds":3,"osd_in_since":1759324184,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"active+clean","count":7}],"num_pgs":7,"num_pools":7,"num_objects":2,"data_bytes":459280,"bytes_used":83767296,"bytes_avail":64328159232,"bytes_total":64411926528},"fsmap":{"epoch":2,"id":1,"up":0,"in":0,"max":1,"by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":2,"modified":"2025-10-01T13:09:49.717098+0000","services":{}},"progress_events":{}}
Oct  1 09:10:48 np0005464214 systemd[1]: libpod-e2929352878e8c4afceab79971b04be9f92e7b55b7743b9fc948b4dc5f5d5437.scope: Deactivated successfully.
Oct  1 09:10:48 np0005464214 conmon[98925]: conmon e2929352878e8c4afcea <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e2929352878e8c4afceab79971b04be9f92e7b55b7743b9fc948b4dc5f5d5437.scope/container/memory.events
Oct  1 09:10:48 np0005464214 podman[98876]: 2025-10-01 13:10:48.354842548 +0000 UTC m=+0.829349552 container died e2929352878e8c4afceab79971b04be9f92e7b55b7743b9fc948b4dc5f5d5437 (image=quay.io/ceph/ceph:v18, name=amazing_sanderson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 09:10:48 np0005464214 systemd[1]: var-lib-containers-storage-overlay-af0601b66e012301850648ec50d68764f5684f59cc926b12338750d7d00c6020-merged.mount: Deactivated successfully.
Oct  1 09:10:48 np0005464214 podman[98876]: 2025-10-01 13:10:48.406328218 +0000 UTC m=+0.880835252 container remove e2929352878e8c4afceab79971b04be9f92e7b55b7743b9fc948b4dc5f5d5437 (image=quay.io/ceph/ceph:v18, name=amazing_sanderson, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 09:10:48 np0005464214 systemd[1]: libpod-conmon-e2929352878e8c4afceab79971b04be9f92e7b55b7743b9fc948b4dc5f5d5437.scope: Deactivated successfully.
Oct  1 09:10:48 np0005464214 podman[99087]: 2025-10-01 13:10:48.484820459 +0000 UTC m=+0.052191832 container create 2561b1ce4aa1818f1d74e3ed27a13e23b38f6db4143d527024287cbf5f99512e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_fermat, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct  1 09:10:48 np0005464214 systemd[1]: Started libpod-conmon-2561b1ce4aa1818f1d74e3ed27a13e23b38f6db4143d527024287cbf5f99512e.scope.
Oct  1 09:10:48 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e30 do_prune osdmap full prune enabled
Oct  1 09:10:48 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]: dispatch
Oct  1 09:10:48 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]': finished
Oct  1 09:10:48 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e31 e31: 3 total, 3 up, 3 in
Oct  1 09:10:48 np0005464214 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e31: 3 total, 3 up, 3 in
Oct  1 09:10:48 np0005464214 ceph-mgr[75103]: [progress INFO root] update: starting ev c7a299eb-1fe2-40d1-b8f9-439c2ff29ac3 (PG autoscaler increasing pool 2 PGs from 1 to 32)
Oct  1 09:10:48 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"} v 0) v1
Oct  1 09:10:48 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]: dispatch
Oct  1 09:10:48 np0005464214 podman[99087]: 2025-10-01 13:10:48.460998143 +0000 UTC m=+0.028369586 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 09:10:48 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:10:48 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/19754d64e29e2ad0ef7a21af6c2688c4f827a785630169c12da323043e8d89e0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 09:10:48 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/19754d64e29e2ad0ef7a21af6c2688c4f827a785630169c12da323043e8d89e0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 09:10:48 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/19754d64e29e2ad0ef7a21af6c2688c4f827a785630169c12da323043e8d89e0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 09:10:48 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/19754d64e29e2ad0ef7a21af6c2688c4f827a785630169c12da323043e8d89e0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 09:10:48 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/19754d64e29e2ad0ef7a21af6c2688c4f827a785630169c12da323043e8d89e0/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  1 09:10:48 np0005464214 podman[99087]: 2025-10-01 13:10:48.584488456 +0000 UTC m=+0.151859809 container init 2561b1ce4aa1818f1d74e3ed27a13e23b38f6db4143d527024287cbf5f99512e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_fermat, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True)
Oct  1 09:10:48 np0005464214 podman[99087]: 2025-10-01 13:10:48.600780702 +0000 UTC m=+0.168152055 container start 2561b1ce4aa1818f1d74e3ed27a13e23b38f6db4143d527024287cbf5f99512e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_fermat, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True)
Oct  1 09:10:48 np0005464214 podman[99087]: 2025-10-01 13:10:48.604194857 +0000 UTC m=+0.171566210 container attach 2561b1ce4aa1818f1d74e3ed27a13e23b38f6db4143d527024287cbf5f99512e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_fermat, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True)
Oct  1 09:10:48 np0005464214 python3[99134]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid eb4b6ead-01d1-53b3-a52a-47dcc600555f -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   mon dump --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  1 09:10:48 np0005464214 podman[99135]: 2025-10-01 13:10:48.869268933 +0000 UTC m=+0.050955842 container create 18395097f0f1a80492eafc9c578608beebc7822a777b82c454f7199933efc0f8 (image=quay.io/ceph/ceph:v18, name=vibrant_lamport, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  1 09:10:48 np0005464214 systemd[1]: Started libpod-conmon-18395097f0f1a80492eafc9c578608beebc7822a777b82c454f7199933efc0f8.scope.
Oct  1 09:10:48 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:10:48 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/098daa066444bfd697b7b7c3e457bf85516ded40b683eddeb282d90f4bbcd9bd/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 09:10:48 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/098daa066444bfd697b7b7c3e457bf85516ded40b683eddeb282d90f4bbcd9bd/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 09:10:48 np0005464214 podman[99135]: 2025-10-01 13:10:48.852583475 +0000 UTC m=+0.034270404 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  1 09:10:48 np0005464214 podman[99135]: 2025-10-01 13:10:48.963856256 +0000 UTC m=+0.145543185 container init 18395097f0f1a80492eafc9c578608beebc7822a777b82c454f7199933efc0f8 (image=quay.io/ceph/ceph:v18, name=vibrant_lamport, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 09:10:48 np0005464214 podman[99135]: 2025-10-01 13:10:48.976172631 +0000 UTC m=+0.157859570 container start 18395097f0f1a80492eafc9c578608beebc7822a777b82c454f7199933efc0f8 (image=quay.io/ceph/ceph:v18, name=vibrant_lamport, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 09:10:48 np0005464214 podman[99135]: 2025-10-01 13:10:48.98004795 +0000 UTC m=+0.161734859 container attach 18395097f0f1a80492eafc9c578608beebc7822a777b82c454f7199933efc0f8 (image=quay.io/ceph/ceph:v18, name=vibrant_lamport, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 09:10:49 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e31 do_prune osdmap full prune enabled
Oct  1 09:10:49 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]': finished
Oct  1 09:10:49 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e32 e32: 3 total, 3 up, 3 in
Oct  1 09:10:49 np0005464214 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e32: 3 total, 3 up, 3 in
Oct  1 09:10:49 np0005464214 ceph-mgr[75103]: [progress INFO root] update: starting ev c537a440-1190-425e-99dc-5e76a685055c (PG autoscaler increasing pool 3 PGs from 1 to 32)
Oct  1 09:10:49 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"} v 0) v1
Oct  1 09:10:49 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]: dispatch
Oct  1 09:10:49 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]': finished
Oct  1 09:10:49 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]: dispatch
Oct  1 09:10:49 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  1 09:10:49 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1856850079' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  1 09:10:49 np0005464214 vibrant_lamport[99152]: 
Oct  1 09:10:49 np0005464214 vibrant_lamport[99152]: {"epoch":1,"fsid":"eb4b6ead-01d1-53b3-a52a-47dcc600555f","modified":"2025-10-01T13:07:55.363588Z","created":"2025-10-01T13:07:55.363588Z","min_mon_release":18,"min_mon_release_name":"reef","election_strategy":1,"disallowed_leaders: ":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks: ":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef"],"optional":[]},"mons":[{"rank":0,"name":"compute-0","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.122.100:3300","nonce":0},{"type":"v1","addr":"192.168.122.100:6789","nonce":0}]},"addr":"192.168.122.100:6789/0","public_addr":"192.168.122.100:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0]}
Oct  1 09:10:49 np0005464214 vibrant_lamport[99152]: dumped monmap epoch 1
Oct  1 09:10:49 np0005464214 systemd[1]: libpod-18395097f0f1a80492eafc9c578608beebc7822a777b82c454f7199933efc0f8.scope: Deactivated successfully.
Oct  1 09:10:49 np0005464214 mystifying_fermat[99104]: --> passed data devices: 0 physical, 3 LVM
Oct  1 09:10:49 np0005464214 mystifying_fermat[99104]: --> relative data size: 1.0
Oct  1 09:10:49 np0005464214 mystifying_fermat[99104]: --> All data devices are unavailable
Oct  1 09:10:49 np0005464214 podman[99202]: 2025-10-01 13:10:49.670666483 +0000 UTC m=+0.029579291 container died 18395097f0f1a80492eafc9c578608beebc7822a777b82c454f7199933efc0f8 (image=quay.io/ceph/ceph:v18, name=vibrant_lamport, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 09:10:49 np0005464214 systemd[1]: libpod-2561b1ce4aa1818f1d74e3ed27a13e23b38f6db4143d527024287cbf5f99512e.scope: Deactivated successfully.
Oct  1 09:10:49 np0005464214 systemd[1]: libpod-2561b1ce4aa1818f1d74e3ed27a13e23b38f6db4143d527024287cbf5f99512e.scope: Consumed 1.024s CPU time.
Oct  1 09:10:49 np0005464214 podman[99087]: 2025-10-01 13:10:49.680522574 +0000 UTC m=+1.247893937 container died 2561b1ce4aa1818f1d74e3ed27a13e23b38f6db4143d527024287cbf5f99512e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_fermat, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct  1 09:10:49 np0005464214 systemd[1]: var-lib-containers-storage-overlay-098daa066444bfd697b7b7c3e457bf85516ded40b683eddeb282d90f4bbcd9bd-merged.mount: Deactivated successfully.
Oct  1 09:10:49 np0005464214 systemd[1]: var-lib-containers-storage-overlay-19754d64e29e2ad0ef7a21af6c2688c4f827a785630169c12da323043e8d89e0-merged.mount: Deactivated successfully.
Oct  1 09:10:49 np0005464214 podman[99202]: 2025-10-01 13:10:49.731390353 +0000 UTC m=+0.090303141 container remove 18395097f0f1a80492eafc9c578608beebc7822a777b82c454f7199933efc0f8 (image=quay.io/ceph/ceph:v18, name=vibrant_lamport, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True)
Oct  1 09:10:49 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v84: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:10:49 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"} v 0) v1
Oct  1 09:10:49 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct  1 09:10:49 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"} v 0) v1
Oct  1 09:10:49 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct  1 09:10:49 np0005464214 systemd[1]: libpod-conmon-18395097f0f1a80492eafc9c578608beebc7822a777b82c454f7199933efc0f8.scope: Deactivated successfully.
Oct  1 09:10:49 np0005464214 podman[99087]: 2025-10-01 13:10:49.740542633 +0000 UTC m=+1.307913996 container remove 2561b1ce4aa1818f1d74e3ed27a13e23b38f6db4143d527024287cbf5f99512e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_fermat, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True)
Oct  1 09:10:49 np0005464214 systemd[1]: libpod-conmon-2561b1ce4aa1818f1d74e3ed27a13e23b38f6db4143d527024287cbf5f99512e.scope: Deactivated successfully.
Oct  1 09:10:50 np0005464214 podman[99395]: 2025-10-01 13:10:50.321907187 +0000 UTC m=+0.042966689 container create 11d7495a197115e0d86d573a044c6191d0521d7f901ac5b6c87d1a7cfe542820 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_archimedes, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Oct  1 09:10:50 np0005464214 python3[99378]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid eb4b6ead-01d1-53b3-a52a-47dcc600555f -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   auth get client.openstack _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  1 09:10:50 np0005464214 systemd[1]: Started libpod-conmon-11d7495a197115e0d86d573a044c6191d0521d7f901ac5b6c87d1a7cfe542820.scope.
Oct  1 09:10:50 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:10:50 np0005464214 podman[99395]: 2025-10-01 13:10:50.389157386 +0000 UTC m=+0.110216958 container init 11d7495a197115e0d86d573a044c6191d0521d7f901ac5b6c87d1a7cfe542820 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_archimedes, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True)
Oct  1 09:10:50 np0005464214 podman[99395]: 2025-10-01 13:10:50.302162937 +0000 UTC m=+0.023222499 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 09:10:50 np0005464214 podman[99395]: 2025-10-01 13:10:50.398681137 +0000 UTC m=+0.119740619 container start 11d7495a197115e0d86d573a044c6191d0521d7f901ac5b6c87d1a7cfe542820 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_archimedes, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 09:10:50 np0005464214 podman[99395]: 2025-10-01 13:10:50.403760192 +0000 UTC m=+0.124819694 container attach 11d7495a197115e0d86d573a044c6191d0521d7f901ac5b6c87d1a7cfe542820 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_archimedes, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct  1 09:10:50 np0005464214 recursing_archimedes[99413]: 167 167
Oct  1 09:10:50 np0005464214 systemd[1]: libpod-11d7495a197115e0d86d573a044c6191d0521d7f901ac5b6c87d1a7cfe542820.scope: Deactivated successfully.
Oct  1 09:10:50 np0005464214 podman[99395]: 2025-10-01 13:10:50.405444764 +0000 UTC m=+0.126504266 container died 11d7495a197115e0d86d573a044c6191d0521d7f901ac5b6c87d1a7cfe542820 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_archimedes, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct  1 09:10:50 np0005464214 systemd[1]: var-lib-containers-storage-overlay-203512a7299b301e9e2b842d50b8ab95f0ee89b0dbe0766ad27e00d025c27a8b-merged.mount: Deactivated successfully.
Oct  1 09:10:50 np0005464214 podman[99412]: 2025-10-01 13:10:50.43389714 +0000 UTC m=+0.068509948 container create 6ab382f24ad0e3a4de7fbbe66e549b63386e39db22dd0711b3f4f0687c875b2a (image=quay.io/ceph/ceph:v18, name=admiring_haibt, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 09:10:50 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e32 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:10:50 np0005464214 podman[99395]: 2025-10-01 13:10:50.449096303 +0000 UTC m=+0.170155825 container remove 11d7495a197115e0d86d573a044c6191d0521d7f901ac5b6c87d1a7cfe542820 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_archimedes, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 09:10:50 np0005464214 systemd[1]: libpod-conmon-11d7495a197115e0d86d573a044c6191d0521d7f901ac5b6c87d1a7cfe542820.scope: Deactivated successfully.
Oct  1 09:10:50 np0005464214 systemd[1]: Started libpod-conmon-6ab382f24ad0e3a4de7fbbe66e549b63386e39db22dd0711b3f4f0687c875b2a.scope.
Oct  1 09:10:50 np0005464214 podman[99412]: 2025-10-01 13:10:50.395230882 +0000 UTC m=+0.029843700 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  1 09:10:50 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:10:50 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a07e7015175848c67fbe936c57b50a1ad40b556f855f894c34ec18829b6f8cf9/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 09:10:50 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a07e7015175848c67fbe936c57b50a1ad40b556f855f894c34ec18829b6f8cf9/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 09:10:50 np0005464214 podman[99412]: 2025-10-01 13:10:50.51102414 +0000 UTC m=+0.145636968 container init 6ab382f24ad0e3a4de7fbbe66e549b63386e39db22dd0711b3f4f0687c875b2a (image=quay.io/ceph/ceph:v18, name=admiring_haibt, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct  1 09:10:50 np0005464214 podman[99412]: 2025-10-01 13:10:50.517381834 +0000 UTC m=+0.151994642 container start 6ab382f24ad0e3a4de7fbbe66e549b63386e39db22dd0711b3f4f0687c875b2a (image=quay.io/ceph/ceph:v18, name=admiring_haibt, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct  1 09:10:50 np0005464214 podman[99412]: 2025-10-01 13:10:50.52052708 +0000 UTC m=+0.155139888 container attach 6ab382f24ad0e3a4de7fbbe66e549b63386e39db22dd0711b3f4f0687c875b2a (image=quay.io/ceph/ceph:v18, name=admiring_haibt, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 09:10:50 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e32 do_prune osdmap full prune enabled
Oct  1 09:10:50 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]': finished
Oct  1 09:10:50 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]': finished
Oct  1 09:10:50 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]': finished
Oct  1 09:10:50 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e33 e33: 3 total, 3 up, 3 in
Oct  1 09:10:50 np0005464214 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e33: 3 total, 3 up, 3 in
Oct  1 09:10:50 np0005464214 ceph-mgr[75103]: [progress INFO root] update: starting ev 1a08c9f5-e1a5-4905-b8dc-113644a0448d (PG autoscaler increasing pool 4 PGs from 1 to 32)
Oct  1 09:10:50 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"} v 0) v1
Oct  1 09:10:50 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]: dispatch
Oct  1 09:10:50 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 33 pg[2.0( empty local-lis/les=17/18 n=0 ec=13/13 lis/c=17/17 les/c/f=18/18/0 sis=33 pruub=10.087410927s) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 active pruub 54.869903564s@ mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [2], acting_primary 2 -> 2, up_primary 2 -> 2, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:10:50 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 33 pg[2.0( empty local-lis/les=17/18 n=0 ec=13/13 lis/c=17/17 les/c/f=18/18/0 sis=33 pruub=10.087410927s) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 unknown pruub 54.869903564s@ mbc={}] state<Start>: transitioning to Primary
Oct  1 09:10:50 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]': finished
Oct  1 09:10:50 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]: dispatch
Oct  1 09:10:50 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct  1 09:10:50 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct  1 09:10:50 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]': finished
Oct  1 09:10:50 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]': finished
Oct  1 09:10:50 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]': finished
Oct  1 09:10:50 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]: dispatch
Oct  1 09:10:50 np0005464214 podman[99454]: 2025-10-01 13:10:50.621075734 +0000 UTC m=+0.053801080 container create 3353adedc7959a311ef50ae2379512f2ef6a4824238a52742a32cf032d9cd5b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_varahamihira, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3)
Oct  1 09:10:50 np0005464214 systemd[1]: Started libpod-conmon-3353adedc7959a311ef50ae2379512f2ef6a4824238a52742a32cf032d9cd5b4.scope.
Oct  1 09:10:50 np0005464214 podman[99454]: 2025-10-01 13:10:50.596017711 +0000 UTC m=+0.028743127 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 09:10:50 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:10:50 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/13c5e4fb695eb8422e80884a1177a8bdb783f843dece4d9530373f94eefd82b7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 09:10:50 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/13c5e4fb695eb8422e80884a1177a8bdb783f843dece4d9530373f94eefd82b7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 09:10:50 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/13c5e4fb695eb8422e80884a1177a8bdb783f843dece4d9530373f94eefd82b7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 09:10:50 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/13c5e4fb695eb8422e80884a1177a8bdb783f843dece4d9530373f94eefd82b7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 09:10:50 np0005464214 podman[99454]: 2025-10-01 13:10:50.736007006 +0000 UTC m=+0.168732372 container init 3353adedc7959a311ef50ae2379512f2ef6a4824238a52742a32cf032d9cd5b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_varahamihira, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 09:10:50 np0005464214 podman[99454]: 2025-10-01 13:10:50.744236657 +0000 UTC m=+0.176961993 container start 3353adedc7959a311ef50ae2379512f2ef6a4824238a52742a32cf032d9cd5b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_varahamihira, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 09:10:50 np0005464214 podman[99454]: 2025-10-01 13:10:50.751685394 +0000 UTC m=+0.184410740 container attach 3353adedc7959a311ef50ae2379512f2ef6a4824238a52742a32cf032d9cd5b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_varahamihira, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct  1 09:10:51 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.openstack"} v 0) v1
Oct  1 09:10:51 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2888623065' entity='client.admin' cmd=[{"prefix": "auth get", "entity": "client.openstack"}]: dispatch
Oct  1 09:10:51 np0005464214 admiring_haibt[99445]: [client.openstack]
Oct  1 09:10:51 np0005464214 admiring_haibt[99445]: #011key = AQCSJ91oAAAAABAAnrq6Xzc1a2WsnMS+ZR1nnw==
Oct  1 09:10:51 np0005464214 admiring_haibt[99445]: #011caps mgr = "allow *"
Oct  1 09:10:51 np0005464214 admiring_haibt[99445]: #011caps mon = "profile rbd"
Oct  1 09:10:51 np0005464214 admiring_haibt[99445]: #011caps osd = "profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups, profile rbd pool=images, profile rbd pool=cephfs.cephfs.meta, profile rbd pool=cephfs.cephfs.data"
Oct  1 09:10:51 np0005464214 systemd[1]: libpod-6ab382f24ad0e3a4de7fbbe66e549b63386e39db22dd0711b3f4f0687c875b2a.scope: Deactivated successfully.
Oct  1 09:10:51 np0005464214 podman[99412]: 2025-10-01 13:10:51.093641804 +0000 UTC m=+0.728254652 container died 6ab382f24ad0e3a4de7fbbe66e549b63386e39db22dd0711b3f4f0687c875b2a (image=quay.io/ceph/ceph:v18, name=admiring_haibt, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3)
Oct  1 09:10:51 np0005464214 systemd[1]: var-lib-containers-storage-overlay-a07e7015175848c67fbe936c57b50a1ad40b556f855f894c34ec18829b6f8cf9-merged.mount: Deactivated successfully.
Oct  1 09:10:51 np0005464214 podman[99412]: 2025-10-01 13:10:51.154992403 +0000 UTC m=+0.789605241 container remove 6ab382f24ad0e3a4de7fbbe66e549b63386e39db22dd0711b3f4f0687c875b2a (image=quay.io/ceph/ceph:v18, name=admiring_haibt, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS)
Oct  1 09:10:51 np0005464214 systemd[1]: libpod-conmon-6ab382f24ad0e3a4de7fbbe66e549b63386e39db22dd0711b3f4f0687c875b2a.scope: Deactivated successfully.
Oct  1 09:10:51 np0005464214 affectionate_varahamihira[99471]: {
Oct  1 09:10:51 np0005464214 affectionate_varahamihira[99471]:    "0": [
Oct  1 09:10:51 np0005464214 affectionate_varahamihira[99471]:        {
Oct  1 09:10:51 np0005464214 affectionate_varahamihira[99471]:            "devices": [
Oct  1 09:10:51 np0005464214 affectionate_varahamihira[99471]:                "/dev/loop3"
Oct  1 09:10:51 np0005464214 affectionate_varahamihira[99471]:            ],
Oct  1 09:10:51 np0005464214 affectionate_varahamihira[99471]:            "lv_name": "ceph_lv0",
Oct  1 09:10:51 np0005464214 affectionate_varahamihira[99471]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  1 09:10:51 np0005464214 affectionate_varahamihira[99471]:            "lv_size": "21470642176",
Oct  1 09:10:51 np0005464214 affectionate_varahamihira[99471]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 09:10:51 np0005464214 affectionate_varahamihira[99471]:            "lv_uuid": "wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS",
Oct  1 09:10:51 np0005464214 affectionate_varahamihira[99471]:            "name": "ceph_lv0",
Oct  1 09:10:51 np0005464214 affectionate_varahamihira[99471]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  1 09:10:51 np0005464214 affectionate_varahamihira[99471]:            "tags": {
Oct  1 09:10:51 np0005464214 affectionate_varahamihira[99471]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  1 09:10:51 np0005464214 affectionate_varahamihira[99471]:                "ceph.block_uuid": "wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS",
Oct  1 09:10:51 np0005464214 affectionate_varahamihira[99471]:                "ceph.cephx_lockbox_secret": "",
Oct  1 09:10:51 np0005464214 affectionate_varahamihira[99471]:                "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 09:10:51 np0005464214 affectionate_varahamihira[99471]:                "ceph.cluster_name": "ceph",
Oct  1 09:10:51 np0005464214 affectionate_varahamihira[99471]:                "ceph.crush_device_class": "",
Oct  1 09:10:51 np0005464214 affectionate_varahamihira[99471]:                "ceph.encrypted": "0",
Oct  1 09:10:51 np0005464214 affectionate_varahamihira[99471]:                "ceph.osd_fsid": "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982",
Oct  1 09:10:51 np0005464214 affectionate_varahamihira[99471]:                "ceph.osd_id": "0",
Oct  1 09:10:51 np0005464214 affectionate_varahamihira[99471]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 09:10:51 np0005464214 affectionate_varahamihira[99471]:                "ceph.type": "block",
Oct  1 09:10:51 np0005464214 affectionate_varahamihira[99471]:                "ceph.vdo": "0"
Oct  1 09:10:51 np0005464214 affectionate_varahamihira[99471]:            },
Oct  1 09:10:51 np0005464214 affectionate_varahamihira[99471]:            "type": "block",
Oct  1 09:10:51 np0005464214 affectionate_varahamihira[99471]:            "vg_name": "ceph_vg0"
Oct  1 09:10:51 np0005464214 affectionate_varahamihira[99471]:        }
Oct  1 09:10:51 np0005464214 affectionate_varahamihira[99471]:    ],
Oct  1 09:10:51 np0005464214 affectionate_varahamihira[99471]:    "1": [
Oct  1 09:10:51 np0005464214 affectionate_varahamihira[99471]:        {
Oct  1 09:10:51 np0005464214 affectionate_varahamihira[99471]:            "devices": [
Oct  1 09:10:51 np0005464214 affectionate_varahamihira[99471]:                "/dev/loop4"
Oct  1 09:10:51 np0005464214 affectionate_varahamihira[99471]:            ],
Oct  1 09:10:51 np0005464214 affectionate_varahamihira[99471]:            "lv_name": "ceph_lv1",
Oct  1 09:10:51 np0005464214 affectionate_varahamihira[99471]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  1 09:10:51 np0005464214 affectionate_varahamihira[99471]:            "lv_size": "21470642176",
Oct  1 09:10:51 np0005464214 affectionate_varahamihira[99471]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f5852bc7-e830-489a-b8a9-42dfbbe71426,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 09:10:51 np0005464214 affectionate_varahamihira[99471]:            "lv_uuid": "BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY",
Oct  1 09:10:51 np0005464214 affectionate_varahamihira[99471]:            "name": "ceph_lv1",
Oct  1 09:10:51 np0005464214 affectionate_varahamihira[99471]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  1 09:10:51 np0005464214 affectionate_varahamihira[99471]:            "tags": {
Oct  1 09:10:51 np0005464214 affectionate_varahamihira[99471]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  1 09:10:51 np0005464214 affectionate_varahamihira[99471]:                "ceph.block_uuid": "BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY",
Oct  1 09:10:51 np0005464214 affectionate_varahamihira[99471]:                "ceph.cephx_lockbox_secret": "",
Oct  1 09:10:51 np0005464214 affectionate_varahamihira[99471]:                "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 09:10:51 np0005464214 affectionate_varahamihira[99471]:                "ceph.cluster_name": "ceph",
Oct  1 09:10:51 np0005464214 affectionate_varahamihira[99471]:                "ceph.crush_device_class": "",
Oct  1 09:10:51 np0005464214 affectionate_varahamihira[99471]:                "ceph.encrypted": "0",
Oct  1 09:10:51 np0005464214 affectionate_varahamihira[99471]:                "ceph.osd_fsid": "f5852bc7-e830-489a-b8a9-42dfbbe71426",
Oct  1 09:10:51 np0005464214 affectionate_varahamihira[99471]:                "ceph.osd_id": "1",
Oct  1 09:10:51 np0005464214 affectionate_varahamihira[99471]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 09:10:51 np0005464214 affectionate_varahamihira[99471]:                "ceph.type": "block",
Oct  1 09:10:51 np0005464214 affectionate_varahamihira[99471]:                "ceph.vdo": "0"
Oct  1 09:10:51 np0005464214 affectionate_varahamihira[99471]:            },
Oct  1 09:10:51 np0005464214 affectionate_varahamihira[99471]:            "type": "block",
Oct  1 09:10:51 np0005464214 affectionate_varahamihira[99471]:            "vg_name": "ceph_vg1"
Oct  1 09:10:51 np0005464214 affectionate_varahamihira[99471]:        }
Oct  1 09:10:51 np0005464214 affectionate_varahamihira[99471]:    ],
Oct  1 09:10:51 np0005464214 affectionate_varahamihira[99471]:    "2": [
Oct  1 09:10:51 np0005464214 affectionate_varahamihira[99471]:        {
Oct  1 09:10:51 np0005464214 affectionate_varahamihira[99471]:            "devices": [
Oct  1 09:10:51 np0005464214 affectionate_varahamihira[99471]:                "/dev/loop5"
Oct  1 09:10:51 np0005464214 affectionate_varahamihira[99471]:            ],
Oct  1 09:10:51 np0005464214 affectionate_varahamihira[99471]:            "lv_name": "ceph_lv2",
Oct  1 09:10:51 np0005464214 affectionate_varahamihira[99471]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  1 09:10:51 np0005464214 affectionate_varahamihira[99471]:            "lv_size": "21470642176",
Oct  1 09:10:51 np0005464214 affectionate_varahamihira[99471]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c4c937e2-a8a8-47c3-af37-fdedb6fff1f9,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 09:10:51 np0005464214 affectionate_varahamihira[99471]:            "lv_uuid": "mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK",
Oct  1 09:10:51 np0005464214 affectionate_varahamihira[99471]:            "name": "ceph_lv2",
Oct  1 09:10:51 np0005464214 affectionate_varahamihira[99471]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  1 09:10:51 np0005464214 affectionate_varahamihira[99471]:            "tags": {
Oct  1 09:10:51 np0005464214 affectionate_varahamihira[99471]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  1 09:10:51 np0005464214 affectionate_varahamihira[99471]:                "ceph.block_uuid": "mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK",
Oct  1 09:10:51 np0005464214 affectionate_varahamihira[99471]:                "ceph.cephx_lockbox_secret": "",
Oct  1 09:10:51 np0005464214 affectionate_varahamihira[99471]:                "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 09:10:51 np0005464214 affectionate_varahamihira[99471]:                "ceph.cluster_name": "ceph",
Oct  1 09:10:51 np0005464214 affectionate_varahamihira[99471]:                "ceph.crush_device_class": "",
Oct  1 09:10:51 np0005464214 affectionate_varahamihira[99471]:                "ceph.encrypted": "0",
Oct  1 09:10:51 np0005464214 affectionate_varahamihira[99471]:                "ceph.osd_fsid": "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9",
Oct  1 09:10:51 np0005464214 affectionate_varahamihira[99471]:                "ceph.osd_id": "2",
Oct  1 09:10:51 np0005464214 affectionate_varahamihira[99471]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 09:10:51 np0005464214 affectionate_varahamihira[99471]:                "ceph.type": "block",
Oct  1 09:10:51 np0005464214 affectionate_varahamihira[99471]:                "ceph.vdo": "0"
Oct  1 09:10:51 np0005464214 affectionate_varahamihira[99471]:            },
Oct  1 09:10:51 np0005464214 affectionate_varahamihira[99471]:            "type": "block",
Oct  1 09:10:51 np0005464214 affectionate_varahamihira[99471]:            "vg_name": "ceph_vg2"
Oct  1 09:10:51 np0005464214 affectionate_varahamihira[99471]:        }
Oct  1 09:10:51 np0005464214 affectionate_varahamihira[99471]:    ]
Oct  1 09:10:51 np0005464214 affectionate_varahamihira[99471]: }
Oct  1 09:10:51 np0005464214 systemd[1]: libpod-3353adedc7959a311ef50ae2379512f2ef6a4824238a52742a32cf032d9cd5b4.scope: Deactivated successfully.
Oct  1 09:10:51 np0005464214 podman[99454]: 2025-10-01 13:10:51.521082988 +0000 UTC m=+0.953808424 container died 3353adedc7959a311ef50ae2379512f2ef6a4824238a52742a32cf032d9cd5b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_varahamihira, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 09:10:51 np0005464214 systemd[1]: var-lib-containers-storage-overlay-13c5e4fb695eb8422e80884a1177a8bdb783f843dece4d9530373f94eefd82b7-merged.mount: Deactivated successfully.
Oct  1 09:10:51 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e33 do_prune osdmap full prune enabled
Oct  1 09:10:51 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]': finished
Oct  1 09:10:51 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e34 e34: 3 total, 3 up, 3 in
Oct  1 09:10:51 np0005464214 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e34: 3 total, 3 up, 3 in
Oct  1 09:10:51 np0005464214 ceph-mgr[75103]: [progress INFO root] update: starting ev f1d7fae5-9ea8-4012-b34f-a26114a1e0b5 (PG autoscaler increasing pool 5 PGs from 1 to 32)
Oct  1 09:10:51 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"} v 0) v1
Oct  1 09:10:51 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"}]: dispatch
Oct  1 09:10:51 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 34 pg[2.1e( empty local-lis/les=17/18 n=0 ec=33/13 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:10:51 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 34 pg[2.1f( empty local-lis/les=17/18 n=0 ec=33/13 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:10:51 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 34 pg[2.1c( empty local-lis/les=17/18 n=0 ec=33/13 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:10:51 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 34 pg[2.b( empty local-lis/les=17/18 n=0 ec=33/13 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:10:51 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 34 pg[2.a( empty local-lis/les=17/18 n=0 ec=33/13 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:10:51 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 34 pg[2.9( empty local-lis/les=17/18 n=0 ec=33/13 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:10:51 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 34 pg[2.8( empty local-lis/les=17/18 n=0 ec=33/13 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:10:51 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 34 pg[2.6( empty local-lis/les=17/18 n=0 ec=33/13 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:10:51 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 34 pg[2.5( empty local-lis/les=17/18 n=0 ec=33/13 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:10:51 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 34 pg[2.4( empty local-lis/les=17/18 n=0 ec=33/13 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:10:51 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 34 pg[2.3( empty local-lis/les=17/18 n=0 ec=33/13 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:10:51 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 34 pg[2.2( empty local-lis/les=17/18 n=0 ec=33/13 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:10:51 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 34 pg[2.1( empty local-lis/les=17/18 n=0 ec=33/13 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:10:51 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 34 pg[2.7( empty local-lis/les=17/18 n=0 ec=33/13 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:10:51 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 34 pg[2.c( empty local-lis/les=17/18 n=0 ec=33/13 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:10:51 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 34 pg[2.d( empty local-lis/les=17/18 n=0 ec=33/13 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:10:51 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 34 pg[2.e( empty local-lis/les=17/18 n=0 ec=33/13 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:10:51 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 34 pg[2.f( empty local-lis/les=17/18 n=0 ec=33/13 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:10:51 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 34 pg[2.10( empty local-lis/les=17/18 n=0 ec=33/13 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:10:51 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 34 pg[2.11( empty local-lis/les=17/18 n=0 ec=33/13 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:10:51 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 34 pg[2.12( empty local-lis/les=17/18 n=0 ec=33/13 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:10:51 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 34 pg[2.13( empty local-lis/les=17/18 n=0 ec=33/13 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:10:51 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 34 pg[2.14( empty local-lis/les=17/18 n=0 ec=33/13 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:10:51 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 34 pg[2.15( empty local-lis/les=17/18 n=0 ec=33/13 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:10:51 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 34 pg[2.16( empty local-lis/les=17/18 n=0 ec=33/13 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:10:51 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 34 pg[2.17( empty local-lis/les=17/18 n=0 ec=33/13 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:10:51 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 34 pg[2.18( empty local-lis/les=17/18 n=0 ec=33/13 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:10:51 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 34 pg[2.19( empty local-lis/les=17/18 n=0 ec=33/13 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:10:51 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 34 pg[2.1a( empty local-lis/les=17/18 n=0 ec=33/13 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:10:51 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 34 pg[2.1d( empty local-lis/les=17/18 n=0 ec=33/13 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:10:51 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 34 pg[2.1b( empty local-lis/les=17/18 n=0 ec=33/13 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:10:51 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 34 pg[2.1f( empty local-lis/les=33/34 n=0 ec=33/13 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:10:51 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 34 pg[2.1c( empty local-lis/les=33/34 n=0 ec=33/13 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:10:51 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 34 pg[2.1e( empty local-lis/les=33/34 n=0 ec=33/13 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:10:51 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 34 pg[2.b( empty local-lis/les=33/34 n=0 ec=33/13 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:10:51 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 34 pg[2.a( empty local-lis/les=33/34 n=0 ec=33/13 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:10:51 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 34 pg[2.9( empty local-lis/les=33/34 n=0 ec=33/13 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:10:51 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 34 pg[2.8( empty local-lis/les=33/34 n=0 ec=33/13 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:10:51 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 34 pg[2.6( empty local-lis/les=33/34 n=0 ec=33/13 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:10:51 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 34 pg[2.5( empty local-lis/les=33/34 n=0 ec=33/13 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:10:51 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 34 pg[2.4( empty local-lis/les=33/34 n=0 ec=33/13 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:10:51 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 34 pg[2.3( empty local-lis/les=33/34 n=0 ec=33/13 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:10:51 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 34 pg[2.1( empty local-lis/les=33/34 n=0 ec=33/13 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:10:51 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 34 pg[2.2( empty local-lis/les=33/34 n=0 ec=33/13 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:10:51 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 34 pg[2.0( empty local-lis/les=33/34 n=0 ec=13/13 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:10:51 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 34 pg[2.c( empty local-lis/les=33/34 n=0 ec=33/13 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:10:51 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 34 pg[2.e( empty local-lis/les=33/34 n=0 ec=33/13 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:10:51 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 34 pg[2.d( empty local-lis/les=33/34 n=0 ec=33/13 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:10:51 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 34 pg[2.f( empty local-lis/les=33/34 n=0 ec=33/13 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:10:51 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 34 pg[2.10( empty local-lis/les=33/34 n=0 ec=33/13 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:10:51 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 34 pg[2.12( empty local-lis/les=33/34 n=0 ec=33/13 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:10:51 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 34 pg[2.11( empty local-lis/les=33/34 n=0 ec=33/13 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:10:51 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 34 pg[2.14( empty local-lis/les=33/34 n=0 ec=33/13 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:10:51 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 34 pg[2.7( empty local-lis/les=33/34 n=0 ec=33/13 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:10:51 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 34 pg[2.13( empty local-lis/les=33/34 n=0 ec=33/13 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:10:51 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 34 pg[2.15( empty local-lis/les=33/34 n=0 ec=33/13 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:10:51 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 34 pg[2.16( empty local-lis/les=33/34 n=0 ec=33/13 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:10:51 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 34 pg[2.18( empty local-lis/les=33/34 n=0 ec=33/13 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:10:51 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 34 pg[2.17( empty local-lis/les=33/34 n=0 ec=33/13 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:10:51 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 34 pg[2.19( empty local-lis/les=33/34 n=0 ec=33/13 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:10:51 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 34 pg[2.1a( empty local-lis/les=33/34 n=0 ec=33/13 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:10:51 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 34 pg[2.1d( empty local-lis/les=33/34 n=0 ec=33/13 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:10:51 np0005464214 podman[99454]: 2025-10-01 13:10:51.60119233 +0000 UTC m=+1.033917706 container remove 3353adedc7959a311ef50ae2379512f2ef6a4824238a52742a32cf032d9cd5b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_varahamihira, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507)
Oct  1 09:10:51 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 34 pg[2.1b( empty local-lis/les=33/34 n=0 ec=33/13 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:10:51 np0005464214 systemd[1]: libpod-conmon-3353adedc7959a311ef50ae2379512f2ef6a4824238a52742a32cf032d9cd5b4.scope: Deactivated successfully.
Oct  1 09:10:51 np0005464214 ceph-mon[74802]: from='client.? 192.168.122.100:0/2888623065' entity='client.admin' cmd=[{"prefix": "auth get", "entity": "client.openstack"}]: dispatch
Oct  1 09:10:51 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]': finished
Oct  1 09:10:51 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"}]: dispatch
Oct  1 09:10:51 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v87: 69 pgs: 62 unknown, 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:10:51 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"} v 0) v1
Oct  1 09:10:51 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct  1 09:10:51 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"} v 0) v1
Oct  1 09:10:51 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct  1 09:10:51 np0005464214 ceph-osd[90500]: log_channel(cluster) log [DBG] : 2.1 scrub starts
Oct  1 09:10:51 np0005464214 ceph-osd[90500]: log_channel(cluster) log [DBG] : 2.1 scrub ok
Oct  1 09:10:52 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 33 pg[3.0( empty local-lis/les=15/16 n=0 ec=15/15 lis/c=15/15 les/c/f=16/16/0 sis=33 pruub=14.442940712s) [1] r=0 lpr=33 pi=[15,33)/1 crt=0'0 mlcod 0'0 active pruub 65.777542114s@ mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:10:52 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 34 pg[3.0( empty local-lis/les=15/16 n=0 ec=15/15 lis/c=15/15 les/c/f=16/16/0 sis=33 pruub=14.442940712s) [1] r=0 lpr=33 pi=[15,33)/1 crt=0'0 mlcod 0'0 unknown pruub 65.777542114s@ mbc={}] state<Start>: transitioning to Primary
Oct  1 09:10:52 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 34 pg[3.3( empty local-lis/les=15/16 n=0 ec=33/15 lis/c=15/15 les/c/f=16/16/0 sis=33) [1] r=0 lpr=33 pi=[15,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:10:52 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 34 pg[3.4( empty local-lis/les=15/16 n=0 ec=33/15 lis/c=15/15 les/c/f=16/16/0 sis=33) [1] r=0 lpr=33 pi=[15,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:10:52 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 34 pg[3.7( empty local-lis/les=15/16 n=0 ec=33/15 lis/c=15/15 les/c/f=16/16/0 sis=33) [1] r=0 lpr=33 pi=[15,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:10:52 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 34 pg[3.8( empty local-lis/les=15/16 n=0 ec=33/15 lis/c=15/15 les/c/f=16/16/0 sis=33) [1] r=0 lpr=33 pi=[15,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:10:52 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 34 pg[3.1( empty local-lis/les=15/16 n=0 ec=33/15 lis/c=15/15 les/c/f=16/16/0 sis=33) [1] r=0 lpr=33 pi=[15,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:10:52 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 34 pg[3.2( empty local-lis/les=15/16 n=0 ec=33/15 lis/c=15/15 les/c/f=16/16/0 sis=33) [1] r=0 lpr=33 pi=[15,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:10:52 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 34 pg[3.b( empty local-lis/les=15/16 n=0 ec=33/15 lis/c=15/15 les/c/f=16/16/0 sis=33) [1] r=0 lpr=33 pi=[15,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:10:52 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 34 pg[3.c( empty local-lis/les=15/16 n=0 ec=33/15 lis/c=15/15 les/c/f=16/16/0 sis=33) [1] r=0 lpr=33 pi=[15,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:10:52 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 34 pg[3.5( empty local-lis/les=15/16 n=0 ec=33/15 lis/c=15/15 les/c/f=16/16/0 sis=33) [1] r=0 lpr=33 pi=[15,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:10:52 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 34 pg[3.6( empty local-lis/les=15/16 n=0 ec=33/15 lis/c=15/15 les/c/f=16/16/0 sis=33) [1] r=0 lpr=33 pi=[15,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:10:52 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 34 pg[3.9( empty local-lis/les=15/16 n=0 ec=33/15 lis/c=15/15 les/c/f=16/16/0 sis=33) [1] r=0 lpr=33 pi=[15,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:10:52 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 34 pg[3.a( empty local-lis/les=15/16 n=0 ec=33/15 lis/c=15/15 les/c/f=16/16/0 sis=33) [1] r=0 lpr=33 pi=[15,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:10:52 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 34 pg[3.17( empty local-lis/les=15/16 n=0 ec=33/15 lis/c=15/15 les/c/f=16/16/0 sis=33) [1] r=0 lpr=33 pi=[15,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:10:52 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 34 pg[3.18( empty local-lis/les=15/16 n=0 ec=33/15 lis/c=15/15 les/c/f=16/16/0 sis=33) [1] r=0 lpr=33 pi=[15,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:10:52 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 34 pg[3.1a( empty local-lis/les=15/16 n=0 ec=33/15 lis/c=15/15 les/c/f=16/16/0 sis=33) [1] r=0 lpr=33 pi=[15,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:10:52 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 34 pg[3.15( empty local-lis/les=15/16 n=0 ec=33/15 lis/c=15/15 les/c/f=16/16/0 sis=33) [1] r=0 lpr=33 pi=[15,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:10:52 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 34 pg[3.19( empty local-lis/les=15/16 n=0 ec=33/15 lis/c=15/15 les/c/f=16/16/0 sis=33) [1] r=0 lpr=33 pi=[15,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:10:52 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 34 pg[3.16( empty local-lis/les=15/16 n=0 ec=33/15 lis/c=15/15 les/c/f=16/16/0 sis=33) [1] r=0 lpr=33 pi=[15,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:10:52 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 34 pg[3.13( empty local-lis/les=15/16 n=0 ec=33/15 lis/c=15/15 les/c/f=16/16/0 sis=33) [1] r=0 lpr=33 pi=[15,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:10:52 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 34 pg[3.14( empty local-lis/les=15/16 n=0 ec=33/15 lis/c=15/15 les/c/f=16/16/0 sis=33) [1] r=0 lpr=33 pi=[15,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:10:52 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 34 pg[3.1b( empty local-lis/les=15/16 n=0 ec=33/15 lis/c=15/15 les/c/f=16/16/0 sis=33) [1] r=0 lpr=33 pi=[15,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:10:52 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 34 pg[3.1c( empty local-lis/les=15/16 n=0 ec=33/15 lis/c=15/15 les/c/f=16/16/0 sis=33) [1] r=0 lpr=33 pi=[15,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:10:52 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 34 pg[3.1f( empty local-lis/les=15/16 n=0 ec=33/15 lis/c=15/15 les/c/f=16/16/0 sis=33) [1] r=0 lpr=33 pi=[15,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:10:52 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 34 pg[3.1d( empty local-lis/les=15/16 n=0 ec=33/15 lis/c=15/15 les/c/f=16/16/0 sis=33) [1] r=0 lpr=33 pi=[15,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:10:52 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 34 pg[3.1e( empty local-lis/les=15/16 n=0 ec=33/15 lis/c=15/15 les/c/f=16/16/0 sis=33) [1] r=0 lpr=33 pi=[15,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:10:52 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 34 pg[3.f( empty local-lis/les=15/16 n=0 ec=33/15 lis/c=15/15 les/c/f=16/16/0 sis=33) [1] r=0 lpr=33 pi=[15,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:10:52 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 34 pg[3.10( empty local-lis/les=15/16 n=0 ec=33/15 lis/c=15/15 les/c/f=16/16/0 sis=33) [1] r=0 lpr=33 pi=[15,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:10:52 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 34 pg[3.11( empty local-lis/les=15/16 n=0 ec=33/15 lis/c=15/15 les/c/f=16/16/0 sis=33) [1] r=0 lpr=33 pi=[15,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:10:52 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 34 pg[3.12( empty local-lis/les=15/16 n=0 ec=33/15 lis/c=15/15 les/c/f=16/16/0 sis=33) [1] r=0 lpr=33 pi=[15,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:10:52 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 34 pg[3.d( empty local-lis/les=15/16 n=0 ec=33/15 lis/c=15/15 les/c/f=16/16/0 sis=33) [1] r=0 lpr=33 pi=[15,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:10:52 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 34 pg[3.e( empty local-lis/les=15/16 n=0 ec=33/15 lis/c=15/15 les/c/f=16/16/0 sis=33) [1] r=0 lpr=33 pi=[15,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:10:52 np0005464214 podman[99715]: 2025-10-01 13:10:52.476836702 +0000 UTC m=+0.065560260 container create 5e711ca65357e43e04b0b6c32709772ff5374f15f7b3813e1673761b87d8171b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_cohen, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Oct  1 09:10:52 np0005464214 systemd[1]: Started libpod-conmon-5e711ca65357e43e04b0b6c32709772ff5374f15f7b3813e1673761b87d8171b.scope.
Oct  1 09:10:52 np0005464214 podman[99715]: 2025-10-01 13:10:52.452332704 +0000 UTC m=+0.041056262 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 09:10:52 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:10:52 np0005464214 podman[99715]: 2025-10-01 13:10:52.575272861 +0000 UTC m=+0.163996419 container init 5e711ca65357e43e04b0b6c32709772ff5374f15f7b3813e1673761b87d8171b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_cohen, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct  1 09:10:52 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e34 do_prune osdmap full prune enabled
Oct  1 09:10:52 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"}]': finished
Oct  1 09:10:52 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]': finished
Oct  1 09:10:52 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]': finished
Oct  1 09:10:52 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e35 e35: 3 total, 3 up, 3 in
Oct  1 09:10:52 np0005464214 podman[99715]: 2025-10-01 13:10:52.586274536 +0000 UTC m=+0.174998084 container start 5e711ca65357e43e04b0b6c32709772ff5374f15f7b3813e1673761b87d8171b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_cohen, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 09:10:52 np0005464214 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e35: 3 total, 3 up, 3 in
Oct  1 09:10:52 np0005464214 podman[99715]: 2025-10-01 13:10:52.590050601 +0000 UTC m=+0.178774149 container attach 5e711ca65357e43e04b0b6c32709772ff5374f15f7b3813e1673761b87d8171b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_cohen, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 09:10:52 np0005464214 ceph-mgr[75103]: [progress INFO root] update: starting ev 7ff654ee-e209-44ce-afe0-0a75c7b339bf (PG autoscaler increasing pool 6 PGs from 1 to 16)
Oct  1 09:10:52 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"} v 0) v1
Oct  1 09:10:52 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]: dispatch
Oct  1 09:10:52 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 35 pg[5.0( empty local-lis/les=19/20 n=0 ec=19/19 lis/c=19/19 les/c/f=20/20/0 sis=35 pruub=10.103324890s) [2] r=0 lpr=35 pi=[19,35)/1 crt=0'0 mlcod 0'0 active pruub 56.894187927s@ mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [2], acting_primary 2 -> 2, up_primary 2 -> 2, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:10:52 np0005464214 cool_cohen[99757]: 167 167
Oct  1 09:10:52 np0005464214 systemd[1]: libpod-5e711ca65357e43e04b0b6c32709772ff5374f15f7b3813e1673761b87d8171b.scope: Deactivated successfully.
Oct  1 09:10:52 np0005464214 conmon[99757]: conmon 5e711ca65357e43e04b0 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-5e711ca65357e43e04b0b6c32709772ff5374f15f7b3813e1673761b87d8171b.scope/container/memory.events
Oct  1 09:10:52 np0005464214 podman[99715]: 2025-10-01 13:10:52.597652723 +0000 UTC m=+0.186376261 container died 5e711ca65357e43e04b0b6c32709772ff5374f15f7b3813e1673761b87d8171b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_cohen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 09:10:52 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 35 pg[5.0( empty local-lis/les=19/20 n=0 ec=19/19 lis/c=19/19 les/c/f=20/20/0 sis=35 pruub=10.103324890s) [2] r=0 lpr=35 pi=[19,35)/1 crt=0'0 mlcod 0'0 unknown pruub 56.894187927s@ mbc={}] state<Start>: transitioning to Primary
Oct  1 09:10:52 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 35 pg[3.1e( empty local-lis/les=33/35 n=0 ec=33/15 lis/c=15/15 les/c/f=16/16/0 sis=33) [1] r=0 lpr=33 pi=[15,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:10:52 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 35 pg[3.1d( empty local-lis/les=33/35 n=0 ec=33/15 lis/c=15/15 les/c/f=16/16/0 sis=33) [1] r=0 lpr=33 pi=[15,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:10:52 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 35 pg[3.a( empty local-lis/les=33/35 n=0 ec=33/15 lis/c=15/15 les/c/f=16/16/0 sis=33) [1] r=0 lpr=33 pi=[15,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:10:52 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 35 pg[3.1b( empty local-lis/les=33/35 n=0 ec=33/15 lis/c=15/15 les/c/f=16/16/0 sis=33) [1] r=0 lpr=33 pi=[15,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:10:52 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 35 pg[3.1c( empty local-lis/les=33/35 n=0 ec=33/15 lis/c=15/15 les/c/f=16/16/0 sis=33) [1] r=0 lpr=33 pi=[15,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:10:52 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 35 pg[3.1f( empty local-lis/les=33/35 n=0 ec=33/15 lis/c=15/15 les/c/f=16/16/0 sis=33) [1] r=0 lpr=33 pi=[15,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:10:52 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 35 pg[3.8( empty local-lis/les=33/35 n=0 ec=33/15 lis/c=15/15 les/c/f=16/16/0 sis=33) [1] r=0 lpr=33 pi=[15,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:10:52 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 35 pg[3.7( empty local-lis/les=33/35 n=0 ec=33/15 lis/c=15/15 les/c/f=16/16/0 sis=33) [1] r=0 lpr=33 pi=[15,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:10:52 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 35 pg[3.6( empty local-lis/les=33/35 n=0 ec=33/15 lis/c=15/15 les/c/f=16/16/0 sis=33) [1] r=0 lpr=33 pi=[15,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:10:52 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 35 pg[3.9( empty local-lis/les=33/35 n=0 ec=33/15 lis/c=15/15 les/c/f=16/16/0 sis=33) [1] r=0 lpr=33 pi=[15,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:10:52 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 35 pg[3.5( empty local-lis/les=33/35 n=0 ec=33/15 lis/c=15/15 les/c/f=16/16/0 sis=33) [1] r=0 lpr=33 pi=[15,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:10:52 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 35 pg[3.3( empty local-lis/les=33/35 n=0 ec=33/15 lis/c=15/15 les/c/f=16/16/0 sis=33) [1] r=0 lpr=33 pi=[15,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:10:52 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 35 pg[3.4( empty local-lis/les=33/35 n=0 ec=33/15 lis/c=15/15 les/c/f=16/16/0 sis=33) [1] r=0 lpr=33 pi=[15,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:10:52 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 35 pg[3.0( empty local-lis/les=33/35 n=0 ec=15/15 lis/c=15/15 les/c/f=16/16/0 sis=33) [1] r=0 lpr=33 pi=[15,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:10:52 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 35 pg[3.2( empty local-lis/les=33/35 n=0 ec=33/15 lis/c=15/15 les/c/f=16/16/0 sis=33) [1] r=0 lpr=33 pi=[15,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:10:52 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 35 pg[3.b( empty local-lis/les=33/35 n=0 ec=33/15 lis/c=15/15 les/c/f=16/16/0 sis=33) [1] r=0 lpr=33 pi=[15,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:10:52 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 35 pg[3.c( empty local-lis/les=33/35 n=0 ec=33/15 lis/c=15/15 les/c/f=16/16/0 sis=33) [1] r=0 lpr=33 pi=[15,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:10:52 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 35 pg[3.f( empty local-lis/les=33/35 n=0 ec=33/15 lis/c=15/15 les/c/f=16/16/0 sis=33) [1] r=0 lpr=33 pi=[15,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:10:52 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 35 pg[3.e( empty local-lis/les=33/35 n=0 ec=33/15 lis/c=15/15 les/c/f=16/16/0 sis=33) [1] r=0 lpr=33 pi=[15,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:10:52 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 35 pg[3.d( empty local-lis/les=33/35 n=0 ec=33/15 lis/c=15/15 les/c/f=16/16/0 sis=33) [1] r=0 lpr=33 pi=[15,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:10:52 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 35 pg[3.1( empty local-lis/les=33/35 n=0 ec=33/15 lis/c=15/15 les/c/f=16/16/0 sis=33) [1] r=0 lpr=33 pi=[15,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:10:52 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 35 pg[3.10( empty local-lis/les=33/35 n=0 ec=33/15 lis/c=15/15 les/c/f=16/16/0 sis=33) [1] r=0 lpr=33 pi=[15,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:10:52 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 35 pg[3.12( empty local-lis/les=33/35 n=0 ec=33/15 lis/c=15/15 les/c/f=16/16/0 sis=33) [1] r=0 lpr=33 pi=[15,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:10:52 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 35 pg[3.11( empty local-lis/les=33/35 n=0 ec=33/15 lis/c=15/15 les/c/f=16/16/0 sis=33) [1] r=0 lpr=33 pi=[15,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:10:52 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 35 pg[3.13( empty local-lis/les=33/35 n=0 ec=33/15 lis/c=15/15 les/c/f=16/16/0 sis=33) [1] r=0 lpr=33 pi=[15,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:10:52 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 35 pg[3.16( empty local-lis/les=33/35 n=0 ec=33/15 lis/c=15/15 les/c/f=16/16/0 sis=33) [1] r=0 lpr=33 pi=[15,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:10:52 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 35 pg[3.15( empty local-lis/les=33/35 n=0 ec=33/15 lis/c=15/15 les/c/f=16/16/0 sis=33) [1] r=0 lpr=33 pi=[15,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:10:52 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 35 pg[3.14( empty local-lis/les=33/35 n=0 ec=33/15 lis/c=15/15 les/c/f=16/16/0 sis=33) [1] r=0 lpr=33 pi=[15,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:10:52 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 35 pg[3.17( empty local-lis/les=33/35 n=0 ec=33/15 lis/c=15/15 les/c/f=16/16/0 sis=33) [1] r=0 lpr=33 pi=[15,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:10:52 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 35 pg[3.18( empty local-lis/les=33/35 n=0 ec=33/15 lis/c=15/15 les/c/f=16/16/0 sis=33) [1] r=0 lpr=33 pi=[15,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:10:52 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 35 pg[3.1a( empty local-lis/les=33/35 n=0 ec=33/15 lis/c=15/15 les/c/f=16/16/0 sis=33) [1] r=0 lpr=33 pi=[15,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:10:52 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 35 pg[3.19( empty local-lis/les=33/35 n=0 ec=33/15 lis/c=15/15 les/c/f=16/16/0 sis=33) [1] r=0 lpr=33 pi=[15,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:10:52 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct  1 09:10:52 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct  1 09:10:52 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"}]': finished
Oct  1 09:10:52 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]': finished
Oct  1 09:10:52 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]': finished
Oct  1 09:10:52 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]: dispatch
Oct  1 09:10:52 np0005464214 systemd[1]: var-lib-containers-storage-overlay-9498807502cc90d7619902696b3fb4adba902aaf4d1d0954f708e117b80e18c7-merged.mount: Deactivated successfully.
Oct  1 09:10:52 np0005464214 podman[99715]: 2025-10-01 13:10:52.667209923 +0000 UTC m=+0.255933491 container remove 5e711ca65357e43e04b0b6c32709772ff5374f15f7b3813e1673761b87d8171b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_cohen, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct  1 09:10:52 np0005464214 systemd[1]: libpod-conmon-5e711ca65357e43e04b0b6c32709772ff5374f15f7b3813e1673761b87d8171b.scope: Deactivated successfully.
Oct  1 09:10:52 np0005464214 ceph-mgr[75103]: [progress WARNING root] Starting Global Recovery Event,93 pgs not in active + clean state
Oct  1 09:10:52 np0005464214 podman[99857]: 2025-10-01 13:10:52.844910817 +0000 UTC m=+0.054225993 container create 1d8d5de87cb846f7167de95b71ceb078712f30cf01f332d8b3093a3c8deccc63 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_ptolemy, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 09:10:52 np0005464214 systemd[1]: Started libpod-conmon-1d8d5de87cb846f7167de95b71ceb078712f30cf01f332d8b3093a3c8deccc63.scope.
Oct  1 09:10:52 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:10:52 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/25d2fe462b8ee2ae7940a809b1efe1fe88bebbc4c606093d37ac10a1d6f01038/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 09:10:52 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/25d2fe462b8ee2ae7940a809b1efe1fe88bebbc4c606093d37ac10a1d6f01038/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 09:10:52 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/25d2fe462b8ee2ae7940a809b1efe1fe88bebbc4c606093d37ac10a1d6f01038/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 09:10:52 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/25d2fe462b8ee2ae7940a809b1efe1fe88bebbc4c606093d37ac10a1d6f01038/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 09:10:52 np0005464214 podman[99857]: 2025-10-01 13:10:52.819869584 +0000 UTC m=+0.029184790 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 09:10:52 np0005464214 podman[99857]: 2025-10-01 13:10:52.920748558 +0000 UTC m=+0.130063754 container init 1d8d5de87cb846f7167de95b71ceb078712f30cf01f332d8b3093a3c8deccc63 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_ptolemy, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 09:10:52 np0005464214 podman[99857]: 2025-10-01 13:10:52.930188156 +0000 UTC m=+0.139503312 container start 1d8d5de87cb846f7167de95b71ceb078712f30cf01f332d8b3093a3c8deccc63 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_ptolemy, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507)
Oct  1 09:10:52 np0005464214 podman[99857]: 2025-10-01 13:10:52.940761548 +0000 UTC m=+0.150076704 container attach 1d8d5de87cb846f7167de95b71ceb078712f30cf01f332d8b3093a3c8deccc63 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_ptolemy, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 09:10:52 np0005464214 ansible-async_wrapper.py[99859]: Invoked with j241018028218 30 /home/zuul/.ansible/tmp/ansible-tmp-1759324252.3191338-34018-32461601622966/AnsiballZ_command.py _
Oct  1 09:10:52 np0005464214 ansible-async_wrapper.py[99882]: Starting module and watcher
Oct  1 09:10:52 np0005464214 ansible-async_wrapper.py[99882]: Start watching 99883 (30)
Oct  1 09:10:52 np0005464214 ansible-async_wrapper.py[99883]: Start module (99883)
Oct  1 09:10:52 np0005464214 ansible-async_wrapper.py[99859]: Return async_wrapper task started.
Oct  1 09:10:53 np0005464214 python3[99884]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid eb4b6ead-01d1-53b3-a52a-47dcc600555f -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  1 09:10:53 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 35 pg[4.0( empty local-lis/les=17/18 n=0 ec=17/17 lis/c=17/17 les/c/f=18/18/0 sis=35 pruub=15.529828072s) [0] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 active pruub 72.803611755s@ mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:10:53 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 35 pg[4.0( empty local-lis/les=17/18 n=0 ec=17/17 lis/c=17/17 les/c/f=18/18/0 sis=35 pruub=15.529828072s) [0] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 unknown pruub 72.803611755s@ mbc={}] state<Start>: transitioning to Primary
Oct  1 09:10:53 np0005464214 podman[99885]: 2025-10-01 13:10:53.137686539 +0000 UTC m=+0.042187027 container create 09b545cc209ffc7968a0816fabf774c302f495c092c1c5660eacb05e1917c2bf (image=quay.io/ceph/ceph:v18, name=stoic_agnesi, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct  1 09:10:53 np0005464214 systemd[1]: Started libpod-conmon-09b545cc209ffc7968a0816fabf774c302f495c092c1c5660eacb05e1917c2bf.scope.
Oct  1 09:10:53 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:10:53 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff836bdadaf4942ca8a343538bd4f5c1003061498890f0f5cc1d509404c483bc/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 09:10:53 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff836bdadaf4942ca8a343538bd4f5c1003061498890f0f5cc1d509404c483bc/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 09:10:53 np0005464214 podman[99885]: 2025-10-01 13:10:53.204092402 +0000 UTC m=+0.108592900 container init 09b545cc209ffc7968a0816fabf774c302f495c092c1c5660eacb05e1917c2bf (image=quay.io/ceph/ceph:v18, name=stoic_agnesi, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Oct  1 09:10:53 np0005464214 podman[99885]: 2025-10-01 13:10:53.209263309 +0000 UTC m=+0.113763807 container start 09b545cc209ffc7968a0816fabf774c302f495c092c1c5660eacb05e1917c2bf (image=quay.io/ceph/ceph:v18, name=stoic_agnesi, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct  1 09:10:53 np0005464214 podman[99885]: 2025-10-01 13:10:53.212837828 +0000 UTC m=+0.117338326 container attach 09b545cc209ffc7968a0816fabf774c302f495c092c1c5660eacb05e1917c2bf (image=quay.io/ceph/ceph:v18, name=stoic_agnesi, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 09:10:53 np0005464214 podman[99885]: 2025-10-01 13:10:53.118161414 +0000 UTC m=+0.022661932 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  1 09:10:53 np0005464214 ceph-osd[89484]: log_channel(cluster) log [DBG] : 3.1 deep-scrub starts
Oct  1 09:10:53 np0005464214 ceph-osd[89484]: log_channel(cluster) log [DBG] : 3.1 deep-scrub ok
Oct  1 09:10:53 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e35 do_prune osdmap full prune enabled
Oct  1 09:10:53 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]': finished
Oct  1 09:10:53 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e36 e36: 3 total, 3 up, 3 in
Oct  1 09:10:53 np0005464214 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e36: 3 total, 3 up, 3 in
Oct  1 09:10:53 np0005464214 ceph-mgr[75103]: [progress INFO root] update: starting ev 6a0a56e0-046b-4d78-8b2c-daaeb707fe2a (PG autoscaler increasing pool 7 PGs from 1 to 32)
Oct  1 09:10:53 np0005464214 ceph-mgr[75103]: [progress INFO root] complete: finished ev c7a299eb-1fe2-40d1-b8f9-439c2ff29ac3 (PG autoscaler increasing pool 2 PGs from 1 to 32)
Oct  1 09:10:53 np0005464214 ceph-mgr[75103]: [progress INFO root] Completed event c7a299eb-1fe2-40d1-b8f9-439c2ff29ac3 (PG autoscaler increasing pool 2 PGs from 1 to 32) in 5 seconds
Oct  1 09:10:53 np0005464214 ceph-mgr[75103]: [progress INFO root] complete: finished ev c537a440-1190-425e-99dc-5e76a685055c (PG autoscaler increasing pool 3 PGs from 1 to 32)
Oct  1 09:10:53 np0005464214 ceph-mgr[75103]: [progress INFO root] Completed event c537a440-1190-425e-99dc-5e76a685055c (PG autoscaler increasing pool 3 PGs from 1 to 32) in 4 seconds
Oct  1 09:10:53 np0005464214 ceph-mgr[75103]: [progress INFO root] complete: finished ev 1a08c9f5-e1a5-4905-b8dc-113644a0448d (PG autoscaler increasing pool 4 PGs from 1 to 32)
Oct  1 09:10:53 np0005464214 ceph-mgr[75103]: [progress INFO root] Completed event 1a08c9f5-e1a5-4905-b8dc-113644a0448d (PG autoscaler increasing pool 4 PGs from 1 to 32) in 3 seconds
Oct  1 09:10:53 np0005464214 ceph-mgr[75103]: [progress INFO root] complete: finished ev f1d7fae5-9ea8-4012-b34f-a26114a1e0b5 (PG autoscaler increasing pool 5 PGs from 1 to 32)
Oct  1 09:10:53 np0005464214 ceph-mgr[75103]: [progress INFO root] Completed event f1d7fae5-9ea8-4012-b34f-a26114a1e0b5 (PG autoscaler increasing pool 5 PGs from 1 to 32) in 2 seconds
Oct  1 09:10:53 np0005464214 ceph-mgr[75103]: [progress INFO root] complete: finished ev 7ff654ee-e209-44ce-afe0-0a75c7b339bf (PG autoscaler increasing pool 6 PGs from 1 to 16)
Oct  1 09:10:53 np0005464214 ceph-mgr[75103]: [progress INFO root] Completed event 7ff654ee-e209-44ce-afe0-0a75c7b339bf (PG autoscaler increasing pool 6 PGs from 1 to 16) in 1 seconds
Oct  1 09:10:53 np0005464214 ceph-mgr[75103]: [progress INFO root] complete: finished ev 6a0a56e0-046b-4d78-8b2c-daaeb707fe2a (PG autoscaler increasing pool 7 PGs from 1 to 32)
Oct  1 09:10:53 np0005464214 ceph-mgr[75103]: [progress INFO root] Completed event 6a0a56e0-046b-4d78-8b2c-daaeb707fe2a (PG autoscaler increasing pool 7 PGs from 1 to 32) in 0 seconds
Oct  1 09:10:53 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 36 pg[4.1e( empty local-lis/les=17/18 n=0 ec=35/17 lis/c=17/17 les/c/f=18/18/0 sis=35) [0] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:10:53 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 36 pg[4.1f( empty local-lis/les=17/18 n=0 ec=35/17 lis/c=17/17 les/c/f=18/18/0 sis=35) [0] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:10:53 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 36 pg[4.1d( empty local-lis/les=17/18 n=0 ec=35/17 lis/c=17/17 les/c/f=18/18/0 sis=35) [0] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:10:53 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 36 pg[4.1c( empty local-lis/les=17/18 n=0 ec=35/17 lis/c=17/17 les/c/f=18/18/0 sis=35) [0] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:10:53 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 36 pg[4.8( empty local-lis/les=17/18 n=0 ec=35/17 lis/c=17/17 les/c/f=18/18/0 sis=35) [0] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:10:53 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 36 pg[4.7( empty local-lis/les=17/18 n=0 ec=35/17 lis/c=17/17 les/c/f=18/18/0 sis=35) [0] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:10:53 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 36 pg[4.b( empty local-lis/les=17/18 n=0 ec=35/17 lis/c=17/17 les/c/f=18/18/0 sis=35) [0] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:10:53 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 36 pg[4.5( empty local-lis/les=17/18 n=0 ec=35/17 lis/c=17/17 les/c/f=18/18/0 sis=35) [0] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:10:53 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 36 pg[4.a( empty local-lis/les=17/18 n=0 ec=35/17 lis/c=17/17 les/c/f=18/18/0 sis=35) [0] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:10:53 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 36 pg[4.1b( empty local-lis/les=17/18 n=0 ec=35/17 lis/c=17/17 les/c/f=18/18/0 sis=35) [0] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:10:53 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 36 pg[4.1a( empty local-lis/les=17/18 n=0 ec=35/17 lis/c=17/17 les/c/f=18/18/0 sis=35) [0] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:10:53 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 36 pg[4.6( empty local-lis/les=17/18 n=0 ec=35/17 lis/c=17/17 les/c/f=18/18/0 sis=35) [0] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:10:53 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 36 pg[4.4( empty local-lis/les=17/18 n=0 ec=35/17 lis/c=17/17 les/c/f=18/18/0 sis=35) [0] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:10:53 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 36 pg[4.19( empty local-lis/les=17/18 n=0 ec=35/17 lis/c=17/17 les/c/f=18/18/0 sis=35) [0] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:10:53 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 36 pg[4.9( empty local-lis/les=17/18 n=0 ec=35/17 lis/c=17/17 les/c/f=18/18/0 sis=35) [0] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:10:53 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 36 pg[4.1( empty local-lis/les=17/18 n=0 ec=35/17 lis/c=17/17 les/c/f=18/18/0 sis=35) [0] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:10:53 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 36 pg[4.3( empty local-lis/les=17/18 n=0 ec=35/17 lis/c=17/17 les/c/f=18/18/0 sis=35) [0] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:10:53 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 36 pg[4.2( empty local-lis/les=17/18 n=0 ec=35/17 lis/c=17/17 les/c/f=18/18/0 sis=35) [0] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:10:53 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 36 pg[4.c( empty local-lis/les=17/18 n=0 ec=35/17 lis/c=17/17 les/c/f=18/18/0 sis=35) [0] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:10:53 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 36 pg[4.f( empty local-lis/les=17/18 n=0 ec=35/17 lis/c=17/17 les/c/f=18/18/0 sis=35) [0] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:10:53 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 36 pg[4.e( empty local-lis/les=17/18 n=0 ec=35/17 lis/c=17/17 les/c/f=18/18/0 sis=35) [0] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:10:53 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 36 pg[4.10( empty local-lis/les=17/18 n=0 ec=35/17 lis/c=17/17 les/c/f=18/18/0 sis=35) [0] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:10:53 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 36 pg[4.11( empty local-lis/les=17/18 n=0 ec=35/17 lis/c=17/17 les/c/f=18/18/0 sis=35) [0] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:10:53 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 36 pg[4.d( empty local-lis/les=17/18 n=0 ec=35/17 lis/c=17/17 les/c/f=18/18/0 sis=35) [0] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:10:53 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 36 pg[4.13( empty local-lis/les=17/18 n=0 ec=35/17 lis/c=17/17 les/c/f=18/18/0 sis=35) [0] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:10:53 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 36 pg[4.12( empty local-lis/les=17/18 n=0 ec=35/17 lis/c=17/17 les/c/f=18/18/0 sis=35) [0] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:10:53 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 36 pg[4.15( empty local-lis/les=17/18 n=0 ec=35/17 lis/c=17/17 les/c/f=18/18/0 sis=35) [0] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:10:53 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 36 pg[4.16( empty local-lis/les=17/18 n=0 ec=35/17 lis/c=17/17 les/c/f=18/18/0 sis=35) [0] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:10:53 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 36 pg[4.17( empty local-lis/les=17/18 n=0 ec=35/17 lis/c=17/17 les/c/f=18/18/0 sis=35) [0] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:10:53 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 36 pg[4.14( empty local-lis/les=17/18 n=0 ec=35/17 lis/c=17/17 les/c/f=18/18/0 sis=35) [0] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:10:53 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 36 pg[4.18( empty local-lis/les=17/18 n=0 ec=35/17 lis/c=17/17 les/c/f=18/18/0 sis=35) [0] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:10:53 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 36 pg[5.1c( empty local-lis/les=19/20 n=0 ec=35/19 lis/c=19/19 les/c/f=20/20/0 sis=35) [2] r=0 lpr=35 pi=[19,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:10:53 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 36 pg[5.1d( empty local-lis/les=19/20 n=0 ec=35/19 lis/c=19/19 les/c/f=20/20/0 sis=35) [2] r=0 lpr=35 pi=[19,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:10:53 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 36 pg[5.1e( empty local-lis/les=19/20 n=0 ec=35/19 lis/c=19/19 les/c/f=20/20/0 sis=35) [2] r=0 lpr=35 pi=[19,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:10:53 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 36 pg[5.1f( empty local-lis/les=19/20 n=0 ec=35/19 lis/c=19/19 les/c/f=20/20/0 sis=35) [2] r=0 lpr=35 pi=[19,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:10:53 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 36 pg[5.10( empty local-lis/les=19/20 n=0 ec=35/19 lis/c=19/19 les/c/f=20/20/0 sis=35) [2] r=0 lpr=35 pi=[19,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:10:53 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 36 pg[5.11( empty local-lis/les=19/20 n=0 ec=35/19 lis/c=19/19 les/c/f=20/20/0 sis=35) [2] r=0 lpr=35 pi=[19,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:10:53 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 36 pg[5.12( empty local-lis/les=19/20 n=0 ec=35/19 lis/c=19/19 les/c/f=20/20/0 sis=35) [2] r=0 lpr=35 pi=[19,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:10:53 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 36 pg[5.13( empty local-lis/les=19/20 n=0 ec=35/19 lis/c=19/19 les/c/f=20/20/0 sis=35) [2] r=0 lpr=35 pi=[19,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:10:53 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 36 pg[5.14( empty local-lis/les=19/20 n=0 ec=35/19 lis/c=19/19 les/c/f=20/20/0 sis=35) [2] r=0 lpr=35 pi=[19,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:10:53 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 36 pg[5.15( empty local-lis/les=19/20 n=0 ec=35/19 lis/c=19/19 les/c/f=20/20/0 sis=35) [2] r=0 lpr=35 pi=[19,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:10:53 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 36 pg[5.16( empty local-lis/les=19/20 n=0 ec=35/19 lis/c=19/19 les/c/f=20/20/0 sis=35) [2] r=0 lpr=35 pi=[19,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:10:53 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 36 pg[5.17( empty local-lis/les=19/20 n=0 ec=35/19 lis/c=19/19 les/c/f=20/20/0 sis=35) [2] r=0 lpr=35 pi=[19,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:10:53 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 36 pg[5.9( empty local-lis/les=19/20 n=0 ec=35/19 lis/c=19/19 les/c/f=20/20/0 sis=35) [2] r=0 lpr=35 pi=[19,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:10:53 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 36 pg[5.a( empty local-lis/les=19/20 n=0 ec=35/19 lis/c=19/19 les/c/f=20/20/0 sis=35) [2] r=0 lpr=35 pi=[19,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:10:53 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 36 pg[5.b( empty local-lis/les=19/20 n=0 ec=35/19 lis/c=19/19 les/c/f=20/20/0 sis=35) [2] r=0 lpr=35 pi=[19,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:10:53 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 36 pg[5.7( empty local-lis/les=19/20 n=0 ec=35/19 lis/c=19/19 les/c/f=20/20/0 sis=35) [2] r=0 lpr=35 pi=[19,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:10:53 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 36 pg[5.6( empty local-lis/les=19/20 n=0 ec=35/19 lis/c=19/19 les/c/f=20/20/0 sis=35) [2] r=0 lpr=35 pi=[19,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:10:53 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 36 pg[5.5( empty local-lis/les=19/20 n=0 ec=35/19 lis/c=19/19 les/c/f=20/20/0 sis=35) [2] r=0 lpr=35 pi=[19,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:10:53 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 36 pg[5.3( empty local-lis/les=19/20 n=0 ec=35/19 lis/c=19/19 les/c/f=20/20/0 sis=35) [2] r=0 lpr=35 pi=[19,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:10:53 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 36 pg[5.4( empty local-lis/les=19/20 n=0 ec=35/19 lis/c=19/19 les/c/f=20/20/0 sis=35) [2] r=0 lpr=35 pi=[19,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:10:53 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 36 pg[5.2( empty local-lis/les=19/20 n=0 ec=35/19 lis/c=19/19 les/c/f=20/20/0 sis=35) [2] r=0 lpr=35 pi=[19,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:10:53 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 36 pg[5.1( empty local-lis/les=19/20 n=0 ec=35/19 lis/c=19/19 les/c/f=20/20/0 sis=35) [2] r=0 lpr=35 pi=[19,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:10:53 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 36 pg[5.f( empty local-lis/les=19/20 n=0 ec=35/19 lis/c=19/19 les/c/f=20/20/0 sis=35) [2] r=0 lpr=35 pi=[19,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:10:53 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 36 pg[5.e( empty local-lis/les=19/20 n=0 ec=35/19 lis/c=19/19 les/c/f=20/20/0 sis=35) [2] r=0 lpr=35 pi=[19,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:10:53 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 36 pg[5.d( empty local-lis/les=19/20 n=0 ec=35/19 lis/c=19/19 les/c/f=20/20/0 sis=35) [2] r=0 lpr=35 pi=[19,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:10:53 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 36 pg[5.c( empty local-lis/les=19/20 n=0 ec=35/19 lis/c=19/19 les/c/f=20/20/0 sis=35) [2] r=0 lpr=35 pi=[19,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:10:53 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 36 pg[5.1b( empty local-lis/les=19/20 n=0 ec=35/19 lis/c=19/19 les/c/f=20/20/0 sis=35) [2] r=0 lpr=35 pi=[19,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:10:53 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 36 pg[5.1a( empty local-lis/les=19/20 n=0 ec=35/19 lis/c=19/19 les/c/f=20/20/0 sis=35) [2] r=0 lpr=35 pi=[19,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:10:53 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 36 pg[5.19( empty local-lis/les=19/20 n=0 ec=35/19 lis/c=19/19 les/c/f=20/20/0 sis=35) [2] r=0 lpr=35 pi=[19,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:10:53 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 36 pg[5.18( empty local-lis/les=19/20 n=0 ec=35/19 lis/c=19/19 les/c/f=20/20/0 sis=35) [2] r=0 lpr=35 pi=[19,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:10:53 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 36 pg[5.8( empty local-lis/les=19/20 n=0 ec=35/19 lis/c=19/19 les/c/f=20/20/0 sis=35) [2] r=0 lpr=35 pi=[19,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:10:53 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 36 pg[5.1c( empty local-lis/les=35/36 n=0 ec=35/19 lis/c=19/19 les/c/f=20/20/0 sis=35) [2] r=0 lpr=35 pi=[19,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:10:53 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 36 pg[4.1e( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=17/17 les/c/f=18/18/0 sis=35) [0] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:10:53 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 36 pg[4.1f( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=17/17 les/c/f=18/18/0 sis=35) [0] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:10:53 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 36 pg[5.1d( empty local-lis/les=35/36 n=0 ec=35/19 lis/c=19/19 les/c/f=20/20/0 sis=35) [2] r=0 lpr=35 pi=[19,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:10:53 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 36 pg[5.10( empty local-lis/les=35/36 n=0 ec=35/19 lis/c=19/19 les/c/f=20/20/0 sis=35) [2] r=0 lpr=35 pi=[19,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:10:53 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 36 pg[5.11( empty local-lis/les=35/36 n=0 ec=35/19 lis/c=19/19 les/c/f=20/20/0 sis=35) [2] r=0 lpr=35 pi=[19,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:10:53 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 36 pg[5.12( empty local-lis/les=35/36 n=0 ec=35/19 lis/c=19/19 les/c/f=20/20/0 sis=35) [2] r=0 lpr=35 pi=[19,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:10:53 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 36 pg[5.13( empty local-lis/les=35/36 n=0 ec=35/19 lis/c=19/19 les/c/f=20/20/0 sis=35) [2] r=0 lpr=35 pi=[19,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:10:53 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 36 pg[5.1f( empty local-lis/les=35/36 n=0 ec=35/19 lis/c=19/19 les/c/f=20/20/0 sis=35) [2] r=0 lpr=35 pi=[19,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:10:53 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 36 pg[5.15( empty local-lis/les=35/36 n=0 ec=35/19 lis/c=19/19 les/c/f=20/20/0 sis=35) [2] r=0 lpr=35 pi=[19,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:10:53 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 36 pg[5.16( empty local-lis/les=35/36 n=0 ec=35/19 lis/c=19/19 les/c/f=20/20/0 sis=35) [2] r=0 lpr=35 pi=[19,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:10:53 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 36 pg[5.17( empty local-lis/les=35/36 n=0 ec=35/19 lis/c=19/19 les/c/f=20/20/0 sis=35) [2] r=0 lpr=35 pi=[19,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:10:53 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 36 pg[5.9( empty local-lis/les=35/36 n=0 ec=35/19 lis/c=19/19 les/c/f=20/20/0 sis=35) [2] r=0 lpr=35 pi=[19,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:10:53 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 36 pg[5.a( empty local-lis/les=35/36 n=0 ec=35/19 lis/c=19/19 les/c/f=20/20/0 sis=35) [2] r=0 lpr=35 pi=[19,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:10:53 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 36 pg[5.b( empty local-lis/les=35/36 n=0 ec=35/19 lis/c=19/19 les/c/f=20/20/0 sis=35) [2] r=0 lpr=35 pi=[19,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:10:53 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 36 pg[5.7( empty local-lis/les=35/36 n=0 ec=35/19 lis/c=19/19 les/c/f=20/20/0 sis=35) [2] r=0 lpr=35 pi=[19,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:10:53 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 36 pg[5.0( empty local-lis/les=35/36 n=0 ec=19/19 lis/c=19/19 les/c/f=20/20/0 sis=35) [2] r=0 lpr=35 pi=[19,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:10:53 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 36 pg[5.1e( empty local-lis/les=35/36 n=0 ec=35/19 lis/c=19/19 les/c/f=20/20/0 sis=35) [2] r=0 lpr=35 pi=[19,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:10:53 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 36 pg[5.5( empty local-lis/les=35/36 n=0 ec=35/19 lis/c=19/19 les/c/f=20/20/0 sis=35) [2] r=0 lpr=35 pi=[19,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:10:53 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 36 pg[5.6( empty local-lis/les=35/36 n=0 ec=35/19 lis/c=19/19 les/c/f=20/20/0 sis=35) [2] r=0 lpr=35 pi=[19,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:10:53 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 36 pg[5.3( empty local-lis/les=35/36 n=0 ec=35/19 lis/c=19/19 les/c/f=20/20/0 sis=35) [2] r=0 lpr=35 pi=[19,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:10:53 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 36 pg[5.2( empty local-lis/les=35/36 n=0 ec=35/19 lis/c=19/19 les/c/f=20/20/0 sis=35) [2] r=0 lpr=35 pi=[19,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:10:53 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 36 pg[5.1( empty local-lis/les=35/36 n=0 ec=35/19 lis/c=19/19 les/c/f=20/20/0 sis=35) [2] r=0 lpr=35 pi=[19,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:10:53 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 36 pg[5.f( empty local-lis/les=35/36 n=0 ec=35/19 lis/c=19/19 les/c/f=20/20/0 sis=35) [2] r=0 lpr=35 pi=[19,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:10:53 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 36 pg[5.e( empty local-lis/les=35/36 n=0 ec=35/19 lis/c=19/19 les/c/f=20/20/0 sis=35) [2] r=0 lpr=35 pi=[19,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:10:53 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 36 pg[5.c( empty local-lis/les=35/36 n=0 ec=35/19 lis/c=19/19 les/c/f=20/20/0 sis=35) [2] r=0 lpr=35 pi=[19,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:10:53 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 36 pg[5.d( empty local-lis/les=35/36 n=0 ec=35/19 lis/c=19/19 les/c/f=20/20/0 sis=35) [2] r=0 lpr=35 pi=[19,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:10:53 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 36 pg[5.1b( empty local-lis/les=35/36 n=0 ec=35/19 lis/c=19/19 les/c/f=20/20/0 sis=35) [2] r=0 lpr=35 pi=[19,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:10:53 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 36 pg[5.19( empty local-lis/les=35/36 n=0 ec=35/19 lis/c=19/19 les/c/f=20/20/0 sis=35) [2] r=0 lpr=35 pi=[19,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:10:53 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 36 pg[5.1a( empty local-lis/les=35/36 n=0 ec=35/19 lis/c=19/19 les/c/f=20/20/0 sis=35) [2] r=0 lpr=35 pi=[19,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:10:53 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 36 pg[5.18( empty local-lis/les=35/36 n=0 ec=35/19 lis/c=19/19 les/c/f=20/20/0 sis=35) [2] r=0 lpr=35 pi=[19,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:10:53 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 36 pg[5.8( empty local-lis/les=35/36 n=0 ec=35/19 lis/c=19/19 les/c/f=20/20/0 sis=35) [2] r=0 lpr=35 pi=[19,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:10:53 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 36 pg[5.14( empty local-lis/les=35/36 n=0 ec=35/19 lis/c=19/19 les/c/f=20/20/0 sis=35) [2] r=0 lpr=35 pi=[19,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:10:53 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 36 pg[5.4( empty local-lis/les=35/36 n=0 ec=35/19 lis/c=19/19 les/c/f=20/20/0 sis=35) [2] r=0 lpr=35 pi=[19,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:10:53 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 36 pg[4.1d( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=17/17 les/c/f=18/18/0 sis=35) [0] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:10:53 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 36 pg[4.1c( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=17/17 les/c/f=18/18/0 sis=35) [0] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:10:53 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 36 pg[4.a( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=17/17 les/c/f=18/18/0 sis=35) [0] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:10:53 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 36 pg[4.b( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=17/17 les/c/f=18/18/0 sis=35) [0] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:10:53 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 36 pg[4.8( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=17/17 les/c/f=18/18/0 sis=35) [0] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:10:53 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 36 pg[4.5( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=17/17 les/c/f=18/18/0 sis=35) [0] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:10:53 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 36 pg[4.1a( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=17/17 les/c/f=18/18/0 sis=35) [0] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:10:53 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 36 pg[4.1b( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=17/17 les/c/f=18/18/0 sis=35) [0] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:10:53 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 36 pg[4.19( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=17/17 les/c/f=18/18/0 sis=35) [0] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:10:53 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 36 pg[4.9( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=17/17 les/c/f=18/18/0 sis=35) [0] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:10:53 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 36 pg[4.1( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=17/17 les/c/f=18/18/0 sis=35) [0] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:10:53 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 36 pg[4.7( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=17/17 les/c/f=18/18/0 sis=35) [0] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:10:53 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 36 pg[4.4( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=17/17 les/c/f=18/18/0 sis=35) [0] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:10:53 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 36 pg[4.0( empty local-lis/les=35/36 n=0 ec=17/17 lis/c=17/17 les/c/f=18/18/0 sis=35) [0] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:10:53 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 36 pg[4.3( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=17/17 les/c/f=18/18/0 sis=35) [0] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:10:53 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 36 pg[4.c( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=17/17 les/c/f=18/18/0 sis=35) [0] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:10:53 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 36 pg[4.2( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=17/17 les/c/f=18/18/0 sis=35) [0] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:10:53 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 36 pg[4.f( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=17/17 les/c/f=18/18/0 sis=35) [0] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:10:53 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 36 pg[4.e( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=17/17 les/c/f=18/18/0 sis=35) [0] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:10:53 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 36 pg[4.11( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=17/17 les/c/f=18/18/0 sis=35) [0] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:10:53 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 36 pg[4.d( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=17/17 les/c/f=18/18/0 sis=35) [0] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:10:53 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 36 pg[4.10( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=17/17 les/c/f=18/18/0 sis=35) [0] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:10:53 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 36 pg[4.13( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=17/17 les/c/f=18/18/0 sis=35) [0] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:10:53 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 36 pg[4.15( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=17/17 les/c/f=18/18/0 sis=35) [0] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:10:53 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 36 pg[4.12( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=17/17 les/c/f=18/18/0 sis=35) [0] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:10:53 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 36 pg[4.16( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=17/17 les/c/f=18/18/0 sis=35) [0] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:10:53 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 36 pg[4.14( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=17/17 les/c/f=18/18/0 sis=35) [0] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:10:53 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 36 pg[4.17( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=17/17 les/c/f=18/18/0 sis=35) [0] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:10:53 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 36 pg[4.6( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=17/17 les/c/f=18/18/0 sis=35) [0] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:10:53 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 36 pg[4.18( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=17/17 les/c/f=18/18/0 sis=35) [0] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:10:53 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]': finished
Oct  1 09:10:53 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v90: 131 pgs: 93 unknown, 38 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:10:53 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"} v 0) v1
Oct  1 09:10:53 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct  1 09:10:53 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"} v 0) v1
Oct  1 09:10:53 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"}]: dispatch
Oct  1 09:10:53 np0005464214 ceph-mgr[75103]: log_channel(audit) log [DBG] : from='client.14258 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Oct  1 09:10:53 np0005464214 stoic_agnesi[99901]: 
Oct  1 09:10:53 np0005464214 stoic_agnesi[99901]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Oct  1 09:10:53 np0005464214 systemd[1]: libpod-09b545cc209ffc7968a0816fabf774c302f495c092c1c5660eacb05e1917c2bf.scope: Deactivated successfully.
Oct  1 09:10:53 np0005464214 podman[99885]: 2025-10-01 13:10:53.788179559 +0000 UTC m=+0.692680087 container died 09b545cc209ffc7968a0816fabf774c302f495c092c1c5660eacb05e1917c2bf (image=quay.io/ceph/ceph:v18, name=stoic_agnesi, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 09:10:53 np0005464214 vibrant_ptolemy[99875]: {
Oct  1 09:10:53 np0005464214 vibrant_ptolemy[99875]:    "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982": {
Oct  1 09:10:53 np0005464214 vibrant_ptolemy[99875]:        "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 09:10:53 np0005464214 vibrant_ptolemy[99875]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  1 09:10:53 np0005464214 vibrant_ptolemy[99875]:        "osd_id": 0,
Oct  1 09:10:53 np0005464214 vibrant_ptolemy[99875]:        "osd_uuid": "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982",
Oct  1 09:10:53 np0005464214 vibrant_ptolemy[99875]:        "type": "bluestore"
Oct  1 09:10:53 np0005464214 vibrant_ptolemy[99875]:    },
Oct  1 09:10:53 np0005464214 vibrant_ptolemy[99875]:    "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9": {
Oct  1 09:10:53 np0005464214 vibrant_ptolemy[99875]:        "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 09:10:53 np0005464214 vibrant_ptolemy[99875]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  1 09:10:53 np0005464214 vibrant_ptolemy[99875]:        "osd_id": 2,
Oct  1 09:10:53 np0005464214 vibrant_ptolemy[99875]:        "osd_uuid": "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9",
Oct  1 09:10:53 np0005464214 vibrant_ptolemy[99875]:        "type": "bluestore"
Oct  1 09:10:53 np0005464214 vibrant_ptolemy[99875]:    },
Oct  1 09:10:53 np0005464214 vibrant_ptolemy[99875]:    "f5852bc7-e830-489a-b8a9-42dfbbe71426": {
Oct  1 09:10:53 np0005464214 vibrant_ptolemy[99875]:        "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 09:10:53 np0005464214 vibrant_ptolemy[99875]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  1 09:10:53 np0005464214 vibrant_ptolemy[99875]:        "osd_id": 1,
Oct  1 09:10:53 np0005464214 vibrant_ptolemy[99875]:        "osd_uuid": "f5852bc7-e830-489a-b8a9-42dfbbe71426",
Oct  1 09:10:53 np0005464214 vibrant_ptolemy[99875]:        "type": "bluestore"
Oct  1 09:10:53 np0005464214 vibrant_ptolemy[99875]:    }
Oct  1 09:10:53 np0005464214 vibrant_ptolemy[99875]: }
Oct  1 09:10:53 np0005464214 systemd[1]: var-lib-containers-storage-overlay-ff836bdadaf4942ca8a343538bd4f5c1003061498890f0f5cc1d509404c483bc-merged.mount: Deactivated successfully.
Oct  1 09:10:53 np0005464214 systemd[76436]: Starting Mark boot as successful...
Oct  1 09:10:53 np0005464214 systemd[76436]: Finished Mark boot as successful.
Oct  1 09:10:53 np0005464214 podman[99885]: 2025-10-01 13:10:53.835140911 +0000 UTC m=+0.739641409 container remove 09b545cc209ffc7968a0816fabf774c302f495c092c1c5660eacb05e1917c2bf (image=quay.io/ceph/ceph:v18, name=stoic_agnesi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct  1 09:10:53 np0005464214 systemd[1]: libpod-1d8d5de87cb846f7167de95b71ceb078712f30cf01f332d8b3093a3c8deccc63.scope: Deactivated successfully.
Oct  1 09:10:53 np0005464214 podman[99857]: 2025-10-01 13:10:53.840434092 +0000 UTC m=+1.049749258 container died 1d8d5de87cb846f7167de95b71ceb078712f30cf01f332d8b3093a3c8deccc63 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_ptolemy, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct  1 09:10:53 np0005464214 ansible-async_wrapper.py[99883]: Module complete (99883)
Oct  1 09:10:53 np0005464214 systemd[1]: libpod-conmon-09b545cc209ffc7968a0816fabf774c302f495c092c1c5660eacb05e1917c2bf.scope: Deactivated successfully.
Oct  1 09:10:53 np0005464214 systemd[1]: var-lib-containers-storage-overlay-25d2fe462b8ee2ae7940a809b1efe1fe88bebbc4c606093d37ac10a1d6f01038-merged.mount: Deactivated successfully.
Oct  1 09:10:53 np0005464214 podman[99857]: 2025-10-01 13:10:53.914265631 +0000 UTC m=+1.123580807 container remove 1d8d5de87cb846f7167de95b71ceb078712f30cf01f332d8b3093a3c8deccc63 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_ptolemy, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct  1 09:10:53 np0005464214 systemd[1]: libpod-conmon-1d8d5de87cb846f7167de95b71ceb078712f30cf01f332d8b3093a3c8deccc63.scope: Deactivated successfully.
Oct  1 09:10:53 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  1 09:10:53 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:10:53 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  1 09:10:53 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:10:53 np0005464214 ceph-mgr[75103]: [progress INFO root] update: starting ev 70c62511-9186-47e2-a676-587875b6c394 (Updating rgw.rgw deployment (+1 -> 1))
Oct  1 09:10:53 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.rmxmfa", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]} v 0) v1
Oct  1 09:10:53 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.rmxmfa", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Oct  1 09:10:53 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.rmxmfa", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Oct  1 09:10:53 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=rgw_frontends}] v 0) v1
Oct  1 09:10:53 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:10:53 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  1 09:10:53 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  1 09:10:53 np0005464214 ceph-mgr[75103]: [cephadm INFO cephadm.serve] Deploying daemon rgw.rgw.compute-0.rmxmfa on compute-0
Oct  1 09:10:53 np0005464214 ceph-mgr[75103]: log_channel(cephadm) log [INF] : Deploying daemon rgw.rgw.compute-0.rmxmfa on compute-0
Oct  1 09:10:54 np0005464214 ceph-osd[88455]: log_channel(cluster) log [DBG] : 4.1 scrub starts
Oct  1 09:10:54 np0005464214 ceph-osd[88455]: log_channel(cluster) log [DBG] : 4.1 scrub ok
Oct  1 09:10:54 np0005464214 python3[100075]: ansible-ansible.legacy.async_status Invoked with jid=j241018028218.99859 mode=status _async_dir=/root/.ansible_async
Oct  1 09:10:54 np0005464214 python3[100191]: ansible-ansible.legacy.async_status Invoked with jid=j241018028218.99859 mode=cleanup _async_dir=/root/.ansible_async
Oct  1 09:10:54 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e36 do_prune osdmap full prune enabled
Oct  1 09:10:54 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct  1 09:10:54 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"}]: dispatch
Oct  1 09:10:54 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:10:54 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:10:54 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.rmxmfa", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Oct  1 09:10:54 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.rmxmfa", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Oct  1 09:10:54 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:10:54 np0005464214 ceph-mon[74802]: Deploying daemon rgw.rgw.compute-0.rmxmfa on compute-0
Oct  1 09:10:54 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]': finished
Oct  1 09:10:54 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"}]': finished
Oct  1 09:10:54 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e37 e37: 3 total, 3 up, 3 in
Oct  1 09:10:54 np0005464214 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e37: 3 total, 3 up, 3 in
Oct  1 09:10:54 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 37 pg[7.0( empty local-lis/les=22/23 n=0 ec=22/22 lis/c=22/22 les/c/f=23/23/0 sis=37 pruub=11.088228226s) [1] r=0 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 active pruub 64.893386841s@ mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:10:54 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 37 pg[7.0( empty local-lis/les=22/23 n=0 ec=22/22 lis/c=22/22 les/c/f=23/23/0 sis=37 pruub=11.088228226s) [1] r=0 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 unknown pruub 64.893386841s@ mbc={}] state<Start>: transitioning to Primary
Oct  1 09:10:54 np0005464214 podman[100215]: 2025-10-01 13:10:54.670559817 +0000 UTC m=+0.050851961 container create 9c3ad6d3e3a3f1e899b4671545da226035bd8573783cb0753f082a82230ee25e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_chaplygin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Oct  1 09:10:54 np0005464214 systemd[1]: Started libpod-conmon-9c3ad6d3e3a3f1e899b4671545da226035bd8573783cb0753f082a82230ee25e.scope.
Oct  1 09:10:54 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:10:54 np0005464214 podman[100215]: 2025-10-01 13:10:54.651865898 +0000 UTC m=+0.032158082 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 09:10:54 np0005464214 podman[100215]: 2025-10-01 13:10:54.752405171 +0000 UTC m=+0.132697385 container init 9c3ad6d3e3a3f1e899b4671545da226035bd8573783cb0753f082a82230ee25e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_chaplygin, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct  1 09:10:54 np0005464214 podman[100215]: 2025-10-01 13:10:54.75929503 +0000 UTC m=+0.139587174 container start 9c3ad6d3e3a3f1e899b4671545da226035bd8573783cb0753f082a82230ee25e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_chaplygin, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True)
Oct  1 09:10:54 np0005464214 podman[100215]: 2025-10-01 13:10:54.762707025 +0000 UTC m=+0.142999249 container attach 9c3ad6d3e3a3f1e899b4671545da226035bd8573783cb0753f082a82230ee25e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_chaplygin, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Oct  1 09:10:54 np0005464214 bold_chaplygin[100231]: 167 167
Oct  1 09:10:54 np0005464214 systemd[1]: libpod-9c3ad6d3e3a3f1e899b4671545da226035bd8573783cb0753f082a82230ee25e.scope: Deactivated successfully.
Oct  1 09:10:54 np0005464214 podman[100215]: 2025-10-01 13:10:54.768928284 +0000 UTC m=+0.149220448 container died 9c3ad6d3e3a3f1e899b4671545da226035bd8573783cb0753f082a82230ee25e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_chaplygin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 09:10:54 np0005464214 systemd[1]: var-lib-containers-storage-overlay-aae3bbdc0271a070d1c983cde9df1970d877280ddc8f44892f59ef611e70c2f6-merged.mount: Deactivated successfully.
Oct  1 09:10:54 np0005464214 podman[100215]: 2025-10-01 13:10:54.809613854 +0000 UTC m=+0.189905988 container remove 9c3ad6d3e3a3f1e899b4671545da226035bd8573783cb0753f082a82230ee25e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_chaplygin, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 09:10:54 np0005464214 systemd[1]: libpod-conmon-9c3ad6d3e3a3f1e899b4671545da226035bd8573783cb0753f082a82230ee25e.scope: Deactivated successfully.
Oct  1 09:10:54 np0005464214 systemd[1]: Reloading.
Oct  1 09:10:54 np0005464214 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  1 09:10:54 np0005464214 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  1 09:10:55 np0005464214 ceph-osd[90500]: log_channel(cluster) log [DBG] : 2.2 scrub starts
Oct  1 09:10:55 np0005464214 ceph-osd[90500]: log_channel(cluster) log [DBG] : 2.2 scrub ok
Oct  1 09:10:55 np0005464214 systemd[1]: Reloading.
Oct  1 09:10:55 np0005464214 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  1 09:10:55 np0005464214 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  1 09:10:55 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e37 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:10:55 np0005464214 systemd[1]: Starting Ceph rgw.rgw.compute-0.rmxmfa for eb4b6ead-01d1-53b3-a52a-47dcc600555f...
Oct  1 09:10:55 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e37 do_prune osdmap full prune enabled
Oct  1 09:10:55 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]': finished
Oct  1 09:10:55 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"}]': finished
Oct  1 09:10:55 np0005464214 python3[100353]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid eb4b6ead-01d1-53b3-a52a-47dcc600555f -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  1 09:10:55 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e38 e38: 3 total, 3 up, 3 in
Oct  1 09:10:55 np0005464214 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e38: 3 total, 3 up, 3 in
Oct  1 09:10:55 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 38 pg[7.1e( empty local-lis/les=22/23 n=0 ec=37/22 lis/c=22/22 les/c/f=23/23/0 sis=37) [1] r=0 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:10:55 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 38 pg[7.1d( empty local-lis/les=22/23 n=0 ec=37/22 lis/c=22/22 les/c/f=23/23/0 sis=37) [1] r=0 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:10:55 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 38 pg[7.1c( empty local-lis/les=22/23 n=0 ec=37/22 lis/c=22/22 les/c/f=23/23/0 sis=37) [1] r=0 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:10:55 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 38 pg[7.12( empty local-lis/les=22/23 n=0 ec=37/22 lis/c=22/22 les/c/f=23/23/0 sis=37) [1] r=0 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:10:55 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 38 pg[7.13( empty local-lis/les=22/23 n=0 ec=37/22 lis/c=22/22 les/c/f=23/23/0 sis=37) [1] r=0 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:10:55 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 38 pg[7.10( empty local-lis/les=22/23 n=0 ec=37/22 lis/c=22/22 les/c/f=23/23/0 sis=37) [1] r=0 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:10:55 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 38 pg[7.17( empty local-lis/les=22/23 n=0 ec=37/22 lis/c=22/22 les/c/f=23/23/0 sis=37) [1] r=0 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:10:55 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 38 pg[7.16( empty local-lis/les=22/23 n=0 ec=37/22 lis/c=22/22 les/c/f=23/23/0 sis=37) [1] r=0 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:10:55 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 38 pg[7.15( empty local-lis/les=22/23 n=0 ec=37/22 lis/c=22/22 les/c/f=23/23/0 sis=37) [1] r=0 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:10:55 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 38 pg[7.14( empty local-lis/les=22/23 n=0 ec=37/22 lis/c=22/22 les/c/f=23/23/0 sis=37) [1] r=0 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:10:55 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 38 pg[7.b( empty local-lis/les=22/23 n=0 ec=37/22 lis/c=22/22 les/c/f=23/23/0 sis=37) [1] r=0 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:10:55 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 38 pg[7.a( empty local-lis/les=22/23 n=0 ec=37/22 lis/c=22/22 les/c/f=23/23/0 sis=37) [1] r=0 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:10:55 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 38 pg[7.9( empty local-lis/les=22/23 n=0 ec=37/22 lis/c=22/22 les/c/f=23/23/0 sis=37) [1] r=0 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:10:55 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 38 pg[7.8( empty local-lis/les=22/23 n=0 ec=37/22 lis/c=22/22 les/c/f=23/23/0 sis=37) [1] r=0 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:10:55 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 38 pg[7.f( empty local-lis/les=22/23 n=0 ec=37/22 lis/c=22/22 les/c/f=23/23/0 sis=37) [1] r=0 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:10:55 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 38 pg[7.11( empty local-lis/les=22/23 n=0 ec=37/22 lis/c=22/22 les/c/f=23/23/0 sis=37) [1] r=0 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:10:55 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 38 pg[7.6( empty local-lis/les=22/23 n=0 ec=37/22 lis/c=22/22 les/c/f=23/23/0 sis=37) [1] r=0 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:10:55 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 38 pg[7.4( empty local-lis/les=22/23 n=0 ec=37/22 lis/c=22/22 les/c/f=23/23/0 sis=37) [1] r=0 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:10:55 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 38 pg[7.5( empty local-lis/les=22/23 n=0 ec=37/22 lis/c=22/22 les/c/f=23/23/0 sis=37) [1] r=0 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:10:55 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 38 pg[7.7( empty local-lis/les=22/23 n=0 ec=37/22 lis/c=22/22 les/c/f=23/23/0 sis=37) [1] r=0 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:10:55 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 38 pg[7.1( empty local-lis/les=22/23 n=0 ec=37/22 lis/c=22/22 les/c/f=23/23/0 sis=37) [1] r=0 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:10:55 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 38 pg[7.2( empty local-lis/les=22/23 n=0 ec=37/22 lis/c=22/22 les/c/f=23/23/0 sis=37) [1] r=0 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:10:55 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 38 pg[7.3( empty local-lis/les=22/23 n=0 ec=37/22 lis/c=22/22 les/c/f=23/23/0 sis=37) [1] r=0 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:10:55 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 38 pg[7.c( empty local-lis/les=22/23 n=0 ec=37/22 lis/c=22/22 les/c/f=23/23/0 sis=37) [1] r=0 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:10:55 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 38 pg[7.d( empty local-lis/les=22/23 n=0 ec=37/22 lis/c=22/22 les/c/f=23/23/0 sis=37) [1] r=0 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:10:55 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 38 pg[7.1f( empty local-lis/les=22/23 n=0 ec=37/22 lis/c=22/22 les/c/f=23/23/0 sis=37) [1] r=0 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:10:55 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 38 pg[7.e( empty local-lis/les=22/23 n=0 ec=37/22 lis/c=22/22 les/c/f=23/23/0 sis=37) [1] r=0 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:10:55 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 38 pg[7.18( empty local-lis/les=22/23 n=0 ec=37/22 lis/c=22/22 les/c/f=23/23/0 sis=37) [1] r=0 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:10:55 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 38 pg[7.1a( empty local-lis/les=22/23 n=0 ec=37/22 lis/c=22/22 les/c/f=23/23/0 sis=37) [1] r=0 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:10:55 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 38 pg[7.19( empty local-lis/les=22/23 n=0 ec=37/22 lis/c=22/22 les/c/f=23/23/0 sis=37) [1] r=0 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:10:55 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 38 pg[7.1b( empty local-lis/les=22/23 n=0 ec=37/22 lis/c=22/22 les/c/f=23/23/0 sis=37) [1] r=0 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:10:55 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 38 pg[7.1e( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=22/22 les/c/f=23/23/0 sis=37) [1] r=0 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:10:55 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 38 pg[7.1d( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=22/22 les/c/f=23/23/0 sis=37) [1] r=0 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:10:55 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 38 pg[7.10( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=22/22 les/c/f=23/23/0 sis=37) [1] r=0 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:10:55 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 38 pg[7.1c( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=22/22 les/c/f=23/23/0 sis=37) [1] r=0 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:10:55 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 38 pg[7.12( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=22/22 les/c/f=23/23/0 sis=37) [1] r=0 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:10:55 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 38 pg[7.17( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=22/22 les/c/f=23/23/0 sis=37) [1] r=0 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:10:55 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 38 pg[7.13( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=22/22 les/c/f=23/23/0 sis=37) [1] r=0 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:10:55 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 38 pg[7.16( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=22/22 les/c/f=23/23/0 sis=37) [1] r=0 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:10:55 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 38 pg[7.14( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=22/22 les/c/f=23/23/0 sis=37) [1] r=0 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:10:55 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 38 pg[7.15( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=22/22 les/c/f=23/23/0 sis=37) [1] r=0 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:10:55 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 38 pg[7.b( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=22/22 les/c/f=23/23/0 sis=37) [1] r=0 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:10:55 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 38 pg[7.a( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=22/22 les/c/f=23/23/0 sis=37) [1] r=0 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:10:55 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 38 pg[7.9( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=22/22 les/c/f=23/23/0 sis=37) [1] r=0 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:10:55 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 38 pg[7.8( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=22/22 les/c/f=23/23/0 sis=37) [1] r=0 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:10:55 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 38 pg[7.f( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=22/22 les/c/f=23/23/0 sis=37) [1] r=0 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:10:55 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 38 pg[7.0( empty local-lis/les=37/38 n=0 ec=22/22 lis/c=22/22 les/c/f=23/23/0 sis=37) [1] r=0 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:10:55 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 38 pg[7.5( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=22/22 les/c/f=23/23/0 sis=37) [1] r=0 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:10:55 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 38 pg[7.7( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=22/22 les/c/f=23/23/0 sis=37) [1] r=0 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:10:55 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 38 pg[7.4( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=22/22 les/c/f=23/23/0 sis=37) [1] r=0 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:10:55 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 38 pg[7.11( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=22/22 les/c/f=23/23/0 sis=37) [1] r=0 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:10:55 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 38 pg[7.2( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=22/22 les/c/f=23/23/0 sis=37) [1] r=0 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:10:55 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 38 pg[7.6( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=22/22 les/c/f=23/23/0 sis=37) [1] r=0 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:10:55 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 38 pg[7.1( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=22/22 les/c/f=23/23/0 sis=37) [1] r=0 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:10:55 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 38 pg[7.c( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=22/22 les/c/f=23/23/0 sis=37) [1] r=0 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:10:55 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 38 pg[7.3( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=22/22 les/c/f=23/23/0 sis=37) [1] r=0 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:10:55 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 38 pg[7.d( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=22/22 les/c/f=23/23/0 sis=37) [1] r=0 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:10:55 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 38 pg[7.1f( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=22/22 les/c/f=23/23/0 sis=37) [1] r=0 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:10:55 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 38 pg[7.e( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=22/22 les/c/f=23/23/0 sis=37) [1] r=0 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:10:55 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 38 pg[7.1a( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=22/22 les/c/f=23/23/0 sis=37) [1] r=0 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:10:55 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 38 pg[7.18( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=22/22 les/c/f=23/23/0 sis=37) [1] r=0 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:10:55 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 38 pg[7.1b( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=22/22 les/c/f=23/23/0 sis=37) [1] r=0 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:10:55 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 38 pg[7.19( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=22/22 les/c/f=23/23/0 sis=37) [1] r=0 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:10:55 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 37 pg[6.0( empty local-lis/les=20/21 n=0 ec=20/20 lis/c=20/20 les/c/f=21/21/0 sis=37 pruub=8.030547142s) [0] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active pruub 67.867408752s@ mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:10:55 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 38 pg[6.0( empty local-lis/les=20/21 n=0 ec=20/20 lis/c=20/20 les/c/f=21/21/0 sis=37 pruub=8.030547142s) [0] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown pruub 67.867408752s@ mbc={}] state<Start>: transitioning to Primary
Oct  1 09:10:55 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 38 pg[6.1( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [0] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:10:55 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 38 pg[6.e( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [0] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:10:55 np0005464214 rsyslogd[1009]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct  1 09:10:55 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 38 pg[6.f( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [0] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:10:55 np0005464214 rsyslogd[1009]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct  1 09:10:55 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 38 pg[6.c( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [0] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:10:55 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 38 pg[6.8( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [0] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:10:55 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 38 pg[6.d( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [0] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:10:55 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 38 pg[6.9( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [0] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:10:55 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 38 pg[6.a( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [0] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:10:55 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 38 pg[6.b( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [0] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:10:55 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 38 pg[6.6( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [0] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:10:55 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 38 pg[6.7( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [0] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:10:55 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 38 pg[6.4( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [0] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:10:55 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 38 pg[6.2( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [0] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:10:55 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 38 pg[6.3( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [0] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:10:55 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 38 pg[6.5( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [0] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:10:55 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v93: 177 pgs: 139 unknown, 38 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:10:55 np0005464214 podman[100380]: 2025-10-01 13:10:55.768752211 +0000 UTC m=+0.077711649 container create d7698c900df76c00986811ab451a7f0cfcfd9c807dd892a2c3a4065a1058485c (image=quay.io/ceph/ceph:v18, name=happy_aryabhata, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct  1 09:10:55 np0005464214 systemd[1]: Started libpod-conmon-d7698c900df76c00986811ab451a7f0cfcfd9c807dd892a2c3a4065a1058485c.scope.
Oct  1 09:10:55 np0005464214 podman[100411]: 2025-10-01 13:10:55.833407061 +0000 UTC m=+0.057039710 container create aad65a249f3d9c8d2205ff4de98f33b1f76ef8b51f5bb3dd231b6c5029e0c097 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-rgw-rgw-compute-0-rmxmfa, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 09:10:55 np0005464214 podman[100380]: 2025-10-01 13:10:55.747941546 +0000 UTC m=+0.056900994 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  1 09:10:55 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:10:55 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b83b68196ee0d7e869f8b5f9080931bedeadea67db2e0df39db06dba9b2088d7/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 09:10:55 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b83b68196ee0d7e869f8b5f9080931bedeadea67db2e0df39db06dba9b2088d7/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 09:10:55 np0005464214 podman[100380]: 2025-10-01 13:10:55.865461787 +0000 UTC m=+0.174421205 container init d7698c900df76c00986811ab451a7f0cfcfd9c807dd892a2c3a4065a1058485c (image=quay.io/ceph/ceph:v18, name=happy_aryabhata, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True)
Oct  1 09:10:55 np0005464214 podman[100380]: 2025-10-01 13:10:55.874791842 +0000 UTC m=+0.183751300 container start d7698c900df76c00986811ab451a7f0cfcfd9c807dd892a2c3a4065a1058485c (image=quay.io/ceph/ceph:v18, name=happy_aryabhata, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 09:10:55 np0005464214 podman[100380]: 2025-10-01 13:10:55.878589628 +0000 UTC m=+0.187549056 container attach d7698c900df76c00986811ab451a7f0cfcfd9c807dd892a2c3a4065a1058485c (image=quay.io/ceph/ceph:v18, name=happy_aryabhata, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct  1 09:10:55 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d749c1a69fec47df0515eef4d683f1f3832b3960d802435ab703163a3b1b5562/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 09:10:55 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d749c1a69fec47df0515eef4d683f1f3832b3960d802435ab703163a3b1b5562/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 09:10:55 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d749c1a69fec47df0515eef4d683f1f3832b3960d802435ab703163a3b1b5562/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 09:10:55 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d749c1a69fec47df0515eef4d683f1f3832b3960d802435ab703163a3b1b5562/merged/var/lib/ceph/radosgw/ceph-rgw.rgw.compute-0.rmxmfa supports timestamps until 2038 (0x7fffffff)
Oct  1 09:10:55 np0005464214 podman[100411]: 2025-10-01 13:10:55.813513844 +0000 UTC m=+0.037146573 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 09:10:55 np0005464214 podman[100411]: 2025-10-01 13:10:55.919228855 +0000 UTC m=+0.142861554 container init aad65a249f3d9c8d2205ff4de98f33b1f76ef8b51f5bb3dd231b6c5029e0c097 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-rgw-rgw-compute-0-rmxmfa, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 09:10:55 np0005464214 podman[100411]: 2025-10-01 13:10:55.925342502 +0000 UTC m=+0.148975171 container start aad65a249f3d9c8d2205ff4de98f33b1f76ef8b51f5bb3dd231b6c5029e0c097 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-rgw-rgw-compute-0-rmxmfa, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 09:10:55 np0005464214 bash[100411]: aad65a249f3d9c8d2205ff4de98f33b1f76ef8b51f5bb3dd231b6c5029e0c097
Oct  1 09:10:55 np0005464214 systemd[1]: Started Ceph rgw.rgw.compute-0.rmxmfa for eb4b6ead-01d1-53b3-a52a-47dcc600555f.
Oct  1 09:10:55 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  1 09:10:55 np0005464214 radosgw[100440]: deferred set uid:gid to 167:167 (ceph:ceph)
Oct  1 09:10:55 np0005464214 radosgw[100440]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process radosgw, pid 2
Oct  1 09:10:55 np0005464214 radosgw[100440]: framework: beast
Oct  1 09:10:55 np0005464214 radosgw[100440]: framework conf key: endpoint, val: 192.168.122.100:8082
Oct  1 09:10:55 np0005464214 radosgw[100440]: init_numa not setting numa affinity
Oct  1 09:10:55 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:10:55 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  1 09:10:56 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:10:56 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0) v1
Oct  1 09:10:56 np0005464214 ceph-osd[90500]: log_channel(cluster) log [DBG] : 2.3 scrub starts
Oct  1 09:10:56 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:10:56 np0005464214 ceph-mgr[75103]: [progress INFO root] complete: finished ev 70c62511-9186-47e2-a676-587875b6c394 (Updating rgw.rgw deployment (+1 -> 1))
Oct  1 09:10:56 np0005464214 ceph-mgr[75103]: [progress INFO root] Completed event 70c62511-9186-47e2-a676-587875b6c394 (Updating rgw.rgw deployment (+1 -> 1)) in 2 seconds
Oct  1 09:10:56 np0005464214 ceph-mgr[75103]: [cephadm INFO cephadm.services.cephadmservice] Saving service rgw.rgw spec with placement compute-0
Oct  1 09:10:56 np0005464214 ceph-mgr[75103]: log_channel(cephadm) log [INF] : Saving service rgw.rgw spec with placement compute-0
Oct  1 09:10:56 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0) v1
Oct  1 09:10:56 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:10:56 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0) v1
Oct  1 09:10:56 np0005464214 ceph-osd[90500]: log_channel(cluster) log [DBG] : 2.3 scrub ok
Oct  1 09:10:56 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:10:56 np0005464214 ceph-mgr[75103]: [progress INFO root] update: starting ev 1e9d9abe-f1a4-4c88-8515-a120df66529c (Updating mds.cephfs deployment (+1 -> 1))
Oct  1 09:10:56 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.vhkcbm", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0) v1
Oct  1 09:10:56 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.vhkcbm", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Oct  1 09:10:56 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.vhkcbm", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Oct  1 09:10:56 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  1 09:10:56 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  1 09:10:56 np0005464214 ceph-mgr[75103]: [cephadm INFO cephadm.serve] Deploying daemon mds.cephfs.compute-0.vhkcbm on compute-0
Oct  1 09:10:56 np0005464214 ceph-mgr[75103]: log_channel(cephadm) log [INF] : Deploying daemon mds.cephfs.compute-0.vhkcbm on compute-0
Oct  1 09:10:56 np0005464214 ceph-osd[88455]: log_channel(cluster) log [DBG] : 4.2 scrub starts
Oct  1 09:10:56 np0005464214 ceph-osd[88455]: log_channel(cluster) log [DBG] : 4.2 scrub ok
Oct  1 09:10:56 np0005464214 ceph-mgr[75103]: log_channel(audit) log [DBG] : from='client.14263 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Oct  1 09:10:56 np0005464214 happy_aryabhata[100428]: 
Oct  1 09:10:56 np0005464214 happy_aryabhata[100428]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Oct  1 09:10:56 np0005464214 systemd[1]: libpod-d7698c900df76c00986811ab451a7f0cfcfd9c807dd892a2c3a4065a1058485c.scope: Deactivated successfully.
Oct  1 09:10:56 np0005464214 conmon[100428]: conmon d7698c900df76c009868 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-d7698c900df76c00986811ab451a7f0cfcfd9c807dd892a2c3a4065a1058485c.scope/container/memory.events
Oct  1 09:10:56 np0005464214 podman[100380]: 2025-10-01 13:10:56.455469866 +0000 UTC m=+0.764429304 container died d7698c900df76c00986811ab451a7f0cfcfd9c807dd892a2c3a4065a1058485c (image=quay.io/ceph/ceph:v18, name=happy_aryabhata, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 09:10:56 np0005464214 systemd[1]: var-lib-containers-storage-overlay-b83b68196ee0d7e869f8b5f9080931bedeadea67db2e0df39db06dba9b2088d7-merged.mount: Deactivated successfully.
Oct  1 09:10:56 np0005464214 podman[100380]: 2025-10-01 13:10:56.504306154 +0000 UTC m=+0.813265582 container remove d7698c900df76c00986811ab451a7f0cfcfd9c807dd892a2c3a4065a1058485c (image=quay.io/ceph/ceph:v18, name=happy_aryabhata, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct  1 09:10:56 np0005464214 systemd[1]: libpod-conmon-d7698c900df76c00986811ab451a7f0cfcfd9c807dd892a2c3a4065a1058485c.scope: Deactivated successfully.
Oct  1 09:10:56 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e38 do_prune osdmap full prune enabled
Oct  1 09:10:56 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e39 e39: 3 total, 3 up, 3 in
Oct  1 09:10:56 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:10:56 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:10:56 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:10:56 np0005464214 ceph-mon[74802]: Saving service rgw.rgw spec with placement compute-0
Oct  1 09:10:56 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:10:56 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:10:56 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.vhkcbm", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Oct  1 09:10:56 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.vhkcbm", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Oct  1 09:10:56 np0005464214 ceph-mon[74802]: Deploying daemon mds.cephfs.compute-0.vhkcbm on compute-0
Oct  1 09:10:56 np0005464214 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e39: 3 total, 3 up, 3 in
Oct  1 09:10:56 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"} v 0) v1
Oct  1 09:10:56 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3652552028' entity='client.rgw.rgw.compute-0.rmxmfa' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch
Oct  1 09:10:56 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 39 pg[6.5( empty local-lis/les=37/39 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [0] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:10:56 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 39 pg[6.9( empty local-lis/les=37/39 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [0] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:10:56 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 39 pg[6.8( empty local-lis/les=37/39 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [0] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:10:56 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 39 pg[6.4( empty local-lis/les=37/39 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [0] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:10:56 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 39 pg[6.b( empty local-lis/les=37/39 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [0] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:10:56 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 39 pg[6.1( empty local-lis/les=37/39 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [0] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:10:56 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 39 pg[6.7( empty local-lis/les=37/39 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [0] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:10:56 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 39 pg[6.6( empty local-lis/les=37/39 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [0] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:10:56 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 39 pg[6.3( empty local-lis/les=37/39 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [0] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:10:56 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 39 pg[6.e( empty local-lis/les=37/39 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [0] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:10:56 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 39 pg[6.2( empty local-lis/les=37/39 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [0] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:10:56 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 39 pg[6.0( empty local-lis/les=37/39 n=0 ec=20/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [0] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:10:56 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 39 pg[6.f( empty local-lis/les=37/39 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [0] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:10:56 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 39 pg[6.c( empty local-lis/les=37/39 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [0] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:10:56 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 39 pg[6.d( empty local-lis/les=37/39 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [0] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:10:56 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 39 pg[6.a( empty local-lis/les=37/39 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [0] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:10:56 np0005464214 podman[100674]: 2025-10-01 13:10:56.723821603 +0000 UTC m=+0.050134799 container create 8a7e454a3de4ff6f93ee78b6953e7d7fdb26b72a797dbe90d3256c33b04c63f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_bhabha, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct  1 09:10:56 np0005464214 systemd[1]: Started libpod-conmon-8a7e454a3de4ff6f93ee78b6953e7d7fdb26b72a797dbe90d3256c33b04c63f5.scope.
Oct  1 09:10:56 np0005464214 podman[100674]: 2025-10-01 13:10:56.700961326 +0000 UTC m=+0.027274532 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 09:10:56 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:10:56 np0005464214 podman[100674]: 2025-10-01 13:10:56.829387819 +0000 UTC m=+0.155701065 container init 8a7e454a3de4ff6f93ee78b6953e7d7fdb26b72a797dbe90d3256c33b04c63f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_bhabha, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 09:10:56 np0005464214 podman[100674]: 2025-10-01 13:10:56.839189609 +0000 UTC m=+0.165502785 container start 8a7e454a3de4ff6f93ee78b6953e7d7fdb26b72a797dbe90d3256c33b04c63f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_bhabha, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Oct  1 09:10:56 np0005464214 pensive_bhabha[100690]: 167 167
Oct  1 09:10:56 np0005464214 podman[100674]: 2025-10-01 13:10:56.842947452 +0000 UTC m=+0.169260658 container attach 8a7e454a3de4ff6f93ee78b6953e7d7fdb26b72a797dbe90d3256c33b04c63f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_bhabha, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3)
Oct  1 09:10:56 np0005464214 systemd[1]: libpod-8a7e454a3de4ff6f93ee78b6953e7d7fdb26b72a797dbe90d3256c33b04c63f5.scope: Deactivated successfully.
Oct  1 09:10:56 np0005464214 podman[100674]: 2025-10-01 13:10:56.844005445 +0000 UTC m=+0.170318641 container died 8a7e454a3de4ff6f93ee78b6953e7d7fdb26b72a797dbe90d3256c33b04c63f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_bhabha, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 09:10:56 np0005464214 systemd[1]: var-lib-containers-storage-overlay-69aa8480cd30e24b90eebd85b9b47518906d2ec958b1747f44e0df70bc605aec-merged.mount: Deactivated successfully.
Oct  1 09:10:56 np0005464214 podman[100674]: 2025-10-01 13:10:56.884380885 +0000 UTC m=+0.210694081 container remove 8a7e454a3de4ff6f93ee78b6953e7d7fdb26b72a797dbe90d3256c33b04c63f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_bhabha, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default)
Oct  1 09:10:56 np0005464214 systemd[1]: libpod-conmon-8a7e454a3de4ff6f93ee78b6953e7d7fdb26b72a797dbe90d3256c33b04c63f5.scope: Deactivated successfully.
Oct  1 09:10:56 np0005464214 systemd[1]: Reloading.
Oct  1 09:10:57 np0005464214 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  1 09:10:57 np0005464214 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  1 09:10:57 np0005464214 ceph-osd[88455]: log_channel(cluster) log [DBG] : 4.3 scrub starts
Oct  1 09:10:57 np0005464214 ceph-osd[88455]: log_channel(cluster) log [DBG] : 4.3 scrub ok
Oct  1 09:10:57 np0005464214 systemd[1]: Reloading.
Oct  1 09:10:57 np0005464214 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  1 09:10:57 np0005464214 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  1 09:10:57 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 39 pg[8.0( empty local-lis/les=0/0 n=0 ec=39/39 lis/c=0/0 les/c/f=0/0/0 sis=39) [1] r=0 lpr=39 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:10:57 np0005464214 systemd[1]: Starting Ceph mds.cephfs.compute-0.vhkcbm for eb4b6ead-01d1-53b3-a52a-47dcc600555f...
Oct  1 09:10:57 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e39 do_prune osdmap full prune enabled
Oct  1 09:10:57 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3652552028' entity='client.rgw.rgw.compute-0.rmxmfa' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished
Oct  1 09:10:57 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e40 e40: 3 total, 3 up, 3 in
Oct  1 09:10:57 np0005464214 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e40: 3 total, 3 up, 3 in
Oct  1 09:10:57 np0005464214 ceph-mon[74802]: from='client.? 192.168.122.100:0/3652552028' entity='client.rgw.rgw.compute-0.rmxmfa' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch
Oct  1 09:10:57 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 40 pg[8.0( empty local-lis/les=39/40 n=0 ec=39/39 lis/c=0/0 les/c/f=0/0/0 sis=39) [1] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:10:57 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v96: 178 pgs: 1 unknown, 177 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:10:57 np0005464214 python3[100813]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid eb4b6ead-01d1-53b3-a52a-47dcc600555f -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch ls --export -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  1 09:10:57 np0005464214 ceph-mgr[75103]: [progress INFO root] Writing back 10 completed events
Oct  1 09:10:57 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Oct  1 09:10:57 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:10:57 np0005464214 podman[100861]: 2025-10-01 13:10:57.832823396 +0000 UTC m=+0.048900702 container create 5e868367c53792f475d497902198a98b578b3dd9a86e080250c80b859ba96b78 (image=quay.io/ceph/ceph:v18, name=elated_carver, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 09:10:57 np0005464214 podman[100862]: 2025-10-01 13:10:57.841269503 +0000 UTC m=+0.052844322 container create 14f330f0450cafcfb15628aeff970024e9cdb619b7fff7233f08911fbe956283 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-mds-cephfs-compute-0-vhkcbm, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default)
Oct  1 09:10:57 np0005464214 systemd[1]: Started libpod-conmon-5e868367c53792f475d497902198a98b578b3dd9a86e080250c80b859ba96b78.scope.
Oct  1 09:10:57 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/03807447dc2c704e5561140db9a00570c603fb3664c554220555f0eb126c6c01/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 09:10:57 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/03807447dc2c704e5561140db9a00570c603fb3664c554220555f0eb126c6c01/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 09:10:57 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/03807447dc2c704e5561140db9a00570c603fb3664c554220555f0eb126c6c01/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 09:10:57 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/03807447dc2c704e5561140db9a00570c603fb3664c554220555f0eb126c6c01/merged/var/lib/ceph/mds/ceph-cephfs.compute-0.vhkcbm supports timestamps until 2038 (0x7fffffff)
Oct  1 09:10:57 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:10:57 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fff5a9978d32eed052368acfbcad1fa254e942b7ded155ae265358702217a4ee/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 09:10:57 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fff5a9978d32eed052368acfbcad1fa254e942b7ded155ae265358702217a4ee/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 09:10:57 np0005464214 podman[100861]: 2025-10-01 13:10:57.811779714 +0000 UTC m=+0.027856990 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  1 09:10:57 np0005464214 podman[100862]: 2025-10-01 13:10:57.813025702 +0000 UTC m=+0.024600541 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 09:10:57 np0005464214 podman[100862]: 2025-10-01 13:10:57.909621595 +0000 UTC m=+0.121196414 container init 14f330f0450cafcfb15628aeff970024e9cdb619b7fff7233f08911fbe956283 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-mds-cephfs-compute-0-vhkcbm, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 09:10:57 np0005464214 podman[100862]: 2025-10-01 13:10:57.915721952 +0000 UTC m=+0.127296771 container start 14f330f0450cafcfb15628aeff970024e9cdb619b7fff7233f08911fbe956283 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-mds-cephfs-compute-0-vhkcbm, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Oct  1 09:10:57 np0005464214 podman[100861]: 2025-10-01 13:10:57.91733008 +0000 UTC m=+0.133407406 container init 5e868367c53792f475d497902198a98b578b3dd9a86e080250c80b859ba96b78 (image=quay.io/ceph/ceph:v18, name=elated_carver, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 09:10:57 np0005464214 bash[100862]: 14f330f0450cafcfb15628aeff970024e9cdb619b7fff7233f08911fbe956283
Oct  1 09:10:57 np0005464214 podman[100861]: 2025-10-01 13:10:57.924025134 +0000 UTC m=+0.140102400 container start 5e868367c53792f475d497902198a98b578b3dd9a86e080250c80b859ba96b78 (image=quay.io/ceph/ceph:v18, name=elated_carver, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Oct  1 09:10:57 np0005464214 systemd[1]: Started Ceph mds.cephfs.compute-0.vhkcbm for eb4b6ead-01d1-53b3-a52a-47dcc600555f.
Oct  1 09:10:57 np0005464214 podman[100861]: 2025-10-01 13:10:57.92780475 +0000 UTC m=+0.143882056 container attach 5e868367c53792f475d497902198a98b578b3dd9a86e080250c80b859ba96b78 (image=quay.io/ceph/ceph:v18, name=elated_carver, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 09:10:57 np0005464214 ansible-async_wrapper.py[99882]: Done in kid B.
Oct  1 09:10:57 np0005464214 ceph-mds[100898]: set uid:gid to 167:167 (ceph:ceph)
Oct  1 09:10:57 np0005464214 ceph-mds[100898]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mds, pid 2
Oct  1 09:10:57 np0005464214 ceph-mds[100898]: main not setting numa affinity
Oct  1 09:10:57 np0005464214 ceph-mds[100898]: pidfile_write: ignore empty --pid-file
Oct  1 09:10:57 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  1 09:10:57 np0005464214 ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-mds-cephfs-compute-0-vhkcbm[100893]: starting mds.cephfs.compute-0.vhkcbm at 
Oct  1 09:10:57 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:10:57 np0005464214 ceph-mds[100898]: mds.cephfs.compute-0.vhkcbm Updating MDS map to version 2 from mon.0
Oct  1 09:10:57 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  1 09:10:57 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:10:57 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0) v1
Oct  1 09:10:57 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:10:57 np0005464214 ceph-mgr[75103]: [progress INFO root] complete: finished ev 1e9d9abe-f1a4-4c88-8515-a120df66529c (Updating mds.cephfs deployment (+1 -> 1))
Oct  1 09:10:57 np0005464214 ceph-mgr[75103]: [progress INFO root] Completed event 1e9d9abe-f1a4-4c88-8515-a120df66529c (Updating mds.cephfs deployment (+1 -> 1)) in 2 seconds
Oct  1 09:10:57 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mds_join_fs}] v 0) v1
Oct  1 09:10:57 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:10:57 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0) v1
Oct  1 09:10:58 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:10:58 np0005464214 ceph-mgr[75103]: log_channel(audit) log [DBG] : from='client.14267 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Oct  1 09:10:58 np0005464214 elated_carver[100891]: 
Oct  1 09:10:58 np0005464214 elated_carver[100891]: [{"placement": {"host_pattern": "*"}, "service_name": "crash", "service_type": "crash"}, {"placement": {"hosts": ["compute-0"]}, "service_id": "cephfs", "service_name": "mds.cephfs", "service_type": "mds"}, {"placement": {"hosts": ["compute-0"]}, "service_name": "mgr", "service_type": "mgr"}, {"placement": {"hosts": ["compute-0"]}, "service_name": "mon", "service_type": "mon"}, {"placement": {"hosts": ["compute-0"]}, "service_id": "default_drive_group", "service_name": "osd.default_drive_group", "service_type": "osd", "spec": {"data_devices": {"paths": ["/dev/ceph_vg0/ceph_lv0", "/dev/ceph_vg1/ceph_lv1", "/dev/ceph_vg2/ceph_lv2"]}, "filter_logic": "AND", "objectstore": "bluestore"}}, {"networks": ["192.168.122.0/24"], "placement": {"hosts": ["compute-0"]}, "service_id": "rgw", "service_name": "rgw.rgw", "service_type": "rgw", "spec": {"rgw_frontend_port": 8082}}]
Oct  1 09:10:58 np0005464214 systemd[1]: libpod-5e868367c53792f475d497902198a98b578b3dd9a86e080250c80b859ba96b78.scope: Deactivated successfully.
Oct  1 09:10:58 np0005464214 podman[100861]: 2025-10-01 13:10:58.464290637 +0000 UTC m=+0.680367933 container died 5e868367c53792f475d497902198a98b578b3dd9a86e080250c80b859ba96b78 (image=quay.io/ceph/ceph:v18, name=elated_carver, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 09:10:58 np0005464214 systemd[1]: var-lib-containers-storage-overlay-fff5a9978d32eed052368acfbcad1fa254e942b7ded155ae265358702217a4ee-merged.mount: Deactivated successfully.
Oct  1 09:10:58 np0005464214 podman[100861]: 2025-10-01 13:10:58.511871127 +0000 UTC m=+0.727948413 container remove 5e868367c53792f475d497902198a98b578b3dd9a86e080250c80b859ba96b78 (image=quay.io/ceph/ceph:v18, name=elated_carver, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct  1 09:10:58 np0005464214 systemd[1]: libpod-conmon-5e868367c53792f475d497902198a98b578b3dd9a86e080250c80b859ba96b78.scope: Deactivated successfully.
Oct  1 09:10:58 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e40 do_prune osdmap full prune enabled
Oct  1 09:10:58 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e41 e41: 3 total, 3 up, 3 in
Oct  1 09:10:58 np0005464214 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e41: 3 total, 3 up, 3 in
Oct  1 09:10:58 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"} v 0) v1
Oct  1 09:10:58 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3652552028' entity='client.rgw.rgw.compute-0.rmxmfa' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Oct  1 09:10:58 np0005464214 ceph-mon[74802]: from='client.? 192.168.122.100:0/3652552028' entity='client.rgw.rgw.compute-0.rmxmfa' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished
Oct  1 09:10:58 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:10:58 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:10:58 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:10:58 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:10:58 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:10:58 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:10:58 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).mds e3 new map
Oct  1 09:10:58 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).mds e3 print_map#012e3#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0112#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112025-10-01T13:10:42.681473+0000#012modified#0112025-10-01T13:10:42.681508+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}#012max_mds#0111#012in#011#012up#011{}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0110#012 #012 #012Standby daemons:#012 #012[mds.cephfs.compute-0.vhkcbm{-1:14265} state up:standby seq 1 addr [v2:192.168.122.100:6814/2185144476,v1:192.168.122.100:6815/2185144476] compat {c=[1],r=[1],i=[7ff]}]
Oct  1 09:10:58 np0005464214 ceph-mds[100898]: mds.cephfs.compute-0.vhkcbm Updating MDS map to version 3 from mon.0
Oct  1 09:10:58 np0005464214 ceph-mds[100898]: mds.cephfs.compute-0.vhkcbm Monitors have assigned me to become a standby.
Oct  1 09:10:58 np0005464214 ceph-mon[74802]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.100:6814/2185144476,v1:192.168.122.100:6815/2185144476] up:boot
Oct  1 09:10:58 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).mds e3 assigned standby [v2:192.168.122.100:6814/2185144476,v1:192.168.122.100:6815/2185144476] as mds.0
Oct  1 09:10:58 np0005464214 ceph-mon[74802]: log_channel(cluster) log [INF] : daemon mds.cephfs.compute-0.vhkcbm assigned to filesystem cephfs as rank 0 (now has 1 ranks)
Oct  1 09:10:58 np0005464214 ceph-mon[74802]: log_channel(cluster) log [INF] : Health check cleared: MDS_ALL_DOWN (was: 1 filesystem is offline)
Oct  1 09:10:58 np0005464214 ceph-mon[74802]: log_channel(cluster) log [INF] : Health check cleared: MDS_UP_LESS_THAN_MAX (was: 1 filesystem is online with fewer MDS than max_mds)
Oct  1 09:10:58 np0005464214 ceph-mon[74802]: log_channel(cluster) log [INF] : Cluster is now healthy
Oct  1 09:10:58 np0005464214 ceph-mon[74802]: log_channel(cluster) log [DBG] : fsmap cephfs:0 1 up:standby
Oct  1 09:10:58 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds metadata", "who": "cephfs.compute-0.vhkcbm"} v 0) v1
Oct  1 09:10:58 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-0.vhkcbm"}]: dispatch
Oct  1 09:10:58 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).mds e3 all = 0
Oct  1 09:10:58 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).mds e4 new map
Oct  1 09:10:58 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).mds e4 print_map#012e4#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0114#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112025-10-01T13:10:42.681473+0000#012modified#0112025-10-01T13:10:58.977008+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}#012max_mds#0111#012in#0110#012up#011{0=14265}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0110#012[mds.cephfs.compute-0.vhkcbm{0:14265} state up:creating seq 1 addr [v2:192.168.122.100:6814/2185144476,v1:192.168.122.100:6815/2185144476] compat {c=[1],r=[1],i=[7ff]}]#012 #012 
Oct  1 09:10:58 np0005464214 ceph-mds[100898]: mds.cephfs.compute-0.vhkcbm Updating MDS map to version 4 from mon.0
Oct  1 09:10:58 np0005464214 ceph-mds[100898]: mds.0.4 handle_mds_map i am now mds.0.4
Oct  1 09:10:58 np0005464214 ceph-mds[100898]: mds.0.4 handle_mds_map state change up:standby --> up:creating
Oct  1 09:10:58 np0005464214 ceph-mds[100898]: mds.0.cache creating system inode with ino:0x1
Oct  1 09:10:58 np0005464214 ceph-mon[74802]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-0.vhkcbm=up:creating}
Oct  1 09:10:58 np0005464214 ceph-mds[100898]: mds.0.cache creating system inode with ino:0x100
Oct  1 09:10:58 np0005464214 ceph-mds[100898]: mds.0.cache creating system inode with ino:0x600
Oct  1 09:10:58 np0005464214 ceph-mds[100898]: mds.0.cache creating system inode with ino:0x601
Oct  1 09:10:58 np0005464214 ceph-mds[100898]: mds.0.cache creating system inode with ino:0x602
Oct  1 09:10:58 np0005464214 ceph-mds[100898]: mds.0.cache creating system inode with ino:0x603
Oct  1 09:10:58 np0005464214 ceph-mds[100898]: mds.0.cache creating system inode with ino:0x604
Oct  1 09:10:58 np0005464214 ceph-mds[100898]: mds.0.cache creating system inode with ino:0x605
Oct  1 09:10:58 np0005464214 ceph-mds[100898]: mds.0.cache creating system inode with ino:0x606
Oct  1 09:10:58 np0005464214 ceph-mds[100898]: mds.0.cache creating system inode with ino:0x607
Oct  1 09:10:58 np0005464214 ceph-mds[100898]: mds.0.cache creating system inode with ino:0x608
Oct  1 09:10:58 np0005464214 ceph-mds[100898]: mds.0.cache creating system inode with ino:0x609
Oct  1 09:10:59 np0005464214 ceph-mds[100898]: mds.0.4 creating_done
Oct  1 09:10:59 np0005464214 ceph-mon[74802]: log_channel(cluster) log [INF] : daemon mds.cephfs.compute-0.vhkcbm is now active in filesystem cephfs as rank 0
Oct  1 09:10:59 np0005464214 podman[101172]: 2025-10-01 13:10:59.07545426 +0000 UTC m=+0.076311396 container exec dfadbb96d7d51fa7c2ed721145616ced603621ae12bae5125feaf16532ef7320 (image=quay.io/ceph/ceph:v18, name=ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-mon-compute-0, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct  1 09:10:59 np0005464214 podman[101172]: 2025-10-01 13:10:59.193513688 +0000 UTC m=+0.194370844 container exec_died dfadbb96d7d51fa7c2ed721145616ced603621ae12bae5125feaf16532ef7320 (image=quay.io/ceph/ceph:v18, name=ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-mon-compute-0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 09:10:59 np0005464214 ceph-osd[89484]: log_channel(cluster) log [DBG] : 3.2 scrub starts
Oct  1 09:10:59 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 41 pg[9.0( empty local-lis/les=0/0 n=0 ec=41/41 lis/c=0/0 les/c/f=0/0/0 sis=41) [1] r=0 lpr=41 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:10:59 np0005464214 ceph-osd[89484]: log_channel(cluster) log [DBG] : 3.2 scrub ok
Oct  1 09:10:59 np0005464214 python3[101297]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid eb4b6ead-01d1-53b3-a52a-47dcc600555f -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch ps -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  1 09:10:59 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e41 do_prune osdmap full prune enabled
Oct  1 09:10:59 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3652552028' entity='client.rgw.rgw.compute-0.rmxmfa' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Oct  1 09:10:59 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e42 e42: 3 total, 3 up, 3 in
Oct  1 09:10:59 np0005464214 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e42: 3 total, 3 up, 3 in
Oct  1 09:10:59 np0005464214 ceph-mon[74802]: from='client.? 192.168.122.100:0/3652552028' entity='client.rgw.rgw.compute-0.rmxmfa' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Oct  1 09:10:59 np0005464214 ceph-mon[74802]: daemon mds.cephfs.compute-0.vhkcbm assigned to filesystem cephfs as rank 0 (now has 1 ranks)
Oct  1 09:10:59 np0005464214 ceph-mon[74802]: Health check cleared: MDS_ALL_DOWN (was: 1 filesystem is offline)
Oct  1 09:10:59 np0005464214 ceph-mon[74802]: Health check cleared: MDS_UP_LESS_THAN_MAX (was: 1 filesystem is online with fewer MDS than max_mds)
Oct  1 09:10:59 np0005464214 ceph-mon[74802]: Cluster is now healthy
Oct  1 09:10:59 np0005464214 ceph-mon[74802]: daemon mds.cephfs.compute-0.vhkcbm is now active in filesystem cephfs as rank 0
Oct  1 09:10:59 np0005464214 ceph-mon[74802]: from='client.? 192.168.122.100:0/3652552028' entity='client.rgw.rgw.compute-0.rmxmfa' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Oct  1 09:10:59 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 42 pg[9.0( empty local-lis/les=41/42 n=0 ec=41/41 lis/c=0/0 les/c/f=0/0/0 sis=41) [1] r=0 lpr=41 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:10:59 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v99: 179 pgs: 2 unknown, 177 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:10:59 np0005464214 podman[101328]: 2025-10-01 13:10:59.740906287 +0000 UTC m=+0.046401654 container create 6559262c4fe265653d81648ac6f0d4d5570fc00528a00c572f0080f1d3b899e0 (image=quay.io/ceph/ceph:v18, name=beautiful_shamir, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 09:10:59 np0005464214 systemd[1]: Started libpod-conmon-6559262c4fe265653d81648ac6f0d4d5570fc00528a00c572f0080f1d3b899e0.scope.
Oct  1 09:10:59 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:10:59 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/291c74d384e92641d8f36aac3fae5c96f84a7b01981c549d948911512f3a6145/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 09:10:59 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/291c74d384e92641d8f36aac3fae5c96f84a7b01981c549d948911512f3a6145/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 09:10:59 np0005464214 podman[101328]: 2025-10-01 13:10:59.72687002 +0000 UTC m=+0.032365417 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  1 09:10:59 np0005464214 podman[101328]: 2025-10-01 13:10:59.825487125 +0000 UTC m=+0.130982582 container init 6559262c4fe265653d81648ac6f0d4d5570fc00528a00c572f0080f1d3b899e0 (image=quay.io/ceph/ceph:v18, name=beautiful_shamir, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 09:10:59 np0005464214 podman[101328]: 2025-10-01 13:10:59.835222981 +0000 UTC m=+0.140718368 container start 6559262c4fe265653d81648ac6f0d4d5570fc00528a00c572f0080f1d3b899e0 (image=quay.io/ceph/ceph:v18, name=beautiful_shamir, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 09:10:59 np0005464214 podman[101328]: 2025-10-01 13:10:59.838985526 +0000 UTC m=+0.144480943 container attach 6559262c4fe265653d81648ac6f0d4d5570fc00528a00c572f0080f1d3b899e0 (image=quay.io/ceph/ceph:v18, name=beautiful_shamir, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct  1 09:10:59 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  1 09:10:59 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:10:59 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  1 09:10:59 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:10:59 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  1 09:10:59 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  1 09:10:59 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  1 09:10:59 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  1 09:10:59 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  1 09:10:59 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:10:59 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev 0d850c8e-f707-4904-b299-68e7cd43a264 does not exist
Oct  1 09:10:59 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev fd509925-4ac6-40c2-9fc8-572e1c44b522 does not exist
Oct  1 09:10:59 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev 05b89096-e735-4f4f-827b-5d766f5a7532 does not exist
Oct  1 09:10:59 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  1 09:10:59 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  1 09:10:59 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  1 09:10:59 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  1 09:10:59 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  1 09:10:59 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  1 09:10:59 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).mds e5 new map
Oct  1 09:10:59 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).mds e5 print_map#012e5#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0115#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112025-10-01T13:10:42.681473+0000#012modified#0112025-10-01T13:10:59.983402+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}#012max_mds#0111#012in#0110#012up#011{0=14265}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0110#012[mds.cephfs.compute-0.vhkcbm{0:14265} state up:active seq 2 join_fscid=1 addr [v2:192.168.122.100:6814/2185144476,v1:192.168.122.100:6815/2185144476] compat {c=[1],r=[1],i=[7ff]}]#012 #012 
Oct  1 09:10:59 np0005464214 ceph-mon[74802]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.100:6814/2185144476,v1:192.168.122.100:6815/2185144476] up:active
Oct  1 09:10:59 np0005464214 ceph-mon[74802]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-0.vhkcbm=up:active}
Oct  1 09:10:59 np0005464214 ceph-mds[100898]: mds.cephfs.compute-0.vhkcbm Updating MDS map to version 5 from mon.0
Oct  1 09:10:59 np0005464214 ceph-mds[100898]: mds.0.4 handle_mds_map i am now mds.0.4
Oct  1 09:10:59 np0005464214 ceph-mds[100898]: mds.0.4 handle_mds_map state change up:creating --> up:active
Oct  1 09:10:59 np0005464214 ceph-mds[100898]: mds.0.4 recovery_done -- successful recovery!
Oct  1 09:10:59 np0005464214 ceph-mds[100898]: mds.0.4 active_start
Oct  1 09:11:00 np0005464214 ceph-osd[90500]: log_channel(cluster) log [DBG] : 2.4 scrub starts
Oct  1 09:11:00 np0005464214 ceph-osd[90500]: log_channel(cluster) log [DBG] : 2.4 scrub ok
Oct  1 09:11:00 np0005464214 ceph-osd[88455]: log_channel(cluster) log [DBG] : 4.4 scrub starts
Oct  1 09:11:00 np0005464214 ceph-osd[88455]: log_channel(cluster) log [DBG] : 4.4 scrub ok
Oct  1 09:11:00 np0005464214 ceph-mgr[75103]: log_channel(audit) log [DBG] : from='client.14269 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Oct  1 09:11:00 np0005464214 beautiful_shamir[101364]: 
Oct  1 09:11:00 np0005464214 beautiful_shamir[101364]: [{"container_id": "0abeef01559d", "container_image_digests": ["quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "cpu_percentage": "0.45%", "created": "2025-10-01T13:09:28.103793Z", "daemon_id": "compute-0", "daemon_name": "crash.compute-0", "daemon_type": "crash", "events": ["2025-10-01T13:09:28.148660Z daemon:crash.compute-0 [INFO] \"Deployed crash.compute-0 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-10-01T13:10:59.931975Z", "memory_usage": 11639193, "ports": [], "service_name": "crash", "started": "2025-10-01T13:09:27.988610Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f@crash.compute-0", "version": "18.2.7"}, {"container_id": "14f330f0450c", "container_image_digests": ["quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "cpu_percentage": "7.17%", "created": "2025-10-01T13:10:57.930405Z", "daemon_id": "cephfs.compute-0.vhkcbm", "daemon_name": "mds.cephfs.compute-0.vhkcbm", "daemon_type": "mds", "events": ["2025-10-01T13:10:57.980569Z daemon:mds.cephfs.compute-0.vhkcbm [INFO] \"Deployed mds.cephfs.compute-0.vhkcbm on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-10-01T13:10:59.932602Z", 
"memory_usage": 13495173, "ports": [], "service_name": "mds.cephfs", "started": "2025-10-01T13:10:57.817630Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f@mds.cephfs.compute-0.vhkcbm", "version": "18.2.7"}, {"container_id": "d581f7f0a3e6", "container_image_digests": ["quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph:v18", "cpu_percentage": "25.95%", "created": "2025-10-01T13:08:02.686455Z", "daemon_id": "compute-0.puxjpb", "daemon_name": "mgr.compute-0.puxjpb", "daemon_type": "mgr", "events": ["2025-10-01T13:09:32.484128Z daemon:mgr.compute-0.puxjpb [INFO] \"Reconfigured mgr.compute-0.puxjpb on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-10-01T13:10:59.931834Z", "memory_usage": 549768396, "ports": [9283, 8765], "service_name": "mgr", "started": "2025-10-01T13:08:02.573938Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f@mgr.compute-0.puxjpb", "version": "18.2.7"}, {"container_id": "dfadbb96d7d5", "container_image_digests": ["quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph:v18", "cpu_percentage": "2.11%", "created": "2025-10-01T13:07:57.222279Z", "daemon_id": "compute-0", "daemon_name": "mon.compute-0", "daemon_type": "mon", "events": ["2025-10-01T13:09:31.752707Z daemon:mon.compute-0 [INFO] \"Reconfigured mon.compute-0 on host 'compute-0'\""], "hostname": "compute-0", "is_active": 
false, "last_refresh": "2025-10-01T13:10:59.931614Z", "memory_request": 2147483648, "memory_usage": 41450209, "ports": [], "service_name": "mon", "started": "2025-10-01T13:08:00.154422Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f@mon.compute-0", "version": "18.2.7"}, {"container_id": "ae2fd024bf44", "container_image_digests": ["quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "cpu_percentage": "1.44%", "created": "2025-10-01T13:09:54.436549Z", "daemon_id": "0", "daemon_name": "osd.0", "daemon_type": "osd", "events": ["2025-10-01T13:09:54.480189Z daemon:osd.0 [INFO] \"Deployed osd.0 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-10-01T13:10:59.932104Z", "memory_request": 4294967296, "memory_usage": 59087257, "ports": [], "service_name": "osd.default_drive_group", "started": "2025-10-01T13:09:54.329396Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f@osd.0", "version": "18.2.7"}, {"container_id": "c7bfaf4b1718", "container_image_digests": ["quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "cpu_percentage": "1.71%", "created": "2025-10-01T13:09:59.354689Z", "daemon_id": "1", "daemon_name": "osd.1", "daemon_type": "osd", "events": 
["2025-10-01T13:09:59.425766Z daemon:osd.1 [INFO] \"Deployed osd.1 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-10-01T13:10:59.932229Z", "memory_request": 4294967296, "memory_usage": 61708697, "ports": [], "service_name": "osd.default_drive_group", "started": "2025-10-01T13:09:59.164391Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f@osd.1", "version": "18.2.7"}, {"container_id": "1866f3a29a4e", "container_image_digests": ["quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "cpu_percentage": "1.78%", "created": "2025-10-01T13:10:04.318830Z", "daemon_id": "2", "daemon_name": "osd.2", "daemon_type": "osd", "events": ["2025-10-01T13:10:04.393416Z daemon:osd.2 [INFO] \"Deployed osd.2 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-10-01T13:10:59.932351Z", "memory_request": 4294967296, "memory_usage": 60083404, "ports": [], "service_name": "osd.default_drive_group", "started": "2025-10-01T13:10:04.117457Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f@osd.2", "version": "18.2.7"}, {"container_id": "aad65a249f3d", "container_image_digests": ["quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", 
"cpu_percentage": "3.13%", "created": "2025-10-01T13:10:55.942344Z", "daemon_id": "rgw.compute-0.rmxmfa", "daemon_name": "rgw.rgw.compute-0.rmxmfa", "daemon_type": "rgw", "events": ["2025-10-01T
Oct  1 09:11:00 np0005464214 systemd[1]: libpod-6559262c4fe265653d81648ac6f0d4d5570fc00528a00c572f0080f1d3b899e0.scope: Deactivated successfully.
Oct  1 09:11:00 np0005464214 podman[101328]: 2025-10-01 13:11:00.40590729 +0000 UTC m=+0.711402657 container died 6559262c4fe265653d81648ac6f0d4d5570fc00528a00c572f0080f1d3b899e0 (image=quay.io/ceph/ceph:v18, name=beautiful_shamir, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct  1 09:11:00 np0005464214 systemd[1]: var-lib-containers-storage-overlay-291c74d384e92641d8f36aac3fae5c96f84a7b01981c549d948911512f3a6145-merged.mount: Deactivated successfully.
Oct  1 09:11:00 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e42 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:11:00 np0005464214 podman[101328]: 2025-10-01 13:11:00.447836829 +0000 UTC m=+0.753332206 container remove 6559262c4fe265653d81648ac6f0d4d5570fc00528a00c572f0080f1d3b899e0 (image=quay.io/ceph/ceph:v18, name=beautiful_shamir, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 09:11:00 np0005464214 systemd[1]: libpod-conmon-6559262c4fe265653d81648ac6f0d4d5570fc00528a00c572f0080f1d3b899e0.scope: Deactivated successfully.
Oct  1 09:11:00 np0005464214 ceph-osd[89484]: log_channel(cluster) log [DBG] : 3.3 scrub starts
Oct  1 09:11:00 np0005464214 ceph-osd[89484]: log_channel(cluster) log [DBG] : 3.3 scrub ok
Oct  1 09:11:00 np0005464214 rsyslogd[1009]: message too long (8588) with configured size 8096, begin of message is: [{"container_id": "0abeef01559d", "container_image_digests": ["quay.io/ceph/ceph [v8.2506.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Oct  1 09:11:00 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e42 do_prune osdmap full prune enabled
Oct  1 09:11:00 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e43 e43: 3 total, 3 up, 3 in
Oct  1 09:11:00 np0005464214 podman[101560]: 2025-10-01 13:11:00.722601871 +0000 UTC m=+0.061295299 container create 87f4685100bacc181719033108d4cab84314a819cb2d0f7bb5c1f7c03cd42013 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_diffie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 09:11:00 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:11:00 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:11:00 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  1 09:11:00 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:11:00 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  1 09:11:00 np0005464214 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e43: 3 total, 3 up, 3 in
Oct  1 09:11:00 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 43 pg[10.0( empty local-lis/les=0/0 n=0 ec=43/43 lis/c=0/0 les/c/f=0/0/0 sis=43) [2] r=0 lpr=43 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:11:00 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"} v 0) v1
Oct  1 09:11:00 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3652552028' entity='client.rgw.rgw.compute-0.rmxmfa' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Oct  1 09:11:00 np0005464214 systemd[1]: Started libpod-conmon-87f4685100bacc181719033108d4cab84314a819cb2d0f7bb5c1f7c03cd42013.scope.
Oct  1 09:11:00 np0005464214 podman[101560]: 2025-10-01 13:11:00.694505304 +0000 UTC m=+0.033198782 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 09:11:00 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:11:00 np0005464214 podman[101560]: 2025-10-01 13:11:00.830482328 +0000 UTC m=+0.169175766 container init 87f4685100bacc181719033108d4cab84314a819cb2d0f7bb5c1f7c03cd42013 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_diffie, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 09:11:00 np0005464214 podman[101560]: 2025-10-01 13:11:00.841788953 +0000 UTC m=+0.180482381 container start 87f4685100bacc181719033108d4cab84314a819cb2d0f7bb5c1f7c03cd42013 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_diffie, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Oct  1 09:11:00 np0005464214 podman[101560]: 2025-10-01 13:11:00.845556897 +0000 UTC m=+0.184250375 container attach 87f4685100bacc181719033108d4cab84314a819cb2d0f7bb5c1f7c03cd42013 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_diffie, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 09:11:00 np0005464214 gracious_diffie[101576]: 167 167
Oct  1 09:11:00 np0005464214 systemd[1]: libpod-87f4685100bacc181719033108d4cab84314a819cb2d0f7bb5c1f7c03cd42013.scope: Deactivated successfully.
Oct  1 09:11:00 np0005464214 podman[101560]: 2025-10-01 13:11:00.848013683 +0000 UTC m=+0.186707111 container died 87f4685100bacc181719033108d4cab84314a819cb2d0f7bb5c1f7c03cd42013 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_diffie, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS)
Oct  1 09:11:00 np0005464214 systemd[1]: var-lib-containers-storage-overlay-a4da63151bf7b0579ea858ff99e4f0cbee7ba30e611b38f0f201f8096b8e1a4d-merged.mount: Deactivated successfully.
Oct  1 09:11:00 np0005464214 podman[101560]: 2025-10-01 13:11:00.896773648 +0000 UTC m=+0.235467076 container remove 87f4685100bacc181719033108d4cab84314a819cb2d0f7bb5c1f7c03cd42013 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_diffie, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 09:11:00 np0005464214 systemd[1]: libpod-conmon-87f4685100bacc181719033108d4cab84314a819cb2d0f7bb5c1f7c03cd42013.scope: Deactivated successfully.
Oct  1 09:11:01 np0005464214 podman[101598]: 2025-10-01 13:11:01.074597857 +0000 UTC m=+0.043967101 container create 2ea5c02b4462f0a8071cecbbe558bf3fb2fac8dae73a293d1ef9bccce9a19bb6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_ptolemy, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct  1 09:11:01 np0005464214 systemd[1]: Started libpod-conmon-2ea5c02b4462f0a8071cecbbe558bf3fb2fac8dae73a293d1ef9bccce9a19bb6.scope.
Oct  1 09:11:01 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:11:01 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4ba11c38fec38ea81eb25184507f14e35ecae937f061c8932de0e5936e24a21/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 09:11:01 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4ba11c38fec38ea81eb25184507f14e35ecae937f061c8932de0e5936e24a21/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 09:11:01 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4ba11c38fec38ea81eb25184507f14e35ecae937f061c8932de0e5936e24a21/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 09:11:01 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4ba11c38fec38ea81eb25184507f14e35ecae937f061c8932de0e5936e24a21/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 09:11:01 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4ba11c38fec38ea81eb25184507f14e35ecae937f061c8932de0e5936e24a21/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  1 09:11:01 np0005464214 podman[101598]: 2025-10-01 13:11:01.056293199 +0000 UTC m=+0.025662483 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 09:11:01 np0005464214 podman[101598]: 2025-10-01 13:11:01.156642086 +0000 UTC m=+0.126011371 container init 2ea5c02b4462f0a8071cecbbe558bf3fb2fac8dae73a293d1ef9bccce9a19bb6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_ptolemy, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Oct  1 09:11:01 np0005464214 podman[101598]: 2025-10-01 13:11:01.167126417 +0000 UTC m=+0.136495671 container start 2ea5c02b4462f0a8071cecbbe558bf3fb2fac8dae73a293d1ef9bccce9a19bb6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_ptolemy, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 09:11:01 np0005464214 podman[101598]: 2025-10-01 13:11:01.170192189 +0000 UTC m=+0.139561443 container attach 2ea5c02b4462f0a8071cecbbe558bf3fb2fac8dae73a293d1ef9bccce9a19bb6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_ptolemy, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 09:11:01 np0005464214 python3[101644]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid eb4b6ead-01d1-53b3-a52a-47dcc600555f -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   -s -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  1 09:11:01 np0005464214 podman[101645]: 2025-10-01 13:11:01.518234755 +0000 UTC m=+0.053317756 container create 6d18321cf9409c285c4f7f009b14099fa3a13e3367257839ac697a8081ad95cb (image=quay.io/ceph/ceph:v18, name=exciting_ritchie, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 09:11:01 np0005464214 systemd[1]: Started libpod-conmon-6d18321cf9409c285c4f7f009b14099fa3a13e3367257839ac697a8081ad95cb.scope.
Oct  1 09:11:01 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:11:01 np0005464214 podman[101645]: 2025-10-01 13:11:01.487157298 +0000 UTC m=+0.022240309 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  1 09:11:01 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d9e0d2e5d55ab934ed8f3f08d1fef841e90c9fc43ab471334fa7e887d00e396/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 09:11:01 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d9e0d2e5d55ab934ed8f3f08d1fef841e90c9fc43ab471334fa7e887d00e396/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 09:11:01 np0005464214 podman[101645]: 2025-10-01 13:11:01.602556374 +0000 UTC m=+0.137639405 container init 6d18321cf9409c285c4f7f009b14099fa3a13e3367257839ac697a8081ad95cb (image=quay.io/ceph/ceph:v18, name=exciting_ritchie, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 09:11:01 np0005464214 podman[101645]: 2025-10-01 13:11:01.609512777 +0000 UTC m=+0.144595778 container start 6d18321cf9409c285c4f7f009b14099fa3a13e3367257839ac697a8081ad95cb (image=quay.io/ceph/ceph:v18, name=exciting_ritchie, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 09:11:01 np0005464214 podman[101645]: 2025-10-01 13:11:01.61293908 +0000 UTC m=+0.148022161 container attach 6d18321cf9409c285c4f7f009b14099fa3a13e3367257839ac697a8081ad95cb (image=quay.io/ceph/ceph:v18, name=exciting_ritchie, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Oct  1 09:11:01 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e43 do_prune osdmap full prune enabled
Oct  1 09:11:01 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v101: 180 pgs: 3 unknown, 177 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:11:01 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3652552028' entity='client.rgw.rgw.compute-0.rmxmfa' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Oct  1 09:11:01 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e44 e44: 3 total, 3 up, 3 in
Oct  1 09:11:01 np0005464214 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e44: 3 total, 3 up, 3 in
Oct  1 09:11:01 np0005464214 ceph-mon[74802]: from='client.? 192.168.122.100:0/3652552028' entity='client.rgw.rgw.compute-0.rmxmfa' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Oct  1 09:11:01 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 44 pg[10.0( empty local-lis/les=43/44 n=0 ec=43/43 lis/c=0/0 les/c/f=0/0/0 sis=43) [2] r=0 lpr=43 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:11:01 np0005464214 ceph-mon[74802]: log_channel(cluster) log [WRN] : Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Oct  1 09:11:02 np0005464214 ceph-osd[90500]: log_channel(cluster) log [DBG] : 2.5 scrub starts
Oct  1 09:11:02 np0005464214 ceph-osd[90500]: log_channel(cluster) log [DBG] : 2.5 scrub ok
Oct  1 09:11:02 np0005464214 infallible_ptolemy[101614]: --> passed data devices: 0 physical, 3 LVM
Oct  1 09:11:02 np0005464214 infallible_ptolemy[101614]: --> relative data size: 1.0
Oct  1 09:11:02 np0005464214 infallible_ptolemy[101614]: --> All data devices are unavailable
Oct  1 09:11:02 np0005464214 systemd[1]: libpod-2ea5c02b4462f0a8071cecbbe558bf3fb2fac8dae73a293d1ef9bccce9a19bb6.scope: Deactivated successfully.
Oct  1 09:11:02 np0005464214 podman[101598]: 2025-10-01 13:11:02.148582383 +0000 UTC m=+1.117951627 container died 2ea5c02b4462f0a8071cecbbe558bf3fb2fac8dae73a293d1ef9bccce9a19bb6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_ptolemy, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 09:11:02 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0) v1
Oct  1 09:11:02 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1003077922' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Oct  1 09:11:02 np0005464214 exciting_ritchie[101660]: 
Oct  1 09:11:02 np0005464214 exciting_ritchie[101660]: {"fsid":"eb4b6ead-01d1-53b3-a52a-47dcc600555f","health":{"status":"HEALTH_WARN","checks":{"POOL_APP_NOT_ENABLED":{"severity":"HEALTH_WARN","summary":{"message":"1 pool(s) do not have an application enabled","count":1},"muted":false}},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":181,"monmap":{"epoch":1,"min_mon_release_name":"reef","num_mons":1},"osdmap":{"epoch":44,"num_osds":3,"num_up_osds":3,"osd_up_since":1759324211,"num_in_osds":3,"osd_in_since":1759324184,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"active+clean","count":177},{"state_name":"unknown","count":3}],"num_pgs":180,"num_pools":10,"num_objects":2,"data_bytes":459280,"bytes_used":84111360,"bytes_avail":64327815168,"bytes_total":64411926528,"unknown_pgs_ratio":0.01666666753590107},"fsmap":{"epoch":5,"id":1,"up":1,"in":1,"max":1,"by_rank":[{"filesystem_id":1,"rank":0,"name":"cephfs.compute-0.vhkcbm","status":"up:active","gid":14265}],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":2,"modified":"2025-10-01T13:09:49.717098+0000","services":{}},"progress_events":{"117aa2cc-6633-48b9-9615-b57d148f5b2d":{"message":"Global Recovery Event (5s)\n      [===========================.] ","progress":0.99438202381134033,"add_to_ceph_s":true}}}
Oct  1 09:11:02 np0005464214 systemd[1]: libpod-6d18321cf9409c285c4f7f009b14099fa3a13e3367257839ac697a8081ad95cb.scope: Deactivated successfully.
Oct  1 09:11:02 np0005464214 systemd[1]: var-lib-containers-storage-overlay-d4ba11c38fec38ea81eb25184507f14e35ecae937f061c8932de0e5936e24a21-merged.mount: Deactivated successfully.
Oct  1 09:11:02 np0005464214 ceph-osd[89484]: log_channel(cluster) log [DBG] : 3.4 scrub starts
Oct  1 09:11:02 np0005464214 ceph-osd[89484]: log_channel(cluster) log [DBG] : 3.4 scrub ok
Oct  1 09:11:02 np0005464214 podman[101645]: 2025-10-01 13:11:02.530216472 +0000 UTC m=+1.065299513 container died 6d18321cf9409c285c4f7f009b14099fa3a13e3367257839ac697a8081ad95cb (image=quay.io/ceph/ceph:v18, name=exciting_ritchie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Oct  1 09:11:02 np0005464214 podman[101598]: 2025-10-01 13:11:02.529009894 +0000 UTC m=+1.498379178 container remove 2ea5c02b4462f0a8071cecbbe558bf3fb2fac8dae73a293d1ef9bccce9a19bb6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_ptolemy, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 09:11:02 np0005464214 systemd[1]: var-lib-containers-storage-overlay-4d9e0d2e5d55ab934ed8f3f08d1fef841e90c9fc43ab471334fa7e887d00e396-merged.mount: Deactivated successfully.
Oct  1 09:11:02 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e44 do_prune osdmap full prune enabled
Oct  1 09:11:02 np0005464214 podman[101645]: 2025-10-01 13:11:02.789847063 +0000 UTC m=+1.324930064 container remove 6d18321cf9409c285c4f7f009b14099fa3a13e3367257839ac697a8081ad95cb (image=quay.io/ceph/ceph:v18, name=exciting_ritchie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 09:11:02 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e45 e45: 3 total, 3 up, 3 in
Oct  1 09:11:02 np0005464214 ceph-mgr[75103]: [progress INFO root] Writing back 11 completed events
Oct  1 09:11:02 np0005464214 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e45: 3 total, 3 up, 3 in
Oct  1 09:11:02 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Oct  1 09:11:02 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"} v 0) v1
Oct  1 09:11:02 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/954311754' entity='client.rgw.rgw.compute-0.rmxmfa' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Oct  1 09:11:02 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 45 pg[11.0( empty local-lis/les=0/0 n=0 ec=45/45 lis/c=0/0 les/c/f=0/0/0 sis=45) [1] r=0 lpr=45 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:11:02 np0005464214 ceph-mon[74802]: from='client.? 192.168.122.100:0/3652552028' entity='client.rgw.rgw.compute-0.rmxmfa' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Oct  1 09:11:02 np0005464214 ceph-mon[74802]: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Oct  1 09:11:02 np0005464214 systemd[1]: libpod-conmon-2ea5c02b4462f0a8071cecbbe558bf3fb2fac8dae73a293d1ef9bccce9a19bb6.scope: Deactivated successfully.
Oct  1 09:11:02 np0005464214 systemd[1]: libpod-conmon-6d18321cf9409c285c4f7f009b14099fa3a13e3367257839ac697a8081ad95cb.scope: Deactivated successfully.
Oct  1 09:11:02 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:11:03 np0005464214 podman[101889]: 2025-10-01 13:11:03.153626988 +0000 UTC m=+0.049182250 container create 1191cc06c3e0cb41d83cb728b9095fc4fb072df506668e1428ec75b9f2418e1d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_ishizaka, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Oct  1 09:11:03 np0005464214 ceph-osd[88455]: log_channel(cluster) log [DBG] : 4.5 deep-scrub starts
Oct  1 09:11:03 np0005464214 ceph-osd[88455]: log_channel(cluster) log [DBG] : 4.5 deep-scrub ok
Oct  1 09:11:03 np0005464214 systemd[1]: Started libpod-conmon-1191cc06c3e0cb41d83cb728b9095fc4fb072df506668e1428ec75b9f2418e1d.scope.
Oct  1 09:11:03 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:11:03 np0005464214 podman[101889]: 2025-10-01 13:11:03.132855464 +0000 UTC m=+0.028410756 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 09:11:03 np0005464214 podman[101889]: 2025-10-01 13:11:03.233875702 +0000 UTC m=+0.129430964 container init 1191cc06c3e0cb41d83cb728b9095fc4fb072df506668e1428ec75b9f2418e1d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_ishizaka, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 09:11:03 np0005464214 podman[101889]: 2025-10-01 13:11:03.241949218 +0000 UTC m=+0.137504510 container start 1191cc06c3e0cb41d83cb728b9095fc4fb072df506668e1428ec75b9f2418e1d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_ishizaka, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Oct  1 09:11:03 np0005464214 podman[101889]: 2025-10-01 13:11:03.245538868 +0000 UTC m=+0.141094160 container attach 1191cc06c3e0cb41d83cb728b9095fc4fb072df506668e1428ec75b9f2418e1d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_ishizaka, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Oct  1 09:11:03 np0005464214 recursing_ishizaka[101905]: 167 167
Oct  1 09:11:03 np0005464214 systemd[1]: libpod-1191cc06c3e0cb41d83cb728b9095fc4fb072df506668e1428ec75b9f2418e1d.scope: Deactivated successfully.
Oct  1 09:11:03 np0005464214 podman[101889]: 2025-10-01 13:11:03.248372914 +0000 UTC m=+0.143928176 container died 1191cc06c3e0cb41d83cb728b9095fc4fb072df506668e1428ec75b9f2418e1d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_ishizaka, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 09:11:03 np0005464214 systemd[1]: var-lib-containers-storage-overlay-cafe0e75f85f03ce1ac0202569a0b14dce26ffd9a8a030472023fd6a226ca3df-merged.mount: Deactivated successfully.
Oct  1 09:11:03 np0005464214 podman[101889]: 2025-10-01 13:11:03.300549935 +0000 UTC m=+0.196105227 container remove 1191cc06c3e0cb41d83cb728b9095fc4fb072df506668e1428ec75b9f2418e1d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_ishizaka, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 09:11:03 np0005464214 systemd[1]: libpod-conmon-1191cc06c3e0cb41d83cb728b9095fc4fb072df506668e1428ec75b9f2418e1d.scope: Deactivated successfully.
Oct  1 09:11:03 np0005464214 podman[101929]: 2025-10-01 13:11:03.455818316 +0000 UTC m=+0.050279623 container create 3ae37ec80a7845be6e87d48ff228e5d5ef85b1973e44430e0ee58646e751728d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_napier, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct  1 09:11:03 np0005464214 systemd[1]: Started libpod-conmon-3ae37ec80a7845be6e87d48ff228e5d5ef85b1973e44430e0ee58646e751728d.scope.
Oct  1 09:11:03 np0005464214 podman[101929]: 2025-10-01 13:11:03.435104914 +0000 UTC m=+0.029566251 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 09:11:03 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:11:03 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/30b52093694e9de373a3fb86f678ef90cc0b04d52689e94029132fb8be4d5891/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 09:11:03 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/30b52093694e9de373a3fb86f678ef90cc0b04d52689e94029132fb8be4d5891/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 09:11:03 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/30b52093694e9de373a3fb86f678ef90cc0b04d52689e94029132fb8be4d5891/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 09:11:03 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/30b52093694e9de373a3fb86f678ef90cc0b04d52689e94029132fb8be4d5891/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 09:11:03 np0005464214 podman[101929]: 2025-10-01 13:11:03.565149977 +0000 UTC m=+0.159611284 container init 3ae37ec80a7845be6e87d48ff228e5d5ef85b1973e44430e0ee58646e751728d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_napier, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct  1 09:11:03 np0005464214 podman[101929]: 2025-10-01 13:11:03.572531172 +0000 UTC m=+0.166992469 container start 3ae37ec80a7845be6e87d48ff228e5d5ef85b1973e44430e0ee58646e751728d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_napier, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 09:11:03 np0005464214 podman[101929]: 2025-10-01 13:11:03.575929126 +0000 UTC m=+0.170390423 container attach 3ae37ec80a7845be6e87d48ff228e5d5ef85b1973e44430e0ee58646e751728d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_napier, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 09:11:03 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v104: 181 pgs: 1 unknown, 180 active+clean; 452 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 1015 B/s rd, 4.5 KiB/s wr, 12 op/s
Oct  1 09:11:03 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e45 do_prune osdmap full prune enabled
Oct  1 09:11:03 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/954311754' entity='client.rgw.rgw.compute-0.rmxmfa' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Oct  1 09:11:03 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e46 e46: 3 total, 3 up, 3 in
Oct  1 09:11:03 np0005464214 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e46: 3 total, 3 up, 3 in
Oct  1 09:11:03 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"} v 0) v1
Oct  1 09:11:03 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/954311754' entity='client.rgw.rgw.compute-0.rmxmfa' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Oct  1 09:11:03 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 46 pg[11.0( empty local-lis/les=45/46 n=0 ec=45/45 lis/c=0/0 les/c/f=0/0/0 sis=45) [1] r=0 lpr=45 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:11:03 np0005464214 ceph-mon[74802]: from='client.? 192.168.122.100:0/954311754' entity='client.rgw.rgw.compute-0.rmxmfa' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Oct  1 09:11:03 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:11:03 np0005464214 ceph-mon[74802]: from='client.? 192.168.122.100:0/954311754' entity='client.rgw.rgw.compute-0.rmxmfa' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Oct  1 09:11:03 np0005464214 ceph-mon[74802]: from='client.? 192.168.122.100:0/954311754' entity='client.rgw.rgw.compute-0.rmxmfa' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Oct  1 09:11:03 np0005464214 python3[101975]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid eb4b6ead-01d1-53b3-a52a-47dcc600555f -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config dump -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  1 09:11:03 np0005464214 podman[101976]: 2025-10-01 13:11:03.976049928 +0000 UTC m=+0.063711582 container create f76cb0a44985d1b20b065665427dd79a3224da2db8c93445f92540a6202579bb (image=quay.io/ceph/ceph:v18, name=happy_mahavira, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 09:11:04 np0005464214 systemd[1]: Started libpod-conmon-f76cb0a44985d1b20b065665427dd79a3224da2db8c93445f92540a6202579bb.scope.
Oct  1 09:11:04 np0005464214 podman[101976]: 2025-10-01 13:11:03.950104668 +0000 UTC m=+0.037766392 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  1 09:11:04 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:11:04 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4257014eac995d7b0182c85518d66914a3ee31e428bf9a45902ad6c81942cca4/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 09:11:04 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4257014eac995d7b0182c85518d66914a3ee31e428bf9a45902ad6c81942cca4/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 09:11:04 np0005464214 podman[101976]: 2025-10-01 13:11:04.079010075 +0000 UTC m=+0.166671729 container init f76cb0a44985d1b20b065665427dd79a3224da2db8c93445f92540a6202579bb (image=quay.io/ceph/ceph:v18, name=happy_mahavira, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 09:11:04 np0005464214 podman[101976]: 2025-10-01 13:11:04.090060002 +0000 UTC m=+0.177721676 container start f76cb0a44985d1b20b065665427dd79a3224da2db8c93445f92540a6202579bb (image=quay.io/ceph/ceph:v18, name=happy_mahavira, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 09:11:04 np0005464214 podman[101976]: 2025-10-01 13:11:04.094992763 +0000 UTC m=+0.182654447 container attach f76cb0a44985d1b20b065665427dd79a3224da2db8c93445f92540a6202579bb (image=quay.io/ceph/ceph:v18, name=happy_mahavira, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3)
Oct  1 09:11:04 np0005464214 objective_napier[101945]: {
Oct  1 09:11:04 np0005464214 objective_napier[101945]:    "0": [
Oct  1 09:11:04 np0005464214 objective_napier[101945]:        {
Oct  1 09:11:04 np0005464214 objective_napier[101945]:            "devices": [
Oct  1 09:11:04 np0005464214 objective_napier[101945]:                "/dev/loop3"
Oct  1 09:11:04 np0005464214 objective_napier[101945]:            ],
Oct  1 09:11:04 np0005464214 objective_napier[101945]:            "lv_name": "ceph_lv0",
Oct  1 09:11:04 np0005464214 objective_napier[101945]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  1 09:11:04 np0005464214 objective_napier[101945]:            "lv_size": "21470642176",
Oct  1 09:11:04 np0005464214 objective_napier[101945]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 09:11:04 np0005464214 objective_napier[101945]:            "lv_uuid": "wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS",
Oct  1 09:11:04 np0005464214 objective_napier[101945]:            "name": "ceph_lv0",
Oct  1 09:11:04 np0005464214 objective_napier[101945]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  1 09:11:04 np0005464214 objective_napier[101945]:            "tags": {
Oct  1 09:11:04 np0005464214 objective_napier[101945]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  1 09:11:04 np0005464214 objective_napier[101945]:                "ceph.block_uuid": "wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS",
Oct  1 09:11:04 np0005464214 objective_napier[101945]:                "ceph.cephx_lockbox_secret": "",
Oct  1 09:11:04 np0005464214 objective_napier[101945]:                "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 09:11:04 np0005464214 objective_napier[101945]:                "ceph.cluster_name": "ceph",
Oct  1 09:11:04 np0005464214 objective_napier[101945]:                "ceph.crush_device_class": "",
Oct  1 09:11:04 np0005464214 objective_napier[101945]:                "ceph.encrypted": "0",
Oct  1 09:11:04 np0005464214 objective_napier[101945]:                "ceph.osd_fsid": "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982",
Oct  1 09:11:04 np0005464214 objective_napier[101945]:                "ceph.osd_id": "0",
Oct  1 09:11:04 np0005464214 objective_napier[101945]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 09:11:04 np0005464214 objective_napier[101945]:                "ceph.type": "block",
Oct  1 09:11:04 np0005464214 objective_napier[101945]:                "ceph.vdo": "0"
Oct  1 09:11:04 np0005464214 objective_napier[101945]:            },
Oct  1 09:11:04 np0005464214 objective_napier[101945]:            "type": "block",
Oct  1 09:11:04 np0005464214 objective_napier[101945]:            "vg_name": "ceph_vg0"
Oct  1 09:11:04 np0005464214 objective_napier[101945]:        }
Oct  1 09:11:04 np0005464214 objective_napier[101945]:    ],
Oct  1 09:11:04 np0005464214 objective_napier[101945]:    "1": [
Oct  1 09:11:04 np0005464214 objective_napier[101945]:        {
Oct  1 09:11:04 np0005464214 objective_napier[101945]:            "devices": [
Oct  1 09:11:04 np0005464214 objective_napier[101945]:                "/dev/loop4"
Oct  1 09:11:04 np0005464214 objective_napier[101945]:            ],
Oct  1 09:11:04 np0005464214 objective_napier[101945]:            "lv_name": "ceph_lv1",
Oct  1 09:11:04 np0005464214 objective_napier[101945]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  1 09:11:04 np0005464214 objective_napier[101945]:            "lv_size": "21470642176",
Oct  1 09:11:04 np0005464214 objective_napier[101945]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f5852bc7-e830-489a-b8a9-42dfbbe71426,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 09:11:04 np0005464214 objective_napier[101945]:            "lv_uuid": "BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY",
Oct  1 09:11:04 np0005464214 objective_napier[101945]:            "name": "ceph_lv1",
Oct  1 09:11:04 np0005464214 objective_napier[101945]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  1 09:11:04 np0005464214 objective_napier[101945]:            "tags": {
Oct  1 09:11:04 np0005464214 objective_napier[101945]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  1 09:11:04 np0005464214 objective_napier[101945]:                "ceph.block_uuid": "BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY",
Oct  1 09:11:04 np0005464214 objective_napier[101945]:                "ceph.cephx_lockbox_secret": "",
Oct  1 09:11:04 np0005464214 objective_napier[101945]:                "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 09:11:04 np0005464214 objective_napier[101945]:                "ceph.cluster_name": "ceph",
Oct  1 09:11:04 np0005464214 objective_napier[101945]:                "ceph.crush_device_class": "",
Oct  1 09:11:04 np0005464214 objective_napier[101945]:                "ceph.encrypted": "0",
Oct  1 09:11:04 np0005464214 objective_napier[101945]:                "ceph.osd_fsid": "f5852bc7-e830-489a-b8a9-42dfbbe71426",
Oct  1 09:11:04 np0005464214 objective_napier[101945]:                "ceph.osd_id": "1",
Oct  1 09:11:04 np0005464214 objective_napier[101945]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 09:11:04 np0005464214 objective_napier[101945]:                "ceph.type": "block",
Oct  1 09:11:04 np0005464214 objective_napier[101945]:                "ceph.vdo": "0"
Oct  1 09:11:04 np0005464214 objective_napier[101945]:            },
Oct  1 09:11:04 np0005464214 objective_napier[101945]:            "type": "block",
Oct  1 09:11:04 np0005464214 objective_napier[101945]:            "vg_name": "ceph_vg1"
Oct  1 09:11:04 np0005464214 objective_napier[101945]:        }
Oct  1 09:11:04 np0005464214 objective_napier[101945]:    ],
Oct  1 09:11:04 np0005464214 objective_napier[101945]:    "2": [
Oct  1 09:11:04 np0005464214 objective_napier[101945]:        {
Oct  1 09:11:04 np0005464214 objective_napier[101945]:            "devices": [
Oct  1 09:11:04 np0005464214 objective_napier[101945]:                "/dev/loop5"
Oct  1 09:11:04 np0005464214 objective_napier[101945]:            ],
Oct  1 09:11:04 np0005464214 objective_napier[101945]:            "lv_name": "ceph_lv2",
Oct  1 09:11:04 np0005464214 objective_napier[101945]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  1 09:11:04 np0005464214 objective_napier[101945]:            "lv_size": "21470642176",
Oct  1 09:11:04 np0005464214 objective_napier[101945]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c4c937e2-a8a8-47c3-af37-fdedb6fff1f9,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 09:11:04 np0005464214 objective_napier[101945]:            "lv_uuid": "mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK",
Oct  1 09:11:04 np0005464214 objective_napier[101945]:            "name": "ceph_lv2",
Oct  1 09:11:04 np0005464214 objective_napier[101945]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  1 09:11:04 np0005464214 objective_napier[101945]:            "tags": {
Oct  1 09:11:04 np0005464214 objective_napier[101945]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  1 09:11:04 np0005464214 objective_napier[101945]:                "ceph.block_uuid": "mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK",
Oct  1 09:11:04 np0005464214 objective_napier[101945]:                "ceph.cephx_lockbox_secret": "",
Oct  1 09:11:04 np0005464214 objective_napier[101945]:                "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 09:11:04 np0005464214 objective_napier[101945]:                "ceph.cluster_name": "ceph",
Oct  1 09:11:04 np0005464214 objective_napier[101945]:                "ceph.crush_device_class": "",
Oct  1 09:11:04 np0005464214 objective_napier[101945]:                "ceph.encrypted": "0",
Oct  1 09:11:04 np0005464214 objective_napier[101945]:                "ceph.osd_fsid": "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9",
Oct  1 09:11:04 np0005464214 objective_napier[101945]:                "ceph.osd_id": "2",
Oct  1 09:11:04 np0005464214 objective_napier[101945]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 09:11:04 np0005464214 objective_napier[101945]:                "ceph.type": "block",
Oct  1 09:11:04 np0005464214 objective_napier[101945]:                "ceph.vdo": "0"
Oct  1 09:11:04 np0005464214 objective_napier[101945]:            },
Oct  1 09:11:04 np0005464214 objective_napier[101945]:            "type": "block",
Oct  1 09:11:04 np0005464214 objective_napier[101945]:            "vg_name": "ceph_vg2"
Oct  1 09:11:04 np0005464214 objective_napier[101945]:        }
Oct  1 09:11:04 np0005464214 objective_napier[101945]:    ]
Oct  1 09:11:04 np0005464214 objective_napier[101945]: }
Oct  1 09:11:04 np0005464214 systemd[1]: libpod-3ae37ec80a7845be6e87d48ff228e5d5ef85b1973e44430e0ee58646e751728d.scope: Deactivated successfully.
Oct  1 09:11:04 np0005464214 podman[101929]: 2025-10-01 13:11:04.413947731 +0000 UTC m=+1.008409048 container died 3ae37ec80a7845be6e87d48ff228e5d5ef85b1973e44430e0ee58646e751728d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_napier, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 09:11:04 np0005464214 systemd[1]: var-lib-containers-storage-overlay-30b52093694e9de373a3fb86f678ef90cc0b04d52689e94029132fb8be4d5891-merged.mount: Deactivated successfully.
Oct  1 09:11:04 np0005464214 podman[101929]: 2025-10-01 13:11:04.466670308 +0000 UTC m=+1.061131605 container remove 3ae37ec80a7845be6e87d48ff228e5d5ef85b1973e44430e0ee58646e751728d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_napier, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 09:11:04 np0005464214 systemd[1]: libpod-conmon-3ae37ec80a7845be6e87d48ff228e5d5ef85b1973e44430e0ee58646e751728d.scope: Deactivated successfully.
Oct  1 09:11:04 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Oct  1 09:11:04 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/762836433' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Oct  1 09:11:04 np0005464214 happy_mahavira[101991]: 
Oct  1 09:11:04 np0005464214 systemd[1]: libpod-f76cb0a44985d1b20b065665427dd79a3224da2db8c93445f92540a6202579bb.scope: Deactivated successfully.
Oct  1 09:11:04 np0005464214 happy_mahavira[101991]: [{"section":"global","name":"cluster_network","value":"172.20.0.0/24","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"container_image","value":"quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0","level":"basic","can_update_at_runtime":false,"mask":""},{"section":"global","name":"log_to_file","value":"true","level":"basic","can_update_at_runtime":true,"mask":""},{"section":"global","name":"mon_cluster_log_to_file","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"ms_bind_ipv4","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"ms_bind_ipv6","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"osd_pool_default_size","value":"1","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"public_network","value":"192.168.122.0/24","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_accepted_admin_roles","value":"ResellerAdmin, swiftoperator","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_accepted_roles","value":"member, Member, admin","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_domain","value":"default","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_password","value":"12345678","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_project","value":"service","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_user","value":"swift","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_api_version","value":"3","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_keystone_implicit_tenants","value":"true","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_url","value":"https://keystone-internal.openstack.svc:5000","level":"basic","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_verify_ssl","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_max_attr_name_len","value":"128","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_max_attr_size","value":"1024","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_max_attrs_num_in_req","value":"90","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_s3_auth_use_keystone","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_swift_account_in_url","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_swift_enforce_content_length","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_swift_versioning_enabled","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_trust_forwarded_https","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mon","name":"auth_allow_insecure_global_id_reclaim","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mon","name":"mon_warn_on_pool_no_redundancy","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mgr","name":"mgr/cephadm/container_init","value":"True","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/cephadm/migration_current","value":"6","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/cephadm/use_repo_digest","value":"false","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/orchestrator/orchestrator","value":"cephadm","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mgr","name":"mgr_standby_modules","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"osd","name":"osd_memory_target_autotune","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mds.cephfs","name":"mds_join_fs","value":"cephfs","level":"basic","can_update_at_runtime":true,"mask":""},{"section":"client.rgw.rgw.compute-0.rmxmfa","name":"rgw_frontends","value":"beast endpoint=192.168.122.100:8082","level":"basic","can_update_at_runtime":false,"mask":""}]
Oct  1 09:11:04 np0005464214 podman[101976]: 2025-10-01 13:11:04.722227385 +0000 UTC m=+0.809889009 container died f76cb0a44985d1b20b065665427dd79a3224da2db8c93445f92540a6202579bb (image=quay.io/ceph/ceph:v18, name=happy_mahavira, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 09:11:04 np0005464214 systemd[1]: var-lib-containers-storage-overlay-4257014eac995d7b0182c85518d66914a3ee31e428bf9a45902ad6c81942cca4-merged.mount: Deactivated successfully.
Oct  1 09:11:04 np0005464214 podman[101976]: 2025-10-01 13:11:04.763208924 +0000 UTC m=+0.850870558 container remove f76cb0a44985d1b20b065665427dd79a3224da2db8c93445f92540a6202579bb (image=quay.io/ceph/ceph:v18, name=happy_mahavira, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct  1 09:11:04 np0005464214 systemd[1]: libpod-conmon-f76cb0a44985d1b20b065665427dd79a3224da2db8c93445f92540a6202579bb.scope: Deactivated successfully.
Oct  1 09:11:04 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e46 do_prune osdmap full prune enabled
Oct  1 09:11:04 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/954311754' entity='client.rgw.rgw.compute-0.rmxmfa' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Oct  1 09:11:04 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e47 e47: 3 total, 3 up, 3 in
Oct  1 09:11:04 np0005464214 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e47: 3 total, 3 up, 3 in
Oct  1 09:11:04 np0005464214 ceph-mon[74802]: from='client.? 192.168.122.100:0/954311754' entity='client.rgw.rgw.compute-0.rmxmfa' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Oct  1 09:11:04 np0005464214 radosgw[100440]: LDAP not started since no server URIs were provided in the configuration.
Oct  1 09:11:04 np0005464214 radosgw[100440]: framework: beast
Oct  1 09:11:04 np0005464214 radosgw[100440]: framework conf key: ssl_certificate, val: config://rgw/cert/$realm/$zone.crt
Oct  1 09:11:04 np0005464214 radosgw[100440]: framework conf key: ssl_private_key, val: config://rgw/cert/$realm/$zone.key
Oct  1 09:11:04 np0005464214 ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-rgw-rgw-compute-0-rmxmfa[100435]: 2025-10-01T13:11:04.943+0000 7f62d0374940 -1 LDAP not started since no server URIs were provided in the configuration.
Oct  1 09:11:04 np0005464214 radosgw[100440]: starting handler: beast
Oct  1 09:11:04 np0005464214 radosgw[100440]: set uid:gid to 167:167 (ceph:ceph)
Oct  1 09:11:05 np0005464214 radosgw[100440]: mgrc service_daemon_register rgw.14271 metadata {arch=x86_64,ceph_release=reef,ceph_version=ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable),ceph_version_short=18.2.7,container_hostname=compute-0,container_image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0,cpu=AMD EPYC-Rome Processor,distro=centos,distro_description=CentOS Stream 9,distro_version=9,frontend_config#0=beast endpoint=192.168.122.100:8082,frontend_type#0=beast,hostname=compute-0,id=rgw.compute-0.rmxmfa,kernel_description=#1 SMP PREEMPT_DYNAMIC Mon Sep 15 21:46:13 UTC 2025,kernel_version=5.14.0-617.el9.x86_64,mem_swap_kb=1048572,mem_total_kb=7864104,num_handles=1,os=Linux,pid=2,realm_id=,realm_name=,zone_id=f267d240-c1e7-4ec2-8e2a-64c5ae3c7ead,zone_name=default,zonegroup_id=852c69ab-29aa-4b27-9f2a-563f30a89237,zonegroup_name=default}
Oct  1 09:11:05 np0005464214 podman[102727]: 2025-10-01 13:11:05.075952543 +0000 UTC m=+0.037910686 container create 8484dd4b73d964d6963240e2f8969fa16e83ccee1cb1ad36b31d2cc0a9ad4cf3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_jones, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Oct  1 09:11:05 np0005464214 systemd[1]: Started libpod-conmon-8484dd4b73d964d6963240e2f8969fa16e83ccee1cb1ad36b31d2cc0a9ad4cf3.scope.
Oct  1 09:11:05 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:11:05 np0005464214 podman[102727]: 2025-10-01 13:11:05.060583045 +0000 UTC m=+0.022541218 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 09:11:05 np0005464214 podman[102727]: 2025-10-01 13:11:05.159034975 +0000 UTC m=+0.120993138 container init 8484dd4b73d964d6963240e2f8969fa16e83ccee1cb1ad36b31d2cc0a9ad4cf3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_jones, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 09:11:05 np0005464214 podman[102727]: 2025-10-01 13:11:05.166295626 +0000 UTC m=+0.128253769 container start 8484dd4b73d964d6963240e2f8969fa16e83ccee1cb1ad36b31d2cc0a9ad4cf3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_jones, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct  1 09:11:05 np0005464214 clever_jones[102743]: 167 167
Oct  1 09:11:05 np0005464214 systemd[1]: libpod-8484dd4b73d964d6963240e2f8969fa16e83ccee1cb1ad36b31d2cc0a9ad4cf3.scope: Deactivated successfully.
Oct  1 09:11:05 np0005464214 podman[102727]: 2025-10-01 13:11:05.173407334 +0000 UTC m=+0.135365497 container attach 8484dd4b73d964d6963240e2f8969fa16e83ccee1cb1ad36b31d2cc0a9ad4cf3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_jones, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 09:11:05 np0005464214 podman[102727]: 2025-10-01 13:11:05.17462945 +0000 UTC m=+0.136587603 container died 8484dd4b73d964d6963240e2f8969fa16e83ccee1cb1ad36b31d2cc0a9ad4cf3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_jones, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Oct  1 09:11:05 np0005464214 systemd[1]: var-lib-containers-storage-overlay-8ec14fdfa957104fe55e05cfdc5c083bb67db4f014972e12c89c6a60c2f1a6d5-merged.mount: Deactivated successfully.
Oct  1 09:11:05 np0005464214 podman[102727]: 2025-10-01 13:11:05.225094648 +0000 UTC m=+0.187052791 container remove 8484dd4b73d964d6963240e2f8969fa16e83ccee1cb1ad36b31d2cc0a9ad4cf3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_jones, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 09:11:05 np0005464214 systemd[1]: libpod-conmon-8484dd4b73d964d6963240e2f8969fa16e83ccee1cb1ad36b31d2cc0a9ad4cf3.scope: Deactivated successfully.
Oct  1 09:11:05 np0005464214 ceph-osd[89484]: log_channel(cluster) log [DBG] : 3.5 scrub starts
Oct  1 09:11:05 np0005464214 ceph-osd[89484]: log_channel(cluster) log [DBG] : 3.5 scrub ok
Oct  1 09:11:05 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e47 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:11:05 np0005464214 podman[102768]: 2025-10-01 13:11:05.443040979 +0000 UTC m=+0.061186785 container create 8ee7e5931d5205c6ee9b7b9f5accf7f61035f61df4fa5484a4d080e2779ea33f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_fermi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Oct  1 09:11:05 np0005464214 systemd[1]: Started libpod-conmon-8ee7e5931d5205c6ee9b7b9f5accf7f61035f61df4fa5484a4d080e2779ea33f.scope.
Oct  1 09:11:05 np0005464214 podman[102768]: 2025-10-01 13:11:05.41288236 +0000 UTC m=+0.031028176 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 09:11:05 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:11:05 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ccf258735530560ec2975a9f02b53e3a0c1e0a526e2e8ca3ff5762424bee3def/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 09:11:05 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ccf258735530560ec2975a9f02b53e3a0c1e0a526e2e8ca3ff5762424bee3def/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 09:11:05 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ccf258735530560ec2975a9f02b53e3a0c1e0a526e2e8ca3ff5762424bee3def/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 09:11:05 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ccf258735530560ec2975a9f02b53e3a0c1e0a526e2e8ca3ff5762424bee3def/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 09:11:05 np0005464214 podman[102768]: 2025-10-01 13:11:05.565155861 +0000 UTC m=+0.183301667 container init 8ee7e5931d5205c6ee9b7b9f5accf7f61035f61df4fa5484a4d080e2779ea33f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_fermi, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 09:11:05 np0005464214 podman[102768]: 2025-10-01 13:11:05.579391304 +0000 UTC m=+0.197537070 container start 8ee7e5931d5205c6ee9b7b9f5accf7f61035f61df4fa5484a4d080e2779ea33f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_fermi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 09:11:05 np0005464214 podman[102768]: 2025-10-01 13:11:05.582805728 +0000 UTC m=+0.200951534 container attach 8ee7e5931d5205c6ee9b7b9f5accf7f61035f61df4fa5484a4d080e2779ea33f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_fermi, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct  1 09:11:05 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v107: 181 pgs: 1 unknown, 180 active+clean; 452 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 4.5 KiB/s wr, 12 op/s
Oct  1 09:11:05 np0005464214 ceph-mon[74802]: log_channel(cluster) log [INF] : Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Oct  1 09:11:05 np0005464214 ceph-mon[74802]: log_channel(cluster) log [INF] : Cluster is now healthy
Oct  1 09:11:05 np0005464214 ceph-mon[74802]: Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Oct  1 09:11:05 np0005464214 ceph-mon[74802]: Cluster is now healthy
Oct  1 09:11:05 np0005464214 python3[102815]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid eb4b6ead-01d1-53b3-a52a-47dcc600555f -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd get-require-min-compat-client _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  1 09:11:05 np0005464214 podman[102816]: 2025-10-01 13:11:05.974825423 +0000 UTC m=+0.040760542 container create dd3a838f62e2ca5792dda9d08a6869f9b943ed00a6cdd023ce6e5e7fd759f317 (image=quay.io/ceph/ceph:v18, name=youthful_albattani, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct  1 09:11:06 np0005464214 systemd[1]: Started libpod-conmon-dd3a838f62e2ca5792dda9d08a6869f9b943ed00a6cdd023ce6e5e7fd759f317.scope.
Oct  1 09:11:06 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:11:06 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c390a144cc64b45983da30673f817bca0fba55a1058334eacff12ed91c839e2/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 09:11:06 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c390a144cc64b45983da30673f817bca0fba55a1058334eacff12ed91c839e2/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 09:11:06 np0005464214 podman[102816]: 2025-10-01 13:11:05.959978991 +0000 UTC m=+0.025914140 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  1 09:11:06 np0005464214 podman[102816]: 2025-10-01 13:11:06.059185374 +0000 UTC m=+0.125120523 container init dd3a838f62e2ca5792dda9d08a6869f9b943ed00a6cdd023ce6e5e7fd759f317 (image=quay.io/ceph/ceph:v18, name=youthful_albattani, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 09:11:06 np0005464214 podman[102816]: 2025-10-01 13:11:06.065817546 +0000 UTC m=+0.131752665 container start dd3a838f62e2ca5792dda9d08a6869f9b943ed00a6cdd023ce6e5e7fd759f317 (image=quay.io/ceph/ceph:v18, name=youthful_albattani, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 09:11:06 np0005464214 podman[102816]: 2025-10-01 13:11:06.069088666 +0000 UTC m=+0.135023785 container attach dd3a838f62e2ca5792dda9d08a6869f9b943ed00a6cdd023ce6e5e7fd759f317 (image=quay.io/ceph/ceph:v18, name=youthful_albattani, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Oct  1 09:11:06 np0005464214 ceph-osd[90500]: log_channel(cluster) log [DBG] : 2.6 scrub starts
Oct  1 09:11:06 np0005464214 ceph-osd[90500]: log_channel(cluster) log [DBG] : 2.6 scrub ok
Oct  1 09:11:06 np0005464214 ceph-osd[89484]: log_channel(cluster) log [DBG] : 3.6 deep-scrub starts
Oct  1 09:11:06 np0005464214 ceph-osd[89484]: log_channel(cluster) log [DBG] : 3.6 deep-scrub ok
Oct  1 09:11:06 np0005464214 gallant_fermi[102785]: {
Oct  1 09:11:06 np0005464214 gallant_fermi[102785]:    "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982": {
Oct  1 09:11:06 np0005464214 gallant_fermi[102785]:        "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 09:11:06 np0005464214 gallant_fermi[102785]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  1 09:11:06 np0005464214 gallant_fermi[102785]:        "osd_id": 0,
Oct  1 09:11:06 np0005464214 gallant_fermi[102785]:        "osd_uuid": "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982",
Oct  1 09:11:06 np0005464214 gallant_fermi[102785]:        "type": "bluestore"
Oct  1 09:11:06 np0005464214 gallant_fermi[102785]:    },
Oct  1 09:11:06 np0005464214 gallant_fermi[102785]:    "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9": {
Oct  1 09:11:06 np0005464214 gallant_fermi[102785]:        "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 09:11:06 np0005464214 gallant_fermi[102785]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  1 09:11:06 np0005464214 gallant_fermi[102785]:        "osd_id": 2,
Oct  1 09:11:06 np0005464214 gallant_fermi[102785]:        "osd_uuid": "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9",
Oct  1 09:11:06 np0005464214 gallant_fermi[102785]:        "type": "bluestore"
Oct  1 09:11:06 np0005464214 gallant_fermi[102785]:    },
Oct  1 09:11:06 np0005464214 gallant_fermi[102785]:    "f5852bc7-e830-489a-b8a9-42dfbbe71426": {
Oct  1 09:11:06 np0005464214 gallant_fermi[102785]:        "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 09:11:06 np0005464214 gallant_fermi[102785]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  1 09:11:06 np0005464214 gallant_fermi[102785]:        "osd_id": 1,
Oct  1 09:11:06 np0005464214 gallant_fermi[102785]:        "osd_uuid": "f5852bc7-e830-489a-b8a9-42dfbbe71426",
Oct  1 09:11:06 np0005464214 gallant_fermi[102785]:        "type": "bluestore"
Oct  1 09:11:06 np0005464214 gallant_fermi[102785]:    }
Oct  1 09:11:06 np0005464214 gallant_fermi[102785]: }
Oct  1 09:11:06 np0005464214 systemd[1]: libpod-8ee7e5931d5205c6ee9b7b9f5accf7f61035f61df4fa5484a4d080e2779ea33f.scope: Deactivated successfully.
Oct  1 09:11:06 np0005464214 podman[102768]: 2025-10-01 13:11:06.567918535 +0000 UTC m=+1.186064301 container died 8ee7e5931d5205c6ee9b7b9f5accf7f61035f61df4fa5484a4d080e2779ea33f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_fermi, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 09:11:06 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd get-require-min-compat-client"} v 0) v1
Oct  1 09:11:06 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/128680161' entity='client.admin' cmd=[{"prefix": "osd get-require-min-compat-client"}]: dispatch
Oct  1 09:11:06 np0005464214 youthful_albattani[102832]: mimic
Oct  1 09:11:06 np0005464214 systemd[1]: var-lib-containers-storage-overlay-ccf258735530560ec2975a9f02b53e3a0c1e0a526e2e8ca3ff5762424bee3def-merged.mount: Deactivated successfully.
Oct  1 09:11:06 np0005464214 systemd[1]: libpod-dd3a838f62e2ca5792dda9d08a6869f9b943ed00a6cdd023ce6e5e7fd759f317.scope: Deactivated successfully.
Oct  1 09:11:06 np0005464214 podman[102816]: 2025-10-01 13:11:06.626010966 +0000 UTC m=+0.691946105 container died dd3a838f62e2ca5792dda9d08a6869f9b943ed00a6cdd023ce6e5e7fd759f317 (image=quay.io/ceph/ceph:v18, name=youthful_albattani, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Oct  1 09:11:06 np0005464214 systemd[1]: var-lib-containers-storage-overlay-5c390a144cc64b45983da30673f817bca0fba55a1058334eacff12ed91c839e2-merged.mount: Deactivated successfully.
Oct  1 09:11:06 np0005464214 podman[102816]: 2025-10-01 13:11:06.807687402 +0000 UTC m=+0.873622531 container remove dd3a838f62e2ca5792dda9d08a6869f9b943ed00a6cdd023ce6e5e7fd759f317 (image=quay.io/ceph/ceph:v18, name=youthful_albattani, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct  1 09:11:06 np0005464214 systemd[1]: libpod-conmon-dd3a838f62e2ca5792dda9d08a6869f9b943ed00a6cdd023ce6e5e7fd759f317.scope: Deactivated successfully.
Oct  1 09:11:06 np0005464214 podman[102768]: 2025-10-01 13:11:06.834084956 +0000 UTC m=+1.452230762 container remove 8ee7e5931d5205c6ee9b7b9f5accf7f61035f61df4fa5484a4d080e2779ea33f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_fermi, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS)
Oct  1 09:11:06 np0005464214 systemd[1]: libpod-conmon-8ee7e5931d5205c6ee9b7b9f5accf7f61035f61df4fa5484a4d080e2779ea33f.scope: Deactivated successfully.
Oct  1 09:11:06 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  1 09:11:06 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:11:06 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  1 09:11:06 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:11:06 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev c343aa81-eb55-47fd-bdde-694cc5d55585 does not exist
Oct  1 09:11:06 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev 7ff1afc3-160f-4931-a1a3-30de39992e3a does not exist
Oct  1 09:11:07 np0005464214 ceph-osd[90500]: log_channel(cluster) log [DBG] : 2.7 scrub starts
Oct  1 09:11:07 np0005464214 ceph-osd[90500]: log_channel(cluster) log [DBG] : 2.7 scrub ok
Oct  1 09:11:07 np0005464214 ceph-osd[88455]: log_channel(cluster) log [DBG] : 4.6 deep-scrub starts
Oct  1 09:11:07 np0005464214 ceph-osd[88455]: log_channel(cluster) log [DBG] : 4.6 deep-scrub ok
Oct  1 09:11:07 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v108: 181 pgs: 181 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 75 KiB/s rd, 8.7 KiB/s wr, 193 op/s
Oct  1 09:11:07 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"} v 0) v1
Oct  1 09:11:07 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct  1 09:11:07 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"} v 0) v1
Oct  1 09:11:07 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct  1 09:11:07 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"} v 0) v1
Oct  1 09:11:07 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"}]: dispatch
Oct  1 09:11:07 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"} v 0) v1
Oct  1 09:11:07 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct  1 09:11:07 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"} v 0) v1
Oct  1 09:11:07 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct  1 09:11:07 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"} v 0) v1
Oct  1 09:11:07 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct  1 09:11:07 np0005464214 ceph-mgr[75103]: [progress INFO root] Completed event 117aa2cc-6633-48b9-9615-b57d148f5b2d (Global Recovery Event) in 15 seconds
Oct  1 09:11:07 np0005464214 python3[103130]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid eb4b6ead-01d1-53b3-a52a-47dcc600555f -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   versions -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  1 09:11:07 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:11:07 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:11:07 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct  1 09:11:07 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct  1 09:11:07 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"}]: dispatch
Oct  1 09:11:07 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct  1 09:11:07 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct  1 09:11:07 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct  1 09:11:07 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e47 do_prune osdmap full prune enabled
Oct  1 09:11:07 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]': finished
Oct  1 09:11:07 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]': finished
Oct  1 09:11:07 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"}]': finished
Oct  1 09:11:07 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]': finished
Oct  1 09:11:07 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]': finished
Oct  1 09:11:07 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]': finished
Oct  1 09:11:07 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e48 e48: 3 total, 3 up, 3 in
Oct  1 09:11:07 np0005464214 podman[103158]: 2025-10-01 13:11:07.981791638 +0000 UTC m=+0.074891323 container exec dfadbb96d7d51fa7c2ed721145616ced603621ae12bae5125feaf16532ef7320 (image=quay.io/ceph/ceph:v18, name=ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-mon-compute-0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Oct  1 09:11:07 np0005464214 podman[103165]: 2025-10-01 13:11:07.982046906 +0000 UTC m=+0.054163361 container create 2de2090294579f5d172b154ccea1a5793c00c481be7e1825f4dc77479ba6d9b6 (image=quay.io/ceph/ceph:v18, name=confident_pascal, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct  1 09:11:07 np0005464214 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e48: 3 total, 3 up, 3 in
Oct  1 09:11:08 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 48 pg[3.18( empty local-lis/les=33/35 n=0 ec=33/15 lis/c=33/33 les/c/f=35/35/0 sis=48 pruub=8.611283302s) [2] r=-1 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 active pruub 75.762802124s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:11:08 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 48 pg[3.17( empty local-lis/les=33/35 n=0 ec=33/15 lis/c=33/33 les/c/f=35/35/0 sis=48 pruub=8.611252785s) [0] r=-1 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 active pruub 75.762802124s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:11:08 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 48 pg[3.18( empty local-lis/les=33/35 n=0 ec=33/15 lis/c=33/33 les/c/f=35/35/0 sis=48 pruub=8.611231804s) [2] r=-1 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 75.762802124s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 09:11:08 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 48 pg[7.13( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=48 pruub=11.693221092s) [0] r=-1 lpr=48 pi=[37,48)/1 crt=0'0 mlcod 0'0 active pruub 78.844810486s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:11:08 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 48 pg[7.13( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=48 pruub=11.693167686s) [0] r=-1 lpr=48 pi=[37,48)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.844810486s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 09:11:08 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 48 pg[3.17( empty local-lis/les=33/35 n=0 ec=33/15 lis/c=33/33 les/c/f=35/35/0 sis=48 pruub=8.611141205s) [0] r=-1 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 75.762802124s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 09:11:08 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 48 pg[3.16( empty local-lis/les=33/35 n=0 ec=33/15 lis/c=33/33 les/c/f=35/35/0 sis=48 pruub=8.610872269s) [2] r=-1 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 active pruub 75.762786865s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:11:08 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 48 pg[3.16( empty local-lis/les=33/35 n=0 ec=33/15 lis/c=33/33 les/c/f=35/35/0 sis=48 pruub=8.610814095s) [2] r=-1 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 75.762786865s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 09:11:08 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 48 pg[7.11( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=48 pruub=11.693246841s) [2] r=-1 lpr=48 pi=[37,48)/1 crt=0'0 mlcod 0'0 active pruub 78.845291138s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:11:08 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 48 pg[3.15( empty local-lis/les=33/35 n=0 ec=33/15 lis/c=33/33 les/c/f=35/35/0 sis=48 pruub=8.610735893s) [0] r=-1 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 active pruub 75.762786865s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:11:08 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 48 pg[3.15( empty local-lis/les=33/35 n=0 ec=33/15 lis/c=33/33 les/c/f=35/35/0 sis=48 pruub=8.610707283s) [0] r=-1 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 75.762786865s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 09:11:08 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 48 pg[7.11( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=48 pruub=11.693194389s) [2] r=-1 lpr=48 pi=[37,48)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.845291138s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 09:11:08 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 48 pg[7.1c( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=48 pruub=11.691101074s) [2] r=-1 lpr=48 pi=[37,48)/1 crt=0'0 mlcod 0'0 active pruub 78.844795227s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:11:08 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 48 pg[7.1c( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=48 pruub=11.691054344s) [2] r=-1 lpr=48 pi=[37,48)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.844795227s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 09:11:08 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 48 pg[3.12( empty local-lis/les=33/35 n=0 ec=33/15 lis/c=33/33 les/c/f=35/35/0 sis=48 pruub=8.607466698s) [0] r=-1 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 active pruub 75.762596130s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:11:08 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 48 pg[3.12( empty local-lis/les=33/35 n=0 ec=33/15 lis/c=33/33 les/c/f=35/35/0 sis=48 pruub=8.607438087s) [0] r=-1 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 75.762596130s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 09:11:08 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 48 pg[7.15( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=48 pruub=11.689908981s) [2] r=-1 lpr=48 pi=[37,48)/1 crt=0'0 mlcod 0'0 active pruub 78.845138550s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:11:08 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 48 pg[7.15( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=48 pruub=11.689885139s) [2] r=-1 lpr=48 pi=[37,48)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.845138550s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 09:11:08 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 48 pg[3.e( empty local-lis/les=33/35 n=0 ec=33/15 lis/c=33/33 les/c/f=35/35/0 sis=48 pruub=8.606488228s) [2] r=-1 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 active pruub 75.761795044s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:11:08 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 48 pg[3.f( empty local-lis/les=33/35 n=0 ec=33/15 lis/c=33/33 les/c/f=35/35/0 sis=48 pruub=8.606466293s) [0] r=-1 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 active pruub 75.761779785s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:11:08 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 48 pg[7.a( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=48 pruub=11.689832687s) [2] r=-1 lpr=48 pi=[37,48)/1 crt=0'0 mlcod 0'0 active pruub 78.845146179s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:11:08 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 48 pg[3.f( empty local-lis/les=33/35 n=0 ec=33/15 lis/c=33/33 les/c/f=35/35/0 sis=48 pruub=8.606427193s) [0] r=-1 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 75.761779785s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 09:11:08 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 48 pg[7.a( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=48 pruub=11.689791679s) [2] r=-1 lpr=48 pi=[37,48)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.845146179s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 09:11:08 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 48 pg[7.9( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=48 pruub=11.689792633s) [0] r=-1 lpr=48 pi=[37,48)/1 crt=0'0 mlcod 0'0 active pruub 78.845191956s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:11:08 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 48 pg[7.9( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=48 pruub=11.689774513s) [0] r=-1 lpr=48 pi=[37,48)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.845191956s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 09:11:08 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 48 pg[3.11( empty local-lis/les=33/35 n=0 ec=33/15 lis/c=33/33 les/c/f=35/35/0 sis=48 pruub=8.607088089s) [2] r=-1 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 active pruub 75.762573242s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:11:08 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 48 pg[7.8( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=48 pruub=11.689687729s) [2] r=-1 lpr=48 pi=[37,48)/1 crt=0'0 mlcod 0'0 active pruub 78.845214844s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:11:08 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 48 pg[3.c( empty local-lis/les=33/35 n=0 ec=33/15 lis/c=33/33 les/c/f=35/35/0 sis=48 pruub=8.606253624s) [0] r=-1 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 active pruub 75.761779785s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:11:08 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 48 pg[7.8( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=48 pruub=11.689668655s) [2] r=-1 lpr=48 pi=[37,48)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.845214844s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 09:11:08 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 48 pg[3.c( empty local-lis/les=33/35 n=0 ec=33/15 lis/c=33/33 les/c/f=35/35/0 sis=48 pruub=8.606224060s) [0] r=-1 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 75.761779785s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 09:11:08 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 48 pg[3.11( empty local-lis/les=33/35 n=0 ec=33/15 lis/c=33/33 les/c/f=35/35/0 sis=48 pruub=8.607014656s) [2] r=-1 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 75.762573242s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 09:11:08 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 48 pg[7.6( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=48 pruub=11.689692497s) [0] r=-1 lpr=48 pi=[37,48)/1 crt=0'0 mlcod 0'0 active pruub 78.845367432s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:11:08 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 48 pg[7.f( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=48 pruub=11.689534187s) [0] r=-1 lpr=48 pi=[37,48)/1 crt=0'0 mlcod 0'0 active pruub 78.845207214s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:11:08 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 48 pg[7.6( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=48 pruub=11.689671516s) [0] r=-1 lpr=48 pi=[37,48)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.845367432s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 09:11:08 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 48 pg[7.f( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=48 pruub=11.689507484s) [0] r=-1 lpr=48 pi=[37,48)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.845207214s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 09:11:08 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 48 pg[3.e( empty local-lis/les=33/35 n=0 ec=33/15 lis/c=33/33 les/c/f=35/35/0 sis=48 pruub=8.606098175s) [2] r=-1 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 75.761795044s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 09:11:08 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 48 pg[7.4( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=48 pruub=11.689553261s) [0] r=-1 lpr=48 pi=[37,48)/1 crt=0'0 mlcod 0'0 active pruub 78.845306396s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:11:08 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 48 pg[7.4( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=48 pruub=11.689526558s) [0] r=-1 lpr=48 pi=[37,48)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.845306396s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 09:11:08 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 48 pg[3.1( empty local-lis/les=33/35 n=0 ec=33/15 lis/c=33/33 les/c/f=35/35/0 sis=48 pruub=8.605860710s) [0] r=-1 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 active pruub 75.761695862s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:11:08 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 48 pg[3.1( empty local-lis/les=33/35 n=0 ec=33/15 lis/c=33/33 les/c/f=35/35/0 sis=48 pruub=8.605841637s) [0] r=-1 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 75.761695862s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 09:11:08 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 48 pg[7.5( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=48 pruub=11.689403534s) [2] r=-1 lpr=48 pi=[37,48)/1 crt=0'0 mlcod 0'0 active pruub 78.845283508s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:11:08 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 48 pg[7.1( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=48 pruub=11.689414024s) [2] r=-1 lpr=48 pi=[37,48)/1 crt=0'0 mlcod 0'0 active pruub 78.845375061s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:11:08 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 48 pg[7.5( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=48 pruub=11.689373016s) [2] r=-1 lpr=48 pi=[37,48)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.845283508s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 09:11:08 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 48 pg[3.3( empty local-lis/les=33/35 n=0 ec=33/15 lis/c=33/33 les/c/f=35/35/0 sis=48 pruub=8.605442047s) [0] r=-1 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 active pruub 75.761413574s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:11:08 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 48 pg[3.5( empty local-lis/les=33/35 n=0 ec=33/15 lis/c=33/33 les/c/f=35/35/0 sis=48 pruub=8.605194092s) [2] r=-1 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 active pruub 75.761169434s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:11:08 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 48 pg[3.6( empty local-lis/les=33/35 n=0 ec=33/15 lis/c=33/33 les/c/f=35/35/0 sis=48 pruub=8.605110168s) [0] r=-1 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 active pruub 75.761123657s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:11:08 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 48 pg[3.5( empty local-lis/les=33/35 n=0 ec=33/15 lis/c=33/33 les/c/f=35/35/0 sis=48 pruub=8.605164528s) [2] r=-1 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 75.761169434s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 09:11:08 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 48 pg[3.3( empty local-lis/les=33/35 n=0 ec=33/15 lis/c=33/33 les/c/f=35/35/0 sis=48 pruub=8.605414391s) [0] r=-1 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 75.761413574s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 09:11:08 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 48 pg[3.6( empty local-lis/les=33/35 n=0 ec=33/15 lis/c=33/33 les/c/f=35/35/0 sis=48 pruub=8.605075836s) [0] r=-1 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 75.761123657s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 09:11:08 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 48 pg[7.1( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=48 pruub=11.689383507s) [2] r=-1 lpr=48 pi=[37,48)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.845375061s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 09:11:08 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 48 pg[3.7( empty local-lis/les=33/35 n=0 ec=33/15 lis/c=33/33 les/c/f=35/35/0 sis=48 pruub=8.604965210s) [2] r=-1 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 active pruub 75.761116028s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:11:08 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 48 pg[7.3( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=48 pruub=11.689256668s) [0] r=-1 lpr=48 pi=[37,48)/1 crt=0'0 mlcod 0'0 active pruub 78.845428467s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:11:08 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 48 pg[7.2( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=48 pruub=11.689154625s) [2] r=-1 lpr=48 pi=[37,48)/1 crt=0'0 mlcod 0'0 active pruub 78.845352173s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:11:08 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 48 pg[3.7( empty local-lis/les=33/35 n=0 ec=33/15 lis/c=33/33 les/c/f=35/35/0 sis=48 pruub=8.604942322s) [2] r=-1 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 75.761116028s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 09:11:08 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 48 pg[3.8( empty local-lis/les=33/35 n=0 ec=33/15 lis/c=33/33 les/c/f=35/35/0 sis=48 pruub=8.604895592s) [2] r=-1 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 active pruub 75.761108398s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:11:08 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 48 pg[7.3( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=48 pruub=11.689238548s) [0] r=-1 lpr=48 pi=[37,48)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.845428467s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 09:11:08 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 48 pg[7.2( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=48 pruub=11.689103127s) [2] r=-1 lpr=48 pi=[37,48)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.845352173s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 09:11:08 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 48 pg[7.c( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=48 pruub=11.689115524s) [2] r=-1 lpr=48 pi=[37,48)/1 crt=0'0 mlcod 0'0 active pruub 78.845420837s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:11:08 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 48 pg[3.8( empty local-lis/les=33/35 n=0 ec=33/15 lis/c=33/33 les/c/f=35/35/0 sis=48 pruub=8.604805946s) [2] r=-1 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 75.761108398s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 09:11:08 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 48 pg[3.9( empty local-lis/les=33/35 n=0 ec=33/15 lis/c=33/33 les/c/f=35/35/0 sis=48 pruub=8.604825020s) [0] r=-1 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 active pruub 75.761154175s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:11:08 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 48 pg[3.9( empty local-lis/les=33/35 n=0 ec=33/15 lis/c=33/33 les/c/f=35/35/0 sis=48 pruub=8.604804993s) [0] r=-1 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 75.761154175s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 09:11:08 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 48 pg[7.e( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=48 pruub=11.688936234s) [2] r=-1 lpr=48 pi=[37,48)/1 crt=0'0 mlcod 0'0 active pruub 78.845443726s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:11:08 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 48 pg[7.1f( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=48 pruub=11.688903809s) [0] r=-1 lpr=48 pi=[37,48)/1 crt=0'0 mlcod 0'0 active pruub 78.845436096s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:11:08 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 48 pg[3.1b( empty local-lis/les=33/35 n=0 ec=33/15 lis/c=33/33 les/c/f=35/35/0 sis=48 pruub=8.603878021s) [0] r=-1 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 active pruub 75.760383606s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:11:08 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 48 pg[7.e( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=48 pruub=11.688879013s) [2] r=-1 lpr=48 pi=[37,48)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.845443726s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 09:11:08 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 48 pg[3.1b( empty local-lis/les=33/35 n=0 ec=33/15 lis/c=33/33 les/c/f=35/35/0 sis=48 pruub=8.603754044s) [0] r=-1 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 75.760383606s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 09:11:08 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 48 pg[7.1f( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=48 pruub=11.688836098s) [0] r=-1 lpr=48 pi=[37,48)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.845436096s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 09:11:08 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 48 pg[7.18( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=48 pruub=11.688723564s) [0] r=-1 lpr=48 pi=[37,48)/1 crt=0'0 mlcod 0'0 active pruub 78.845504761s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:11:08 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 48 pg[7.1a( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=48 pruub=11.688652992s) [2] r=-1 lpr=48 pi=[37,48)/1 crt=0'0 mlcod 0'0 active pruub 78.845504761s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:11:08 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 48 pg[7.1a( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=48 pruub=11.688632011s) [2] r=-1 lpr=48 pi=[37,48)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.845504761s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 09:11:08 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 48 pg[3.1d( empty local-lis/les=33/35 n=0 ec=33/15 lis/c=33/33 les/c/f=35/35/0 sis=48 pruub=8.603507042s) [2] r=-1 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 active pruub 75.760368347s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:11:08 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 48 pg[7.18( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=48 pruub=11.688696861s) [0] r=-1 lpr=48 pi=[37,48)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.845504761s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 09:11:08 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 48 pg[3.1d( empty local-lis/les=33/35 n=0 ec=33/15 lis/c=33/33 les/c/f=35/35/0 sis=48 pruub=8.603458405s) [2] r=-1 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 75.760368347s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 09:11:08 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 48 pg[7.c( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=48 pruub=11.688436508s) [2] r=-1 lpr=48 pi=[37,48)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.845420837s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 09:11:08 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 48 pg[3.1e( empty local-lis/les=33/35 n=0 ec=33/15 lis/c=33/33 les/c/f=35/35/0 sis=48 pruub=8.596887589s) [2] r=-1 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 active pruub 75.754013062s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:11:08 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 48 pg[3.1f( empty local-lis/les=33/35 n=0 ec=33/15 lis/c=33/33 les/c/f=35/35/0 sis=48 pruub=8.603412628s) [0] r=-1 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 active pruub 75.760566711s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:11:08 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 48 pg[3.1e( empty local-lis/les=33/35 n=0 ec=33/15 lis/c=33/33 les/c/f=35/35/0 sis=48 pruub=8.596863747s) [2] r=-1 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 75.754013062s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 09:11:08 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 48 pg[7.1b( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=48 pruub=11.688902855s) [0] r=-1 lpr=48 pi=[37,48)/1 crt=0'0 mlcod 0'0 active pruub 78.846031189s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:11:08 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 48 pg[3.1f( empty local-lis/les=33/35 n=0 ec=33/15 lis/c=33/33 les/c/f=35/35/0 sis=48 pruub=8.603388786s) [0] r=-1 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 75.760566711s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 09:11:08 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 48 pg[7.1b( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=48 pruub=11.688844681s) [0] r=-1 lpr=48 pi=[37,48)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.846031189s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 09:11:08 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 48 pg[3.a( empty local-lis/les=33/35 n=0 ec=33/15 lis/c=33/33 les/c/f=35/35/0 sis=48 pruub=8.602959633s) [0] r=-1 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 active pruub 75.760559082s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:11:08 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 48 pg[3.a( empty local-lis/les=33/35 n=0 ec=33/15 lis/c=33/33 les/c/f=35/35/0 sis=48 pruub=8.602933884s) [0] r=-1 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 75.760559082s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 09:11:08 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 48 pg[7.1a( empty local-lis/les=0/0 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=48) [2] r=0 lpr=48 pi=[37,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:11:08 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 48 pg[3.1b( empty local-lis/les=0/0 n=0 ec=33/15 lis/c=33/33 les/c/f=35/35/0 sis=48) [0] r=0 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:11:08 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 48 pg[3.1e( empty local-lis/les=0/0 n=0 ec=33/15 lis/c=33/33 les/c/f=35/35/0 sis=48) [2] r=0 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:11:08 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 48 pg[3.1d( empty local-lis/les=0/0 n=0 ec=33/15 lis/c=33/33 les/c/f=35/35/0 sis=48) [2] r=0 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:11:08 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 48 pg[7.e( empty local-lis/les=0/0 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=48) [2] r=0 lpr=48 pi=[37,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:11:08 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 48 pg[3.8( empty local-lis/les=0/0 n=0 ec=33/15 lis/c=33/33 les/c/f=35/35/0 sis=48) [2] r=0 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:11:08 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 48 pg[7.1f( empty local-lis/les=0/0 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=48) [0] r=0 lpr=48 pi=[37,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:11:08 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 48 pg[7.c( empty local-lis/les=0/0 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=48) [2] r=0 lpr=48 pi=[37,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:11:08 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 48 pg[3.7( empty local-lis/les=0/0 n=0 ec=33/15 lis/c=33/33 les/c/f=35/35/0 sis=48) [2] r=0 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:11:08 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 48 pg[3.f( empty local-lis/les=0/0 n=0 ec=33/15 lis/c=33/33 les/c/f=35/35/0 sis=48) [0] r=0 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:11:08 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 48 pg[3.5( empty local-lis/les=0/0 n=0 ec=33/15 lis/c=33/33 les/c/f=35/35/0 sis=48) [2] r=0 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:11:08 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 48 pg[7.1( empty local-lis/les=0/0 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=48) [2] r=0 lpr=48 pi=[37,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:11:08 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 48 pg[7.4( empty local-lis/les=0/0 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=48) [0] r=0 lpr=48 pi=[37,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:11:08 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 48 pg[7.2( empty local-lis/les=0/0 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=48) [2] r=0 lpr=48 pi=[37,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:11:08 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 48 pg[7.5( empty local-lis/les=0/0 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=48) [2] r=0 lpr=48 pi=[37,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:11:08 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 48 pg[7.8( empty local-lis/les=0/0 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=48) [2] r=0 lpr=48 pi=[37,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:11:08 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 48 pg[3.c( empty local-lis/les=0/0 n=0 ec=33/15 lis/c=33/33 les/c/f=35/35/0 sis=48) [0] r=0 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:11:08 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 48 pg[7.a( empty local-lis/les=0/0 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=48) [2] r=0 lpr=48 pi=[37,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:11:08 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 48 pg[3.e( empty local-lis/les=0/0 n=0 ec=33/15 lis/c=33/33 les/c/f=35/35/0 sis=48) [2] r=0 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:11:08 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 48 pg[3.1( empty local-lis/les=0/0 n=0 ec=33/15 lis/c=33/33 les/c/f=35/35/0 sis=48) [0] r=0 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:11:08 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 48 pg[7.15( empty local-lis/les=0/0 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=48) [2] r=0 lpr=48 pi=[37,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:11:08 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 48 pg[3.11( empty local-lis/les=0/0 n=0 ec=33/15 lis/c=33/33 les/c/f=35/35/0 sis=48) [2] r=0 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:11:08 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 48 pg[7.18( empty local-lis/les=0/0 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=48) [0] r=0 lpr=48 pi=[37,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:11:08 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 48 pg[7.11( empty local-lis/les=0/0 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=48) [2] r=0 lpr=48 pi=[37,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:11:08 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 48 pg[7.9( empty local-lis/les=0/0 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=48) [0] r=0 lpr=48 pi=[37,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:11:08 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 48 pg[3.16( empty local-lis/les=0/0 n=0 ec=33/15 lis/c=33/33 les/c/f=35/35/0 sis=48) [2] r=0 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:11:08 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 48 pg[7.6( empty local-lis/les=0/0 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=48) [0] r=0 lpr=48 pi=[37,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:11:08 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 48 pg[3.18( empty local-lis/les=0/0 n=0 ec=33/15 lis/c=33/33 les/c/f=35/35/0 sis=48) [2] r=0 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:11:08 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 48 pg[3.3( empty local-lis/les=0/0 n=0 ec=33/15 lis/c=33/33 les/c/f=35/35/0 sis=48) [0] r=0 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:11:08 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 48 pg[3.6( empty local-lis/les=0/0 n=0 ec=33/15 lis/c=33/33 les/c/f=35/35/0 sis=48) [0] r=0 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:11:08 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 48 pg[7.1c( empty local-lis/les=0/0 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=48) [2] r=0 lpr=48 pi=[37,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:11:08 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 48 pg[2.1b( empty local-lis/les=33/34 n=0 ec=33/13 lis/c=33/33 les/c/f=34/34/0 sis=48 pruub=15.589295387s) [1] r=-1 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 active pruub 77.800079346s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:11:08 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 48 pg[2.1b( empty local-lis/les=33/34 n=0 ec=33/13 lis/c=33/33 les/c/f=34/34/0 sis=48 pruub=15.589271545s) [1] r=-1 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.800079346s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 09:11:08 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 48 pg[5.1d( empty local-lis/les=35/36 n=0 ec=35/19 lis/c=35/35 les/c/f=36/36/0 sis=48 pruub=9.590354919s) [1] r=-1 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 active pruub 71.801208496s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:11:08 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 48 pg[5.1d( empty local-lis/les=35/36 n=0 ec=35/19 lis/c=35/35 les/c/f=36/36/0 sis=48 pruub=9.590324402s) [1] r=-1 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.801208496s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 09:11:08 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 48 pg[5.1e( empty local-lis/les=35/36 n=0 ec=35/19 lis/c=35/35 les/c/f=36/36/0 sis=48 pruub=9.590507507s) [0] r=-1 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 active pruub 71.801475525s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:11:08 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 48 pg[5.1e( empty local-lis/les=35/36 n=0 ec=35/19 lis/c=35/35 les/c/f=36/36/0 sis=48 pruub=9.590487480s) [0] r=-1 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.801475525s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 09:11:08 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 48 pg[7.3( empty local-lis/les=0/0 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=48) [0] r=0 lpr=48 pi=[37,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:11:08 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 48 pg[2.19( empty local-lis/les=33/34 n=0 ec=33/13 lis/c=33/33 les/c/f=34/34/0 sis=48 pruub=15.584633827s) [0] r=-1 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 active pruub 77.795669556s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:11:08 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 48 pg[2.19( empty local-lis/les=33/34 n=0 ec=33/13 lis/c=33/33 les/c/f=34/34/0 sis=48 pruub=15.584611893s) [0] r=-1 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.795669556s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 09:11:08 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 48 pg[7.f( empty local-lis/les=0/0 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=48) [0] r=0 lpr=48 pi=[37,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:11:08 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 48 pg[2.18( empty local-lis/les=33/34 n=0 ec=33/13 lis/c=33/33 les/c/f=34/34/0 sis=48 pruub=15.583739281s) [0] r=-1 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 active pruub 77.795593262s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:11:08 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 48 pg[2.18( empty local-lis/les=33/34 n=0 ec=33/13 lis/c=33/33 les/c/f=34/34/0 sis=48 pruub=15.583705902s) [0] r=-1 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.795593262s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 09:11:08 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 48 pg[2.16( empty local-lis/les=33/34 n=0 ec=33/13 lis/c=33/33 les/c/f=34/34/0 sis=48 pruub=15.583567619s) [0] r=-1 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 active pruub 77.795600891s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:11:08 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 48 pg[2.16( empty local-lis/les=33/34 n=0 ec=33/13 lis/c=33/33 les/c/f=34/34/0 sis=48 pruub=15.583550453s) [0] r=-1 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.795600891s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 09:11:08 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 48 pg[3.a( empty local-lis/les=0/0 n=0 ec=33/15 lis/c=33/33 les/c/f=35/35/0 sis=48) [0] r=0 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:11:08 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 48 pg[5.11( empty local-lis/les=35/36 n=0 ec=35/19 lis/c=35/35 les/c/f=36/36/0 sis=48 pruub=9.589060783s) [1] r=-1 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 active pruub 71.801193237s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:11:08 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 48 pg[5.11( empty local-lis/les=35/36 n=0 ec=35/19 lis/c=35/35 les/c/f=36/36/0 sis=48 pruub=9.589043617s) [1] r=-1 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.801193237s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 09:11:08 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 48 pg[2.15( empty local-lis/les=33/34 n=0 ec=33/13 lis/c=33/33 les/c/f=34/34/0 sis=48 pruub=15.583275795s) [1] r=-1 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 active pruub 77.795547485s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:11:08 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 48 pg[2.15( empty local-lis/les=33/34 n=0 ec=33/13 lis/c=33/33 les/c/f=34/34/0 sis=48 pruub=15.583257675s) [1] r=-1 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.795547485s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 09:11:08 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 48 pg[5.12( empty local-lis/les=35/36 n=0 ec=35/19 lis/c=35/35 les/c/f=36/36/0 sis=48 pruub=9.588729858s) [1] r=-1 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 active pruub 71.801284790s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:11:08 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 48 pg[5.13( empty local-lis/les=35/36 n=0 ec=35/19 lis/c=35/35 les/c/f=36/36/0 sis=48 pruub=9.588774681s) [1] r=-1 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 active pruub 71.801338196s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:11:08 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 48 pg[5.12( empty local-lis/les=35/36 n=0 ec=35/19 lis/c=35/35 les/c/f=36/36/0 sis=48 pruub=9.588702202s) [1] r=-1 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.801284790s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 09:11:08 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 48 pg[5.13( empty local-lis/les=35/36 n=0 ec=35/19 lis/c=35/35 les/c/f=36/36/0 sis=48 pruub=9.588756561s) [1] r=-1 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.801338196s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 09:11:08 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 48 pg[5.14( empty local-lis/les=35/36 n=0 ec=35/19 lis/c=35/35 les/c/f=36/36/0 sis=48 pruub=9.589044571s) [0] r=-1 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 active pruub 71.801727295s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:11:08 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 48 pg[2.13( empty local-lis/les=33/34 n=0 ec=33/13 lis/c=33/33 les/c/f=34/34/0 sis=48 pruub=15.582899094s) [0] r=-1 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 active pruub 77.795539856s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:11:08 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 48 pg[5.14( empty local-lis/les=35/36 n=0 ec=35/19 lis/c=35/35 les/c/f=36/36/0 sis=48 pruub=9.589032173s) [0] r=-1 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.801727295s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 09:11:08 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 48 pg[2.13( empty local-lis/les=33/34 n=0 ec=33/13 lis/c=33/33 les/c/f=34/34/0 sis=48 pruub=15.582836151s) [0] r=-1 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.795539856s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 09:11:08 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 48 pg[5.15( empty local-lis/les=35/36 n=0 ec=35/19 lis/c=35/35 les/c/f=36/36/0 sis=48 pruub=9.588561058s) [0] r=-1 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 active pruub 71.801330566s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:11:08 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 48 pg[5.15( empty local-lis/les=35/36 n=0 ec=35/19 lis/c=35/35 les/c/f=36/36/0 sis=48 pruub=9.588545799s) [0] r=-1 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.801330566s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 09:11:08 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 48 pg[5.16( empty local-lis/les=35/36 n=0 ec=35/19 lis/c=35/35 les/c/f=36/36/0 sis=48 pruub=9.588485718s) [1] r=-1 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 active pruub 71.801338196s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:11:08 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 48 pg[5.16( empty local-lis/les=35/36 n=0 ec=35/19 lis/c=35/35 les/c/f=36/36/0 sis=48 pruub=9.588474274s) [1] r=-1 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.801338196s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 09:11:08 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 48 pg[2.11( empty local-lis/les=33/34 n=0 ec=33/13 lis/c=33/33 les/c/f=34/34/0 sis=48 pruub=15.582536697s) [0] r=-1 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 active pruub 77.795402527s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:11:08 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 48 pg[2.11( empty local-lis/les=33/34 n=0 ec=33/13 lis/c=33/33 les/c/f=34/34/0 sis=48 pruub=15.582480431s) [0] r=-1 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.795402527s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 09:11:08 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 48 pg[2.f( empty local-lis/les=33/34 n=0 ec=33/13 lis/c=33/33 les/c/f=34/34/0 sis=48 pruub=15.582041740s) [0] r=-1 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 active pruub 77.795143127s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:11:08 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 48 pg[5.9( empty local-lis/les=35/36 n=0 ec=35/19 lis/c=35/35 les/c/f=36/36/0 sis=48 pruub=9.588356018s) [1] r=-1 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 active pruub 71.801506042s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:11:08 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 48 pg[2.f( empty local-lis/les=33/34 n=0 ec=33/13 lis/c=33/33 les/c/f=34/34/0 sis=48 pruub=15.582018852s) [0] r=-1 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.795143127s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 09:11:08 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 48 pg[5.9( empty local-lis/les=35/36 n=0 ec=35/19 lis/c=35/35 les/c/f=36/36/0 sis=48 pruub=9.588335037s) [1] r=-1 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.801506042s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 09:11:08 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 48 pg[2.d( empty local-lis/les=33/34 n=0 ec=33/13 lis/c=33/33 les/c/f=34/34/0 sis=48 pruub=15.582125664s) [1] r=-1 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 active pruub 77.795425415s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:11:08 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 48 pg[2.d( empty local-lis/les=33/34 n=0 ec=33/13 lis/c=33/33 les/c/f=34/34/0 sis=48 pruub=15.582107544s) [1] r=-1 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.795425415s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 09:11:08 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 48 pg[5.7( empty local-lis/les=35/36 n=0 ec=35/19 lis/c=35/35 les/c/f=36/36/0 sis=48 pruub=9.588095665s) [0] r=-1 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 active pruub 71.801467896s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:11:08 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 48 pg[2.2( empty local-lis/les=33/34 n=0 ec=33/13 lis/c=33/33 les/c/f=34/34/0 sis=48 pruub=15.581447601s) [0] r=-1 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 active pruub 77.794876099s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:11:08 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 48 pg[2.7( empty local-lis/les=33/34 n=0 ec=33/13 lis/c=33/33 les/c/f=34/34/0 sis=48 pruub=15.581975937s) [1] r=-1 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 active pruub 77.795379639s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:11:08 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 48 pg[2.17( empty local-lis/les=33/34 n=0 ec=33/13 lis/c=33/33 les/c/f=34/34/0 sis=48 pruub=15.582921982s) [1] r=-1 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 active pruub 77.795661926s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:11:08 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 48 pg[2.2( empty local-lis/les=33/34 n=0 ec=33/13 lis/c=33/33 les/c/f=34/34/0 sis=48 pruub=15.581433296s) [0] r=-1 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.794876099s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 09:11:08 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 48 pg[2.7( empty local-lis/les=33/34 n=0 ec=33/13 lis/c=33/33 les/c/f=34/34/0 sis=48 pruub=15.581923485s) [1] r=-1 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.795379639s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 09:11:08 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 48 pg[5.5( empty local-lis/les=35/36 n=0 ec=35/19 lis/c=35/35 les/c/f=36/36/0 sis=48 pruub=9.587980270s) [0] r=-1 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 active pruub 71.801498413s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:11:08 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 48 pg[5.5( empty local-lis/les=35/36 n=0 ec=35/19 lis/c=35/35 les/c/f=36/36/0 sis=48 pruub=9.587966919s) [0] r=-1 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.801498413s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 09:11:08 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 48 pg[3.9( empty local-lis/les=0/0 n=0 ec=33/15 lis/c=33/33 les/c/f=35/35/0 sis=48) [0] r=0 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:11:08 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 48 pg[2.3( empty local-lis/les=33/34 n=0 ec=33/13 lis/c=33/33 les/c/f=34/34/0 sis=48 pruub=15.581234932s) [1] r=-1 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 active pruub 77.794807434s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:11:08 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 48 pg[2.3( empty local-lis/les=33/34 n=0 ec=33/13 lis/c=33/33 les/c/f=34/34/0 sis=48 pruub=15.581216812s) [1] r=-1 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.794807434s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 09:11:08 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 48 pg[5.4( empty local-lis/les=35/36 n=0 ec=35/19 lis/c=35/35 les/c/f=36/36/0 sis=48 pruub=9.588148117s) [0] r=-1 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 active pruub 71.801757812s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:11:08 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 48 pg[5.4( empty local-lis/les=35/36 n=0 ec=35/19 lis/c=35/35 les/c/f=36/36/0 sis=48 pruub=9.588127136s) [0] r=-1 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.801757812s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 09:11:08 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 48 pg[2.4( empty local-lis/les=33/34 n=0 ec=33/13 lis/c=33/33 les/c/f=34/34/0 sis=48 pruub=15.581123352s) [1] r=-1 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 active pruub 77.794807434s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:11:08 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 48 pg[2.4( empty local-lis/les=33/34 n=0 ec=33/13 lis/c=33/33 les/c/f=34/34/0 sis=48 pruub=15.581105232s) [1] r=-1 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.794807434s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 09:11:08 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 48 pg[5.3( empty local-lis/les=35/36 n=0 ec=35/19 lis/c=35/35 les/c/f=36/36/0 sis=48 pruub=9.587775230s) [0] r=-1 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 active pruub 71.801521301s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:11:08 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 48 pg[2.5( empty local-lis/les=33/34 n=0 ec=33/13 lis/c=33/33 les/c/f=34/34/0 sis=48 pruub=15.580997467s) [1] r=-1 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 active pruub 77.794784546s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:11:08 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 48 pg[5.3( empty local-lis/les=35/36 n=0 ec=35/19 lis/c=35/35 les/c/f=36/36/0 sis=48 pruub=9.587757111s) [0] r=-1 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.801521301s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 09:11:08 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 48 pg[2.5( empty local-lis/les=33/34 n=0 ec=33/13 lis/c=33/33 les/c/f=34/34/0 sis=48 pruub=15.580979347s) [1] r=-1 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.794784546s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 09:11:08 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 48 pg[5.2( empty local-lis/les=35/36 n=0 ec=35/19 lis/c=35/35 les/c/f=36/36/0 sis=48 pruub=9.587673187s) [0] r=-1 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 active pruub 71.801551819s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:11:08 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 48 pg[2.6( empty local-lis/les=33/34 n=0 ec=33/13 lis/c=33/33 les/c/f=34/34/0 sis=48 pruub=15.580879211s) [1] r=-1 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 active pruub 77.794776917s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:11:08 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 48 pg[5.2( empty local-lis/les=35/36 n=0 ec=35/19 lis/c=35/35 les/c/f=36/36/0 sis=48 pruub=9.587644577s) [0] r=-1 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.801551819s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 09:11:08 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 48 pg[2.6( empty local-lis/les=33/34 n=0 ec=33/13 lis/c=33/33 les/c/f=34/34/0 sis=48 pruub=15.580860138s) [1] r=-1 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.794776917s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 09:11:08 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 48 pg[3.17( empty local-lis/les=0/0 n=0 ec=33/15 lis/c=33/33 les/c/f=35/35/0 sis=48) [0] r=0 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:11:08 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 48 pg[7.13( empty local-lis/les=0/0 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=48) [0] r=0 lpr=48 pi=[37,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:11:08 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 48 pg[5.1( empty local-lis/les=35/36 n=0 ec=35/19 lis/c=35/35 les/c/f=36/36/0 sis=48 pruub=9.587603569s) [1] r=-1 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 active pruub 71.801628113s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:11:08 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 48 pg[5.7( empty local-lis/les=35/36 n=0 ec=35/19 lis/c=35/35 les/c/f=36/36/0 sis=48 pruub=9.588075638s) [0] r=-1 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.801467896s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 09:11:08 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 48 pg[3.15( empty local-lis/les=0/0 n=0 ec=33/15 lis/c=33/33 les/c/f=35/35/0 sis=48) [0] r=0 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:11:08 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 48 pg[5.1( empty local-lis/les=35/36 n=0 ec=35/19 lis/c=35/35 les/c/f=36/36/0 sis=48 pruub=9.587579727s) [1] r=-1 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.801628113s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 09:11:08 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 48 pg[2.8( empty local-lis/les=33/34 n=0 ec=33/13 lis/c=33/33 les/c/f=34/34/0 sis=48 pruub=15.580642700s) [0] r=-1 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 active pruub 77.794715881s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:11:08 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 48 pg[2.8( empty local-lis/les=33/34 n=0 ec=33/13 lis/c=33/33 les/c/f=34/34/0 sis=48 pruub=15.580621719s) [0] r=-1 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.794715881s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 09:11:08 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 48 pg[5.f( empty local-lis/les=35/36 n=0 ec=35/19 lis/c=35/35 les/c/f=36/36/0 sis=48 pruub=9.587475777s) [1] r=-1 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 active pruub 71.801628113s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:11:08 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 48 pg[5.f( empty local-lis/les=35/36 n=0 ec=35/19 lis/c=35/35 les/c/f=36/36/0 sis=48 pruub=9.587457657s) [1] r=-1 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.801628113s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 09:11:08 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 48 pg[2.9( empty local-lis/les=33/34 n=0 ec=33/13 lis/c=33/33 les/c/f=34/34/0 sis=48 pruub=15.580371857s) [1] r=-1 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 active pruub 77.794563293s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:11:08 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 48 pg[2.9( empty local-lis/les=33/34 n=0 ec=33/13 lis/c=33/33 les/c/f=34/34/0 sis=48 pruub=15.580352783s) [1] r=-1 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.794563293s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 09:11:08 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 48 pg[2.a( empty local-lis/les=33/34 n=0 ec=33/13 lis/c=33/33 les/c/f=34/34/0 sis=48 pruub=15.580206871s) [1] r=-1 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 active pruub 77.794502258s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:11:08 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 48 pg[2.a( empty local-lis/les=33/34 n=0 ec=33/13 lis/c=33/33 les/c/f=34/34/0 sis=48 pruub=15.580180168s) [1] r=-1 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.794502258s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 09:11:08 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 48 pg[5.c( empty local-lis/les=35/36 n=0 ec=35/19 lis/c=35/35 les/c/f=36/36/0 sis=48 pruub=9.587282181s) [1] r=-1 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 active pruub 71.801635742s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:11:08 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 48 pg[2.b( empty local-lis/les=33/34 n=0 ec=33/13 lis/c=33/33 les/c/f=34/34/0 sis=48 pruub=15.580199242s) [0] r=-1 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 active pruub 77.794555664s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:11:08 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 48 pg[5.c( empty local-lis/les=35/36 n=0 ec=35/19 lis/c=35/35 les/c/f=36/36/0 sis=48 pruub=9.587265968s) [1] r=-1 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.801635742s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 09:11:08 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 48 pg[2.b( empty local-lis/les=33/34 n=0 ec=33/13 lis/c=33/33 les/c/f=34/34/0 sis=48 pruub=15.580146790s) [0] r=-1 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.794555664s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 09:11:08 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 48 pg[2.1c( empty local-lis/les=33/34 n=0 ec=33/13 lis/c=33/33 les/c/f=34/34/0 sis=48 pruub=15.579926491s) [0] r=-1 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 active pruub 77.794464111s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:11:08 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 48 pg[2.1d( empty local-lis/les=33/34 n=0 ec=33/13 lis/c=33/33 les/c/f=34/34/0 sis=48 pruub=15.581134796s) [0] r=-1 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 active pruub 77.795700073s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:11:08 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 48 pg[2.1c( empty local-lis/les=33/34 n=0 ec=33/13 lis/c=33/33 les/c/f=34/34/0 sis=48 pruub=15.579904556s) [0] r=-1 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.794464111s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 09:11:08 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 48 pg[2.1d( empty local-lis/les=33/34 n=0 ec=33/13 lis/c=33/33 les/c/f=34/34/0 sis=48 pruub=15.581110954s) [0] r=-1 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.795700073s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 09:11:08 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 48 pg[3.12( empty local-lis/les=0/0 n=0 ec=33/15 lis/c=33/33 les/c/f=35/35/0 sis=48) [0] r=0 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:11:08 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 48 pg[5.1a( empty local-lis/les=35/36 n=0 ec=35/19 lis/c=35/35 les/c/f=36/36/0 sis=48 pruub=9.587009430s) [1] r=-1 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 active pruub 71.801696777s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:11:08 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 48 pg[5.1a( empty local-lis/les=35/36 n=0 ec=35/19 lis/c=35/35 les/c/f=36/36/0 sis=48 pruub=9.586990356s) [1] r=-1 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.801696777s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 09:11:08 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 48 pg[5.19( empty local-lis/les=35/36 n=0 ec=35/19 lis/c=35/35 les/c/f=36/36/0 sis=48 pruub=9.586926460s) [1] r=-1 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 active pruub 71.801666260s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:11:08 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 48 pg[5.19( empty local-lis/les=35/36 n=0 ec=35/19 lis/c=35/35 les/c/f=36/36/0 sis=48 pruub=9.586906433s) [1] r=-1 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.801666260s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 09:11:08 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 48 pg[2.1f( empty local-lis/les=33/34 n=0 ec=33/13 lis/c=33/33 les/c/f=34/34/0 sis=48 pruub=15.579672813s) [0] r=-1 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 active pruub 77.794448853s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:11:08 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 48 pg[5.18( empty local-lis/les=35/36 n=0 ec=35/19 lis/c=35/35 les/c/f=36/36/0 sis=48 pruub=9.586896896s) [1] r=-1 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 active pruub 71.801696777s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:11:08 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 48 pg[2.1f( empty local-lis/les=33/34 n=0 ec=33/13 lis/c=33/33 les/c/f=34/34/0 sis=48 pruub=15.579648972s) [0] r=-1 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.794448853s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 09:11:08 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 48 pg[5.18( empty local-lis/les=35/36 n=0 ec=35/19 lis/c=35/35 les/c/f=36/36/0 sis=48 pruub=9.586878777s) [1] r=-1 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.801696777s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 09:11:08 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 48 pg[3.1f( empty local-lis/les=0/0 n=0 ec=33/15 lis/c=33/33 les/c/f=35/35/0 sis=48) [0] r=0 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:11:08 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 48 pg[7.1b( empty local-lis/les=0/0 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=48) [0] r=0 lpr=48 pi=[37,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:11:08 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 48 pg[4.18( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=48 pruub=9.587798119s) [2] r=-1 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 active pruub 81.753082275s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:11:08 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 48 pg[4.18( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=48 pruub=9.587710381s) [2] r=-1 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 81.753082275s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 09:11:08 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 48 pg[4.14( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=48 pruub=9.587624550s) [1] r=-1 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 active pruub 81.753120422s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:11:08 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 48 pg[4.14( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=48 pruub=9.587596893s) [1] r=-1 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 81.753120422s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 09:11:08 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 48 pg[4.13( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=48 pruub=9.587339401s) [2] r=-1 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 active pruub 81.753013611s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:11:08 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 48 pg[4.13( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=48 pruub=9.587310791s) [2] r=-1 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 81.753013611s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 09:11:08 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 48 pg[4.10( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=48 pruub=9.587023735s) [1] r=-1 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 active pruub 81.752922058s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:11:08 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 48 pg[4.10( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=48 pruub=9.586956978s) [1] r=-1 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 81.752922058s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 09:11:08 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 48 pg[4.11( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=48 pruub=9.586861610s) [2] r=-1 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 active pruub 81.752922058s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:11:08 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 48 pg[4.f( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=48 pruub=9.586841583s) [1] r=-1 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 active pruub 81.752967834s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:11:08 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 48 pg[4.f( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=48 pruub=9.586811066s) [1] r=-1 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 81.752967834s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 09:11:08 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 48 pg[4.11( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=48 pruub=9.586739540s) [2] r=-1 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 81.752922058s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 09:11:08 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 48 pg[6.d( v 41'3 (0'0,41'3] local-lis/les=37/39 n=2 ec=37/20 lis/c=37/37 les/c/f=39/39/0 sis=48 pruub=12.693970680s) [1] r=-1 lpr=48 pi=[37,48)/1 crt=41'3 lcod 41'2 mlcod 41'2 active pruub 84.860397339s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:11:08 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 48 pg[4.e( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=48 pruub=9.586507797s) [2] r=-1 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 active pruub 81.752914429s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:11:08 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 48 pg[6.d( v 41'3 (0'0,41'3] local-lis/les=37/39 n=2 ec=37/20 lis/c=37/37 les/c/f=39/39/0 sis=48 pruub=12.693925858s) [1] r=-1 lpr=48 pi=[37,48)/1 crt=41'3 lcod 41'2 mlcod 0'0 unknown NOTIFY pruub 84.860397339s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 09:11:08 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 48 pg[2.17( empty local-lis/les=33/34 n=0 ec=33/13 lis/c=33/33 les/c/f=34/34/0 sis=48 pruub=15.580777168s) [1] r=-1 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.795661926s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 09:11:08 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 48 pg[6.f( v 41'5 (0'0,41'5] local-lis/les=37/39 n=3 ec=37/20 lis/c=37/37 les/c/f=39/39/0 sis=48 pruub=12.693649292s) [1] r=-1 lpr=48 pi=[37,48)/1 crt=41'4 lcod 41'4 mlcod 41'4 active pruub 84.860374451s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:11:08 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 48 pg[4.e( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=48 pruub=9.586191177s) [2] r=-1 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 81.752914429s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 09:11:08 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 48 pg[6.f( v 41'5 (0'0,41'5] local-lis/les=37/39 n=3 ec=37/20 lis/c=37/37 les/c/f=39/39/0 sis=48 pruub=12.693579674s) [1] r=-1 lpr=48 pi=[37,48)/1 crt=41'4 lcod 41'4 mlcod 0'0 unknown NOTIFY pruub 84.860374451s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 09:11:08 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 48 pg[4.2( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=48 pruub=9.586101532s) [1] r=-1 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 active pruub 81.752929688s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:11:08 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 48 pg[4.1( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=48 pruub=9.585674286s) [2] r=-1 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 active pruub 81.752548218s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:11:08 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 48 pg[4.1( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=48 pruub=9.585650444s) [2] r=-1 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 81.752548218s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 09:11:08 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 48 pg[4.2( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=48 pruub=9.585943222s) [1] r=-1 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 81.752929688s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 09:11:08 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 48 pg[4.d( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=48 pruub=9.585941315s) [1] r=-1 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 active pruub 81.753005981s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:11:08 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 48 pg[6.3( v 41'2 (0'0,41'2] local-lis/les=37/39 n=2 ec=37/20 lis/c=37/37 les/c/f=39/39/0 sis=48 pruub=12.688312531s) [1] r=-1 lpr=48 pi=[37,48)/1 crt=41'2 lcod 41'1 mlcod 41'1 active pruub 84.855400085s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:11:08 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 48 pg[6.3( v 41'2 (0'0,41'2] local-lis/les=37/39 n=2 ec=37/20 lis/c=37/37 les/c/f=39/39/0 sis=48 pruub=12.688279152s) [1] r=-1 lpr=48 pi=[37,48)/1 crt=41'2 lcod 41'1 mlcod 0'0 unknown NOTIFY pruub 84.855400085s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 09:11:08 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 48 pg[6.1( empty local-lis/les=37/39 n=0 ec=37/20 lis/c=37/37 les/c/f=39/39/0 sis=48 pruub=12.688035965s) [1] r=-1 lpr=48 pi=[37,48)/1 crt=0'0 mlcod 0'0 active pruub 84.855316162s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:11:08 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 48 pg[6.1( empty local-lis/les=37/39 n=0 ec=37/20 lis/c=37/37 les/c/f=39/39/0 sis=48 pruub=12.688012123s) [1] r=-1 lpr=48 pi=[37,48)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 84.855316162s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 09:11:08 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 48 pg[4.4( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=48 pruub=9.585420609s) [1] r=-1 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 active pruub 81.752792358s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:11:08 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 48 pg[4.4( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=48 pruub=9.585400581s) [1] r=-1 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 81.752792358s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 09:11:08 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 48 pg[6.b( v 41'3 (0'0,41'3] local-lis/les=37/39 n=1 ec=37/20 lis/c=37/37 les/c/f=39/39/0 sis=48 pruub=12.687851906s) [1] r=-1 lpr=48 pi=[37,48)/1 crt=41'1 lcod 41'2 mlcod 41'2 active pruub 84.855308533s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:11:08 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 48 pg[4.9( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=48 pruub=9.585071564s) [1] r=-1 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 active pruub 81.752540588s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:11:08 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 48 pg[6.b( v 41'3 (0'0,41'3] local-lis/les=37/39 n=1 ec=37/20 lis/c=37/37 les/c/f=39/39/0 sis=48 pruub=12.687819481s) [1] r=-1 lpr=48 pi=[37,48)/1 crt=41'1 lcod 41'2 mlcod 0'0 unknown NOTIFY pruub 84.855308533s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 09:11:08 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 48 pg[4.9( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=48 pruub=9.585039139s) [1] r=-1 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 81.752540588s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 09:11:08 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 48 pg[4.1a( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=48 pruub=9.584961891s) [2] r=-1 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 active pruub 81.752487183s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:11:08 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 48 pg[4.1a( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=48 pruub=9.584946632s) [2] r=-1 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 81.752487183s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 09:11:08 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 48 pg[4.5( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=48 pruub=9.584838867s) [1] r=-1 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 active pruub 81.752479553s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:11:08 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 48 pg[4.a( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=48 pruub=9.584595680s) [2] r=-1 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 active pruub 81.752265930s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:11:08 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 48 pg[4.5( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=48 pruub=9.584817886s) [1] r=-1 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 81.752479553s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 09:11:08 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 48 pg[4.a( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=48 pruub=9.584566116s) [2] r=-1 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 81.752265930s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 09:11:08 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 48 pg[4.1b( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=48 pruub=9.584703445s) [2] r=-1 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 active pruub 81.752494812s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:11:08 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 48 pg[4.1b( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=48 pruub=9.584680557s) [2] r=-1 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 81.752494812s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 09:11:08 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 48 pg[6.7( v 41'2 (0'0,41'2] local-lis/les=37/39 n=1 ec=37/20 lis/c=37/37 les/c/f=39/39/0 sis=48 pruub=12.687451363s) [1] r=-1 lpr=48 pi=[37,48)/1 crt=41'2 lcod 41'1 mlcod 41'1 active pruub 84.855308533s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:11:08 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 48 pg[4.d( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=48 pruub=9.585902214s) [1] r=-1 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 81.753005981s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 09:11:08 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 48 pg[4.7( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=48 pruub=9.584606171s) [1] r=-1 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 active pruub 81.752555847s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:11:08 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 48 pg[4.7( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=48 pruub=9.584563255s) [1] r=-1 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 81.752555847s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 09:11:08 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 48 pg[6.5( v 41'3 (0'0,41'3] local-lis/les=37/39 n=2 ec=37/20 lis/c=37/37 les/c/f=39/39/0 sis=48 pruub=12.686568260s) [1] r=-1 lpr=48 pi=[37,48)/1 crt=41'3 lcod 41'2 mlcod 41'2 active pruub 84.854682922s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:11:08 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 48 pg[6.5( v 41'3 (0'0,41'3] local-lis/les=37/39 n=2 ec=37/20 lis/c=37/37 les/c/f=39/39/0 sis=48 pruub=12.686532974s) [1] r=-1 lpr=48 pi=[37,48)/1 crt=41'3 lcod 41'2 mlcod 0'0 unknown NOTIFY pruub 84.854682922s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 09:11:08 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 48 pg[6.9( empty local-lis/les=37/39 n=0 ec=37/20 lis/c=37/37 les/c/f=39/39/0 sis=48 pruub=12.686651230s) [1] r=-1 lpr=48 pi=[37,48)/1 crt=0'0 mlcod 0'0 active pruub 84.854827881s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:11:08 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 48 pg[6.7( v 41'2 (0'0,41'2] local-lis/les=37/39 n=1 ec=37/20 lis/c=37/37 les/c/f=39/39/0 sis=48 pruub=12.687254906s) [1] r=-1 lpr=48 pi=[37,48)/1 crt=41'2 lcod 41'1 mlcod 0'0 unknown NOTIFY pruub 84.855308533s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 09:11:08 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 48 pg[6.9( empty local-lis/les=37/39 n=0 ec=37/20 lis/c=37/37 les/c/f=39/39/0 sis=48 pruub=12.686617851s) [1] r=-1 lpr=48 pi=[37,48)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 84.854827881s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 09:11:08 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 48 pg[4.1c( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=48 pruub=9.583729744s) [2] r=-1 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 active pruub 81.752014160s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:11:08 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 48 pg[4.1c( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=48 pruub=9.583708763s) [2] r=-1 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 81.752014160s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 09:11:08 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 48 pg[4.12( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=48 pruub=9.584540367s) [1] r=-1 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 active pruub 81.753028870s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:11:08 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 48 pg[4.12( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=48 pruub=9.584517479s) [1] r=-1 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 81.753028870s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 09:11:08 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 48 pg[5.1e( empty local-lis/les=0/0 n=0 ec=35/19 lis/c=35/35 les/c/f=36/36/0 sis=48) [0] r=0 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:11:08 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 48 pg[2.19( empty local-lis/les=0/0 n=0 ec=33/13 lis/c=33/33 les/c/f=34/34/0 sis=48) [0] r=0 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:11:08 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 48 pg[4.18( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=48) [2] r=0 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:11:08 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 48 pg[2.18( empty local-lis/les=0/0 n=0 ec=33/13 lis/c=33/33 les/c/f=34/34/0 sis=48) [0] r=0 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:11:08 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 48 pg[2.16( empty local-lis/les=0/0 n=0 ec=33/13 lis/c=33/33 les/c/f=34/34/0 sis=48) [0] r=0 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:11:08 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 48 pg[5.14( empty local-lis/les=0/0 n=0 ec=35/19 lis/c=35/35 les/c/f=36/36/0 sis=48) [0] r=0 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:11:08 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 48 pg[4.13( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=48) [2] r=0 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:11:08 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 48 pg[4.11( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=48) [2] r=0 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:11:08 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 48 pg[2.13( empty local-lis/les=0/0 n=0 ec=33/13 lis/c=33/33 les/c/f=34/34/0 sis=48) [0] r=0 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:11:08 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 48 pg[4.e( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=48) [2] r=0 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:11:08 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 48 pg[4.1( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=48) [2] r=0 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:11:08 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 48 pg[5.15( empty local-lis/les=0/0 n=0 ec=35/19 lis/c=35/35 les/c/f=36/36/0 sis=48) [0] r=0 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:11:08 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 48 pg[2.1b( empty local-lis/les=0/0 n=0 ec=33/13 lis/c=33/33 les/c/f=34/34/0 sis=48) [1] r=0 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:11:08 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 48 pg[2.11( empty local-lis/les=0/0 n=0 ec=33/13 lis/c=33/33 les/c/f=34/34/0 sis=48) [0] r=0 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:11:08 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 48 pg[5.1d( empty local-lis/les=0/0 n=0 ec=35/19 lis/c=35/35 les/c/f=36/36/0 sis=48) [1] r=0 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:11:08 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 48 pg[4.1a( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=48) [2] r=0 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:11:08 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 48 pg[4.a( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=48) [2] r=0 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:11:08 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 48 pg[4.1b( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=48) [2] r=0 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:11:08 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 48 pg[4.1c( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=48) [2] r=0 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:11:08 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 48 pg[2.f( empty local-lis/les=0/0 n=0 ec=33/13 lis/c=33/33 les/c/f=34/34/0 sis=48) [0] r=0 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:11:08 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 48 pg[4.8( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=48 pruub=9.581744194s) [1] r=-1 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 active pruub 81.752479553s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:11:08 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 48 pg[4.8( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=48 pruub=9.581666946s) [1] r=-1 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 81.752479553s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 09:11:08 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 48 pg[2.2( empty local-lis/les=0/0 n=0 ec=33/13 lis/c=33/33 les/c/f=34/34/0 sis=48) [0] r=0 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:11:08 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 48 pg[5.5( empty local-lis/les=0/0 n=0 ec=35/19 lis/c=35/35 les/c/f=36/36/0 sis=48) [0] r=0 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:11:08 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 48 pg[5.4( empty local-lis/les=0/0 n=0 ec=35/19 lis/c=35/35 les/c/f=36/36/0 sis=48) [0] r=0 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:11:08 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 48 pg[5.3( empty local-lis/les=0/0 n=0 ec=35/19 lis/c=35/35 les/c/f=36/36/0 sis=48) [0] r=0 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:11:08 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 48 pg[5.2( empty local-lis/les=0/0 n=0 ec=35/19 lis/c=35/35 les/c/f=36/36/0 sis=48) [0] r=0 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:11:08 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 48 pg[5.7( empty local-lis/les=0/0 n=0 ec=35/19 lis/c=35/35 les/c/f=36/36/0 sis=48) [0] r=0 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:11:08 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 48 pg[5.11( empty local-lis/les=0/0 n=0 ec=35/19 lis/c=35/35 les/c/f=36/36/0 sis=48) [1] r=0 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:11:08 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 48 pg[2.8( empty local-lis/les=0/0 n=0 ec=33/13 lis/c=33/33 les/c/f=34/34/0 sis=48) [0] r=0 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:11:08 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 48 pg[2.b( empty local-lis/les=0/0 n=0 ec=33/13 lis/c=33/33 les/c/f=34/34/0 sis=48) [0] r=0 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:11:08 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 48 pg[2.1c( empty local-lis/les=0/0 n=0 ec=33/13 lis/c=33/33 les/c/f=34/34/0 sis=48) [0] r=0 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:11:08 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 48 pg[2.15( empty local-lis/les=0/0 n=0 ec=33/13 lis/c=33/33 les/c/f=34/34/0 sis=48) [1] r=0 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:11:08 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 48 pg[2.1d( empty local-lis/les=0/0 n=0 ec=33/13 lis/c=33/33 les/c/f=34/34/0 sis=48) [0] r=0 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:11:08 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 48 pg[2.1f( empty local-lis/les=0/0 n=0 ec=33/13 lis/c=33/33 les/c/f=34/34/0 sis=48) [0] r=0 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:11:08 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 48 pg[5.12( empty local-lis/les=0/0 n=0 ec=35/19 lis/c=35/35 les/c/f=36/36/0 sis=48) [1] r=0 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:11:08 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 48 pg[5.16( empty local-lis/les=0/0 n=0 ec=35/19 lis/c=35/35 les/c/f=36/36/0 sis=48) [1] r=0 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:11:08 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 48 pg[5.9( empty local-lis/les=0/0 n=0 ec=35/19 lis/c=35/35 les/c/f=36/36/0 sis=48) [1] r=0 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:11:08 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 48 pg[5.13( empty local-lis/les=0/0 n=0 ec=35/19 lis/c=35/35 les/c/f=36/36/0 sis=48) [1] r=0 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:11:08 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 48 pg[2.d( empty local-lis/les=0/0 n=0 ec=33/13 lis/c=33/33 les/c/f=34/34/0 sis=48) [1] r=0 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:11:08 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 48 pg[2.7( empty local-lis/les=0/0 n=0 ec=33/13 lis/c=33/33 les/c/f=34/34/0 sis=48) [1] r=0 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:11:08 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 48 pg[2.3( empty local-lis/les=0/0 n=0 ec=33/13 lis/c=33/33 les/c/f=34/34/0 sis=48) [1] r=0 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:11:08 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 48 pg[2.4( empty local-lis/les=0/0 n=0 ec=33/13 lis/c=33/33 les/c/f=34/34/0 sis=48) [1] r=0 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:11:08 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 48 pg[2.5( empty local-lis/les=0/0 n=0 ec=33/13 lis/c=33/33 les/c/f=34/34/0 sis=48) [1] r=0 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:11:08 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 48 pg[2.6( empty local-lis/les=0/0 n=0 ec=33/13 lis/c=33/33 les/c/f=34/34/0 sis=48) [1] r=0 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:11:08 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 48 pg[5.1( empty local-lis/les=0/0 n=0 ec=35/19 lis/c=35/35 les/c/f=36/36/0 sis=48) [1] r=0 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:11:08 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 48 pg[5.f( empty local-lis/les=0/0 n=0 ec=35/19 lis/c=35/35 les/c/f=36/36/0 sis=48) [1] r=0 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:11:08 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 48 pg[2.9( empty local-lis/les=0/0 n=0 ec=33/13 lis/c=33/33 les/c/f=34/34/0 sis=48) [1] r=0 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:11:08 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 48 pg[5.c( empty local-lis/les=0/0 n=0 ec=35/19 lis/c=35/35 les/c/f=36/36/0 sis=48) [1] r=0 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:11:08 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 48 pg[2.a( empty local-lis/les=0/0 n=0 ec=33/13 lis/c=33/33 les/c/f=34/34/0 sis=48) [1] r=0 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:11:08 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 48 pg[5.1a( empty local-lis/les=0/0 n=0 ec=35/19 lis/c=35/35 les/c/f=36/36/0 sis=48) [1] r=0 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:11:08 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 48 pg[5.19( empty local-lis/les=0/0 n=0 ec=35/19 lis/c=35/35 les/c/f=36/36/0 sis=48) [1] r=0 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:11:08 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 48 pg[5.18( empty local-lis/les=0/0 n=0 ec=35/19 lis/c=35/35 les/c/f=36/36/0 sis=48) [1] r=0 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:11:08 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 48 pg[4.14( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=48) [1] r=0 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:11:08 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 48 pg[4.10( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=48) [1] r=0 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:11:08 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 48 pg[2.17( empty local-lis/les=0/0 n=0 ec=33/13 lis/c=33/33 les/c/f=34/34/0 sis=48) [1] r=0 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:11:08 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 48 pg[4.f( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=48) [1] r=0 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:11:08 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 48 pg[6.d( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=39/39/0 sis=48) [1] r=0 lpr=48 pi=[37,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:11:08 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 48 pg[6.f( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=39/39/0 sis=48) [1] r=0 lpr=48 pi=[37,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:11:08 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 48 pg[6.3( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=39/39/0 sis=48) [1] r=0 lpr=48 pi=[37,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:11:08 np0005464214 systemd[1]: Started libpod-conmon-2de2090294579f5d172b154ccea1a5793c00c481be7e1825f4dc77479ba6d9b6.scope.
Oct  1 09:11:08 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 48 pg[4.2( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=48) [1] r=0 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:11:08 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 48 pg[6.1( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=39/39/0 sis=48) [1] r=0 lpr=48 pi=[37,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:11:08 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 48 pg[4.4( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=48) [1] r=0 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:11:08 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 48 pg[6.b( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=39/39/0 sis=48) [1] r=0 lpr=48 pi=[37,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:11:08 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 48 pg[4.9( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=48) [1] r=0 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:11:08 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 48 pg[4.5( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=48) [1] r=0 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:11:08 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 48 pg[4.d( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=48) [1] r=0 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:11:08 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 48 pg[4.7( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=48) [1] r=0 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:11:08 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 48 pg[6.5( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=39/39/0 sis=48) [1] r=0 lpr=48 pi=[37,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:11:08 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 48 pg[6.7( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=39/39/0 sis=48) [1] r=0 lpr=48 pi=[37,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:11:08 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 48 pg[6.9( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=39/39/0 sis=48) [1] r=0 lpr=48 pi=[37,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:11:08 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 48 pg[4.12( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=48) [1] r=0 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:11:08 np0005464214 podman[103165]: 2025-10-01 13:11:07.958167949 +0000 UTC m=+0.030284454 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  1 09:11:08 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 48 pg[4.8( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=48) [1] r=0 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:11:08 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:11:08 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d6e42b0cb15986298fc2efdccc361ad181d9aa2104ce99e545cd050011c1474/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 09:11:08 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d6e42b0cb15986298fc2efdccc361ad181d9aa2104ce99e545cd050011c1474/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 09:11:08 np0005464214 podman[103165]: 2025-10-01 13:11:08.103870429 +0000 UTC m=+0.175986944 container init 2de2090294579f5d172b154ccea1a5793c00c481be7e1825f4dc77479ba6d9b6 (image=quay.io/ceph/ceph:v18, name=confident_pascal, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct  1 09:11:08 np0005464214 podman[103158]: 2025-10-01 13:11:08.109001915 +0000 UTC m=+0.202101580 container exec_died dfadbb96d7d51fa7c2ed721145616ced603621ae12bae5125feaf16532ef7320 (image=quay.io/ceph/ceph:v18, name=ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-mon-compute-0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Oct  1 09:11:08 np0005464214 podman[103165]: 2025-10-01 13:11:08.115411291 +0000 UTC m=+0.187527756 container start 2de2090294579f5d172b154ccea1a5793c00c481be7e1825f4dc77479ba6d9b6 (image=quay.io/ceph/ceph:v18, name=confident_pascal, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct  1 09:11:08 np0005464214 podman[103165]: 2025-10-01 13:11:08.15410934 +0000 UTC m=+0.226225815 container attach 2de2090294579f5d172b154ccea1a5793c00c481be7e1825f4dc77479ba6d9b6 (image=quay.io/ceph/ceph:v18, name=confident_pascal, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct  1 09:11:08 np0005464214 ceph-osd[88455]: log_channel(cluster) log [DBG] : 4.b scrub starts
Oct  1 09:11:08 np0005464214 ceph-osd[88455]: log_channel(cluster) log [DBG] : 4.b scrub ok
Oct  1 09:11:08 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "versions", "format": "json"} v 0) v1
Oct  1 09:11:08 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1790165366' entity='client.admin' cmd=[{"prefix": "versions", "format": "json"}]: dispatch
Oct  1 09:11:08 np0005464214 confident_pascal[103197]: 
Oct  1 09:11:08 np0005464214 confident_pascal[103197]: {"mon":{"ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)":1},"mgr":{"ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)":1},"osd":{"ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)":3},"mds":{"ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)":1},"rgw":{"ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)":1},"overall":{"ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)":7}}
Oct  1 09:11:08 np0005464214 systemd[1]: libpod-2de2090294579f5d172b154ccea1a5793c00c481be7e1825f4dc77479ba6d9b6.scope: Deactivated successfully.
Oct  1 09:11:08 np0005464214 podman[103165]: 2025-10-01 13:11:08.747402858 +0000 UTC m=+0.819519343 container died 2de2090294579f5d172b154ccea1a5793c00c481be7e1825f4dc77479ba6d9b6 (image=quay.io/ceph/ceph:v18, name=confident_pascal, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Oct  1 09:11:08 np0005464214 systemd[1]: var-lib-containers-storage-overlay-4d6e42b0cb15986298fc2efdccc361ad181d9aa2104ce99e545cd050011c1474-merged.mount: Deactivated successfully.
Oct  1 09:11:08 np0005464214 podman[103165]: 2025-10-01 13:11:08.799836986 +0000 UTC m=+0.871953441 container remove 2de2090294579f5d172b154ccea1a5793c00c481be7e1825f4dc77479ba6d9b6 (image=quay.io/ceph/ceph:v18, name=confident_pascal, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Oct  1 09:11:08 np0005464214 systemd[1]: libpod-conmon-2de2090294579f5d172b154ccea1a5793c00c481be7e1825f4dc77479ba6d9b6.scope: Deactivated successfully.
Oct  1 09:11:08 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  1 09:11:08 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:11:08 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  1 09:11:08 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:11:08 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  1 09:11:08 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  1 09:11:08 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  1 09:11:08 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  1 09:11:08 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  1 09:11:08 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:11:08 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev 6ac94d59-47fe-471d-a999-71b2be2e2628 does not exist
Oct  1 09:11:08 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev 0c3ac7b1-2121-41af-a09e-c26566b81f1e does not exist
Oct  1 09:11:08 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev fd15f104-cc64-4c7a-a0d8-e6b4e19c6203 does not exist
Oct  1 09:11:08 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  1 09:11:08 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  1 09:11:08 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  1 09:11:08 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  1 09:11:08 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  1 09:11:08 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  1 09:11:08 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]': finished
Oct  1 09:11:08 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]': finished
Oct  1 09:11:08 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"}]': finished
Oct  1 09:11:08 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]': finished
Oct  1 09:11:08 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]': finished
Oct  1 09:11:08 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]': finished
Oct  1 09:11:08 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:11:08 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:11:08 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  1 09:11:08 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:11:08 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e48 do_prune osdmap full prune enabled
Oct  1 09:11:08 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e49 e49: 3 total, 3 up, 3 in
Oct  1 09:11:09 np0005464214 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e49: 3 total, 3 up, 3 in
Oct  1 09:11:09 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 49 pg[2.11( empty local-lis/les=48/49 n=0 ec=33/13 lis/c=33/33 les/c/f=34/34/0 sis=48) [0] r=0 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:11:09 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 49 pg[3.1f( empty local-lis/les=48/49 n=0 ec=33/15 lis/c=33/33 les/c/f=35/35/0 sis=48) [0] r=0 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:11:09 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 49 pg[7.1b( empty local-lis/les=48/49 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=48) [0] r=0 lpr=48 pi=[37,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:11:09 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 49 pg[2.13( empty local-lis/les=48/49 n=0 ec=33/13 lis/c=33/33 les/c/f=34/34/0 sis=48) [0] r=0 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:11:09 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 49 pg[4.1c( empty local-lis/les=48/49 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=48) [2] r=0 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:11:09 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 49 pg[5.14( empty local-lis/les=48/49 n=0 ec=35/19 lis/c=35/35 les/c/f=36/36/0 sis=48) [0] r=0 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:11:09 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 49 pg[2.16( empty local-lis/les=48/49 n=0 ec=33/13 lis/c=33/33 les/c/f=34/34/0 sis=48) [0] r=0 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:11:09 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 49 pg[3.15( empty local-lis/les=48/49 n=0 ec=33/15 lis/c=33/33 les/c/f=35/35/0 sis=48) [0] r=0 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:11:09 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 49 pg[3.12( empty local-lis/les=48/49 n=0 ec=33/15 lis/c=33/33 les/c/f=35/35/0 sis=48) [0] r=0 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:11:09 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 49 pg[3.17( empty local-lis/les=48/49 n=0 ec=33/15 lis/c=33/33 les/c/f=35/35/0 sis=48) [0] r=0 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:11:09 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 49 pg[5.15( empty local-lis/les=48/49 n=0 ec=35/19 lis/c=35/35 les/c/f=36/36/0 sis=48) [0] r=0 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:11:09 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 49 pg[2.8( empty local-lis/les=48/49 n=0 ec=33/13 lis/c=33/33 les/c/f=34/34/0 sis=48) [0] r=0 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:11:09 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 49 pg[7.13( empty local-lis/les=48/49 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=48) [0] r=0 lpr=48 pi=[37,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:11:09 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 49 pg[3.9( empty local-lis/les=48/49 n=0 ec=33/15 lis/c=33/33 les/c/f=35/35/0 sis=48) [0] r=0 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:11:09 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 49 pg[2.b( empty local-lis/les=48/49 n=0 ec=33/13 lis/c=33/33 les/c/f=34/34/0 sis=48) [0] r=0 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:11:09 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 49 pg[7.f( empty local-lis/les=48/49 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=48) [0] r=0 lpr=48 pi=[37,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:11:09 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 49 pg[5.3( empty local-lis/les=48/49 n=0 ec=35/19 lis/c=35/35 les/c/f=36/36/0 sis=48) [0] r=0 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:11:09 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 49 pg[3.a( empty local-lis/les=48/49 n=0 ec=33/15 lis/c=33/33 les/c/f=35/35/0 sis=48) [0] r=0 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:11:09 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 49 pg[3.6( empty local-lis/les=48/49 n=0 ec=33/15 lis/c=33/33 les/c/f=35/35/0 sis=48) [0] r=0 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:11:09 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 49 pg[7.3( empty local-lis/les=48/49 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=48) [0] r=0 lpr=48 pi=[37,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:11:09 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 49 pg[2.1f( empty local-lis/les=48/49 n=0 ec=33/13 lis/c=33/33 les/c/f=34/34/0 sis=48) [0] r=0 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:11:09 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 49 pg[5.5( empty local-lis/les=48/49 n=0 ec=35/19 lis/c=35/35 les/c/f=36/36/0 sis=48) [0] r=0 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:11:09 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 49 pg[2.2( empty local-lis/les=48/49 n=0 ec=33/13 lis/c=33/33 les/c/f=34/34/0 sis=48) [0] r=0 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:11:09 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 49 pg[3.3( empty local-lis/les=48/49 n=0 ec=33/15 lis/c=33/33 les/c/f=35/35/0 sis=48) [0] r=0 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:11:09 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 49 pg[2.f( empty local-lis/les=48/49 n=0 ec=33/13 lis/c=33/33 les/c/f=34/34/0 sis=48) [0] r=0 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:11:09 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 49 pg[5.4( empty local-lis/les=48/49 n=0 ec=35/19 lis/c=35/35 les/c/f=36/36/0 sis=48) [0] r=0 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:11:09 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 49 pg[7.6( empty local-lis/les=48/49 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=48) [0] r=0 lpr=48 pi=[37,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:11:09 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 49 pg[5.2( empty local-lis/les=48/49 n=0 ec=35/19 lis/c=35/35 les/c/f=36/36/0 sis=48) [0] r=0 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:11:09 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 49 pg[7.18( empty local-lis/les=48/49 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=48) [0] r=0 lpr=48 pi=[37,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:11:09 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 49 pg[2.1d( empty local-lis/les=48/49 n=0 ec=33/13 lis/c=33/33 les/c/f=34/34/0 sis=48) [0] r=0 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:11:09 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 49 pg[7.4( empty local-lis/les=48/49 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=48) [0] r=0 lpr=48 pi=[37,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:11:09 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 49 pg[5.7( empty local-lis/les=48/49 n=0 ec=35/19 lis/c=35/35 les/c/f=36/36/0 sis=48) [0] r=0 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:11:09 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 49 pg[3.f( empty local-lis/les=48/49 n=0 ec=33/15 lis/c=33/33 les/c/f=35/35/0 sis=48) [0] r=0 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:11:09 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 49 pg[3.1b( empty local-lis/les=48/49 n=0 ec=33/15 lis/c=33/33 les/c/f=35/35/0 sis=48) [0] r=0 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:11:09 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 49 pg[7.1f( empty local-lis/les=48/49 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=48) [0] r=0 lpr=48 pi=[37,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:11:09 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 49 pg[5.1e( empty local-lis/les=48/49 n=0 ec=35/19 lis/c=35/35 les/c/f=36/36/0 sis=48) [0] r=0 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:11:09 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 49 pg[2.18( empty local-lis/les=48/49 n=0 ec=33/13 lis/c=33/33 les/c/f=34/34/0 sis=48) [0] r=0 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:11:09 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 49 pg[2.19( empty local-lis/les=48/49 n=0 ec=33/13 lis/c=33/33 les/c/f=34/34/0 sis=48) [0] r=0 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:11:09 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 49 pg[3.c( empty local-lis/les=48/49 n=0 ec=33/15 lis/c=33/33 les/c/f=35/35/0 sis=48) [0] r=0 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:11:09 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 49 pg[2.1c( empty local-lis/les=48/49 n=0 ec=33/13 lis/c=33/33 les/c/f=34/34/0 sis=48) [0] r=0 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:11:09 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 49 pg[3.18( empty local-lis/les=48/49 n=0 ec=33/15 lis/c=33/33 les/c/f=35/35/0 sis=48) [2] r=0 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:11:09 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 49 pg[7.1c( empty local-lis/les=48/49 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=48) [2] r=0 lpr=48 pi=[37,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:11:09 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 49 pg[4.11( empty local-lis/les=48/49 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=48) [2] r=0 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:11:09 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 49 pg[3.16( empty local-lis/les=48/49 n=0 ec=33/15 lis/c=33/33 les/c/f=35/35/0 sis=48) [2] r=0 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:11:09 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 49 pg[4.13( empty local-lis/les=48/49 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=48) [2] r=0 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:11:09 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 49 pg[4.e( empty local-lis/les=48/49 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=48) [2] r=0 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:11:09 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 49 pg[4.1( empty local-lis/les=48/49 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=48) [2] r=0 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:11:09 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 49 pg[7.9( empty local-lis/les=48/49 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=48) [0] r=0 lpr=48 pi=[37,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:11:09 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 49 pg[7.11( empty local-lis/les=48/49 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=48) [2] r=0 lpr=48 pi=[37,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:11:09 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 49 pg[4.18( empty local-lis/les=48/49 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=48) [2] r=0 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:11:09 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 49 pg[3.11( empty local-lis/les=48/49 n=0 ec=33/15 lis/c=33/33 les/c/f=35/35/0 sis=48) [2] r=0 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:11:09 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 49 pg[7.15( empty local-lis/les=48/49 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=48) [2] r=0 lpr=48 pi=[37,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:11:09 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 49 pg[7.a( empty local-lis/les=48/49 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=48) [2] r=0 lpr=48 pi=[37,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:11:09 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 49 pg[3.1( empty local-lis/les=48/49 n=0 ec=33/15 lis/c=33/33 les/c/f=35/35/0 sis=48) [0] r=0 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:11:09 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 49 pg[3.e( empty local-lis/les=48/49 n=0 ec=33/15 lis/c=33/33 les/c/f=35/35/0 sis=48) [2] r=0 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:11:09 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 49 pg[7.8( empty local-lis/les=48/49 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=48) [2] r=0 lpr=48 pi=[37,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:11:09 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 49 pg[4.a( empty local-lis/les=48/49 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=48) [2] r=0 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:11:09 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 49 pg[7.5( empty local-lis/les=48/49 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=48) [2] r=0 lpr=48 pi=[37,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:11:09 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 49 pg[7.2( empty local-lis/les=48/49 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=48) [2] r=0 lpr=48 pi=[37,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:11:09 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 49 pg[3.5( empty local-lis/les=48/49 n=0 ec=33/15 lis/c=33/33 les/c/f=35/35/0 sis=48) [2] r=0 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:11:09 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 49 pg[7.1( empty local-lis/les=48/49 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=48) [2] r=0 lpr=48 pi=[37,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:11:09 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 49 pg[3.7( empty local-lis/les=48/49 n=0 ec=33/15 lis/c=33/33 les/c/f=35/35/0 sis=48) [2] r=0 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:11:09 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 49 pg[7.e( empty local-lis/les=48/49 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=48) [2] r=0 lpr=48 pi=[37,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:11:09 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 49 pg[7.c( empty local-lis/les=48/49 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=48) [2] r=0 lpr=48 pi=[37,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:11:09 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 49 pg[3.1d( empty local-lis/les=48/49 n=0 ec=33/15 lis/c=33/33 les/c/f=35/35/0 sis=48) [2] r=0 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:11:09 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 49 pg[4.1a( empty local-lis/les=48/49 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=48) [2] r=0 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:11:09 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 49 pg[4.1b( empty local-lis/les=48/49 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=48) [2] r=0 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:11:09 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 49 pg[7.1a( empty local-lis/les=48/49 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=48) [2] r=0 lpr=48 pi=[37,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:11:09 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 49 pg[3.1e( empty local-lis/les=48/49 n=0 ec=33/15 lis/c=33/33 les/c/f=35/35/0 sis=48) [2] r=0 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:11:09 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 49 pg[3.8( empty local-lis/les=48/49 n=0 ec=33/15 lis/c=33/33 les/c/f=35/35/0 sis=48) [2] r=0 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:11:09 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 49 pg[4.d( empty local-lis/les=48/49 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=48) [1] r=0 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:11:09 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 49 pg[6.d( v 41'3 lc 41'1 (0'0,41'3] local-lis/les=48/49 n=2 ec=37/20 lis/c=37/37 les/c/f=39/39/0 sis=48) [1] r=0 lpr=48 pi=[37,48)/1 crt=41'3 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:11:09 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 49 pg[4.f( empty local-lis/les=48/49 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=48) [1] r=0 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:11:09 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 49 pg[6.f( v 41'5 lc 41'1 (0'0,41'5] local-lis/les=48/49 n=3 ec=37/20 lis/c=37/37 les/c/f=39/39/0 sis=48) [1] r=0 lpr=48 pi=[37,48)/1 crt=41'5 lcod 0'0 mlcod 0'0 active+degraded m=3 mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:11:09 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 49 pg[6.1( empty local-lis/les=48/49 n=0 ec=37/20 lis/c=37/37 les/c/f=39/39/0 sis=48) [1] r=0 lpr=48 pi=[37,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:11:09 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 49 pg[4.12( empty local-lis/les=48/49 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=48) [1] r=0 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:11:09 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 49 pg[4.14( empty local-lis/les=48/49 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=48) [1] r=0 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:11:09 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 49 pg[2.1b( empty local-lis/les=48/49 n=0 ec=33/13 lis/c=33/33 les/c/f=34/34/0 sis=48) [1] r=0 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:11:09 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 49 pg[4.2( empty local-lis/les=48/49 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=48) [1] r=0 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:11:09 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 49 pg[4.10( empty local-lis/les=48/49 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=48) [1] r=0 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:11:09 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 49 pg[5.11( empty local-lis/les=48/49 n=0 ec=35/19 lis/c=35/35 les/c/f=36/36/0 sis=48) [1] r=0 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:11:09 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 49 pg[2.17( empty local-lis/les=48/49 n=0 ec=33/13 lis/c=33/33 les/c/f=34/34/0 sis=48) [1] r=0 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:11:09 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 49 pg[5.13( empty local-lis/les=48/49 n=0 ec=35/19 lis/c=35/35 les/c/f=36/36/0 sis=48) [1] r=0 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:11:09 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 49 pg[5.12( empty local-lis/les=48/49 n=0 ec=35/19 lis/c=35/35 les/c/f=36/36/0 sis=48) [1] r=0 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:11:09 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 49 pg[2.15( empty local-lis/les=48/49 n=0 ec=33/13 lis/c=33/33 les/c/f=34/34/0 sis=48) [1] r=0 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:11:09 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 49 pg[5.16( empty local-lis/les=48/49 n=0 ec=35/19 lis/c=35/35 les/c/f=36/36/0 sis=48) [1] r=0 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:11:09 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 49 pg[5.9( empty local-lis/les=48/49 n=0 ec=35/19 lis/c=35/35 les/c/f=36/36/0 sis=48) [1] r=0 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:11:09 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 49 pg[5.1d( empty local-lis/les=48/49 n=0 ec=35/19 lis/c=35/35 les/c/f=36/36/0 sis=48) [1] r=0 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:11:09 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 49 pg[4.8( empty local-lis/les=48/49 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=48) [1] r=0 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:11:09 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 49 pg[2.d( empty local-lis/les=48/49 n=0 ec=33/13 lis/c=33/33 les/c/f=34/34/0 sis=48) [1] r=0 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:11:09 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 49 pg[4.9( empty local-lis/les=48/49 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=48) [1] r=0 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:11:09 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 49 pg[2.a( empty local-lis/les=48/49 n=0 ec=33/13 lis/c=33/33 les/c/f=34/34/0 sis=48) [1] r=0 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:11:09 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 49 pg[6.9( empty local-lis/les=48/49 n=0 ec=37/20 lis/c=37/37 les/c/f=39/39/0 sis=48) [1] r=0 lpr=48 pi=[37,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:11:09 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 49 pg[4.5( empty local-lis/les=48/49 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=48) [1] r=0 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:11:09 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 49 pg[6.5( v 41'3 lc 41'1 (0'0,41'3] local-lis/les=48/49 n=2 ec=37/20 lis/c=37/37 les/c/f=39/39/0 sis=48) [1] r=0 lpr=48 pi=[37,48)/1 crt=41'3 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:11:09 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 49 pg[4.7( empty local-lis/les=48/49 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=48) [1] r=0 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:11:09 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 49 pg[2.5( empty local-lis/les=48/49 n=0 ec=33/13 lis/c=33/33 les/c/f=34/34/0 sis=48) [1] r=0 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:11:09 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 49 pg[2.3( empty local-lis/les=48/49 n=0 ec=33/13 lis/c=33/33 les/c/f=34/34/0 sis=48) [1] r=0 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:11:09 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 49 pg[6.b( v 41'3 lc 0'0 (0'0,41'3] local-lis/les=48/49 n=1 ec=37/20 lis/c=37/37 les/c/f=39/39/0 sis=48) [1] r=0 lpr=48 pi=[37,48)/1 crt=41'3 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:11:09 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 49 pg[4.4( empty local-lis/les=48/49 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=48) [1] r=0 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:11:09 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 49 pg[2.4( empty local-lis/les=48/49 n=0 ec=33/13 lis/c=33/33 les/c/f=34/34/0 sis=48) [1] r=0 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:11:09 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 49 pg[2.7( empty local-lis/les=48/49 n=0 ec=33/13 lis/c=33/33 les/c/f=34/34/0 sis=48) [1] r=0 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:11:09 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 49 pg[5.1( empty local-lis/les=48/49 n=0 ec=35/19 lis/c=35/35 les/c/f=36/36/0 sis=48) [1] r=0 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:11:09 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 49 pg[5.f( empty local-lis/les=48/49 n=0 ec=35/19 lis/c=35/35 les/c/f=36/36/0 sis=48) [1] r=0 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:11:09 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 49 pg[5.c( empty local-lis/les=48/49 n=0 ec=35/19 lis/c=35/35 les/c/f=36/36/0 sis=48) [1] r=0 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:11:09 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 49 pg[5.1a( empty local-lis/les=48/49 n=0 ec=35/19 lis/c=35/35 les/c/f=36/36/0 sis=48) [1] r=0 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:11:09 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 49 pg[5.18( empty local-lis/les=48/49 n=0 ec=35/19 lis/c=35/35 les/c/f=36/36/0 sis=48) [1] r=0 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:11:09 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 49 pg[5.19( empty local-lis/les=48/49 n=0 ec=35/19 lis/c=35/35 les/c/f=36/36/0 sis=48) [1] r=0 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:11:09 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 49 pg[6.3( v 41'2 lc 0'0 (0'0,41'2] local-lis/les=48/49 n=2 ec=37/20 lis/c=37/37 les/c/f=39/39/0 sis=48) [1] r=0 lpr=48 pi=[37,48)/1 crt=41'2 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:11:09 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 49 pg[2.6( empty local-lis/les=48/49 n=0 ec=33/13 lis/c=33/33 les/c/f=34/34/0 sis=48) [1] r=0 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:11:09 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 49 pg[2.9( empty local-lis/les=48/49 n=0 ec=33/13 lis/c=33/33 les/c/f=34/34/0 sis=48) [1] r=0 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:11:09 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 49 pg[6.7( v 41'2 lc 41'1 (0'0,41'2] local-lis/les=48/49 n=1 ec=37/20 lis/c=37/37 les/c/f=39/39/0 sis=48) [1] r=0 lpr=48 pi=[37,48)/1 crt=41'2 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:11:09 np0005464214 podman[103505]: 2025-10-01 13:11:09.652782996 +0000 UTC m=+0.038612508 container create 208cec87962ca6d664d52af156edd1cfbf04be09aa54b298e85c6dee3d88e604 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_chebyshev, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 09:11:09 np0005464214 systemd[1]: Started libpod-conmon-208cec87962ca6d664d52af156edd1cfbf04be09aa54b298e85c6dee3d88e604.scope.
Oct  1 09:11:09 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:11:09 np0005464214 podman[103505]: 2025-10-01 13:11:09.635253852 +0000 UTC m=+0.021083384 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 09:11:09 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v111: 181 pgs: 181 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 75 KiB/s rd, 5.7 KiB/s wr, 186 op/s
Oct  1 09:11:09 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"} v 0) v1
Oct  1 09:11:09 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"}]: dispatch
Oct  1 09:11:09 np0005464214 podman[103505]: 2025-10-01 13:11:09.743452598 +0000 UTC m=+0.129282150 container init 208cec87962ca6d664d52af156edd1cfbf04be09aa54b298e85c6dee3d88e604 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_chebyshev, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default)
Oct  1 09:11:09 np0005464214 podman[103505]: 2025-10-01 13:11:09.755735113 +0000 UTC m=+0.141564625 container start 208cec87962ca6d664d52af156edd1cfbf04be09aa54b298e85c6dee3d88e604 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_chebyshev, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 09:11:09 np0005464214 podman[103505]: 2025-10-01 13:11:09.758704913 +0000 UTC m=+0.144534465 container attach 208cec87962ca6d664d52af156edd1cfbf04be09aa54b298e85c6dee3d88e604 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_chebyshev, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct  1 09:11:09 np0005464214 thirsty_chebyshev[103521]: 167 167
Oct  1 09:11:09 np0005464214 systemd[1]: libpod-208cec87962ca6d664d52af156edd1cfbf04be09aa54b298e85c6dee3d88e604.scope: Deactivated successfully.
Oct  1 09:11:09 np0005464214 podman[103505]: 2025-10-01 13:11:09.764526271 +0000 UTC m=+0.150355823 container died 208cec87962ca6d664d52af156edd1cfbf04be09aa54b298e85c6dee3d88e604 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_chebyshev, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 09:11:09 np0005464214 systemd[1]: var-lib-containers-storage-overlay-fe1fde2163a541d2752d350ea9e3afe895cb0b157a65f7c8881cf9c929ac5b42-merged.mount: Deactivated successfully.
Oct  1 09:11:09 np0005464214 podman[103505]: 2025-10-01 13:11:09.805846559 +0000 UTC m=+0.191676071 container remove 208cec87962ca6d664d52af156edd1cfbf04be09aa54b298e85c6dee3d88e604 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_chebyshev, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 09:11:09 np0005464214 systemd[1]: libpod-conmon-208cec87962ca6d664d52af156edd1cfbf04be09aa54b298e85c6dee3d88e604.scope: Deactivated successfully.
Oct  1 09:11:09 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  1 09:11:09 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"}]: dispatch
Oct  1 09:11:09 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e49 do_prune osdmap full prune enabled
Oct  1 09:11:10 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"}]': finished
Oct  1 09:11:10 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e50 e50: 3 total, 3 up, 3 in
Oct  1 09:11:10 np0005464214 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e50: 3 total, 3 up, 3 in
Oct  1 09:11:10 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 50 pg[6.e( v 41'3 (0'0,41'3] local-lis/les=37/39 n=1 ec=37/20 lis/c=37/37 les/c/f=39/39/0 sis=50 pruub=10.700785637s) [1] r=-1 lpr=50 pi=[37,50)/1 crt=41'2 lcod 41'2 mlcod 41'2 active pruub 84.860054016s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:11:10 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 50 pg[6.2( empty local-lis/les=37/39 n=0 ec=37/20 lis/c=37/37 les/c/f=39/39/0 sis=50 pruub=10.700929642s) [1] r=-1 lpr=50 pi=[37,50)/1 crt=0'0 mlcod 0'0 active pruub 84.860275269s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:11:10 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 50 pg[6.e( v 41'3 (0'0,41'3] local-lis/les=37/39 n=1 ec=37/20 lis/c=37/37 les/c/f=39/39/0 sis=50 pruub=10.700670242s) [1] r=-1 lpr=50 pi=[37,50)/1 crt=41'2 lcod 41'2 mlcod 0'0 unknown NOTIFY pruub 84.860054016s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 09:11:10 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 50 pg[6.2( empty local-lis/les=37/39 n=0 ec=37/20 lis/c=37/37 les/c/f=39/39/0 sis=50 pruub=10.700844765s) [1] r=-1 lpr=50 pi=[37,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 84.860275269s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 09:11:10 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 50 pg[6.a( v 41'1 (0'0,41'1] local-lis/les=37/39 n=0 ec=37/20 lis/c=37/37 les/c/f=39/39/0 sis=50 pruub=10.700228691s) [1] r=-1 lpr=50 pi=[37,50)/1 crt=0'0 lcod 0'0 mlcod 0'0 active pruub 84.860404968s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:11:10 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 50 pg[6.6( v 45'1 (0'0,45'1] local-lis/les=37/39 n=1 ec=37/20 lis/c=37/37 les/c/f=39/39/0 sis=50 pruub=10.694858551s) [1] r=-1 lpr=50 pi=[37,50)/1 crt=45'1 lcod 0'0 mlcod 0'0 active pruub 84.855392456s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:11:10 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 50 pg[6.6( v 45'1 (0'0,45'1] local-lis/les=37/39 n=1 ec=37/20 lis/c=37/37 les/c/f=39/39/0 sis=50 pruub=10.694758415s) [1] r=-1 lpr=50 pi=[37,50)/1 crt=45'1 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 84.855392456s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 09:11:10 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 50 pg[6.a( v 41'1 (0'0,41'1] local-lis/les=37/39 n=0 ec=37/20 lis/c=37/37 les/c/f=39/39/0 sis=50 pruub=10.700108528s) [1] r=-1 lpr=50 pi=[37,50)/1 crt=0'0 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 84.860404968s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 09:11:10 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 50 pg[6.2( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=39/39/0 sis=50) [1] r=0 lpr=50 pi=[37,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:11:10 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 50 pg[6.6( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=39/39/0 sis=50) [1] r=0 lpr=50 pi=[37,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:11:10 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 50 pg[6.e( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=39/39/0 sis=50) [1] r=0 lpr=50 pi=[37,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:11:10 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 50 pg[6.a( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=39/39/0 sis=50) [1] r=0 lpr=50 pi=[37,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:11:10 np0005464214 podman[103544]: 2025-10-01 13:11:10.030883346 +0000 UTC m=+0.069202940 container create 818106debff665c6eb223c0ad5b07edc6c34e73ad8e517b3a2e557795413f164 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_noether, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Oct  1 09:11:10 np0005464214 systemd[1]: Started libpod-conmon-818106debff665c6eb223c0ad5b07edc6c34e73ad8e517b3a2e557795413f164.scope.
Oct  1 09:11:10 np0005464214 ceph-osd[90500]: log_channel(cluster) log [DBG] : 2.c scrub starts
Oct  1 09:11:10 np0005464214 podman[103544]: 2025-10-01 13:11:10.005680738 +0000 UTC m=+0.044000422 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 09:11:10 np0005464214 ceph-osd[90500]: log_channel(cluster) log [DBG] : 2.c scrub ok
Oct  1 09:11:10 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:11:10 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc379d766997e9264fdc3cbe14f0e8c9fcfb314b128c867d4c51df28295b0251/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 09:11:10 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc379d766997e9264fdc3cbe14f0e8c9fcfb314b128c867d4c51df28295b0251/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 09:11:10 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc379d766997e9264fdc3cbe14f0e8c9fcfb314b128c867d4c51df28295b0251/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 09:11:10 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc379d766997e9264fdc3cbe14f0e8c9fcfb314b128c867d4c51df28295b0251/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 09:11:10 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc379d766997e9264fdc3cbe14f0e8c9fcfb314b128c867d4c51df28295b0251/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  1 09:11:10 np0005464214 podman[103544]: 2025-10-01 13:11:10.146637553 +0000 UTC m=+0.184957167 container init 818106debff665c6eb223c0ad5b07edc6c34e73ad8e517b3a2e557795413f164 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_noether, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct  1 09:11:10 np0005464214 podman[103544]: 2025-10-01 13:11:10.164633372 +0000 UTC m=+0.202952966 container start 818106debff665c6eb223c0ad5b07edc6c34e73ad8e517b3a2e557795413f164 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_noether, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 09:11:10 np0005464214 podman[103544]: 2025-10-01 13:11:10.168525 +0000 UTC m=+0.206844624 container attach 818106debff665c6eb223c0ad5b07edc6c34e73ad8e517b3a2e557795413f164 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_noether, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 09:11:10 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e50 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:11:11 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"}]': finished
Oct  1 09:11:11 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e50 do_prune osdmap full prune enabled
Oct  1 09:11:11 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e51 e51: 3 total, 3 up, 3 in
Oct  1 09:11:11 np0005464214 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e51: 3 total, 3 up, 3 in
Oct  1 09:11:11 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 51 pg[6.2( empty local-lis/les=50/51 n=0 ec=37/20 lis/c=37/37 les/c/f=39/39/0 sis=50) [1] r=0 lpr=50 pi=[37,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:11:11 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 51 pg[6.a( v 41'1 (0'0,41'1] local-lis/les=50/51 n=0 ec=37/20 lis/c=37/37 les/c/f=39/39/0 sis=50) [1] r=0 lpr=50 pi=[37,50)/1 crt=41'1 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:11:11 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 51 pg[6.6( v 45'1 lc 0'0 (0'0,45'1] local-lis/les=50/51 n=1 ec=37/20 lis/c=37/37 les/c/f=39/39/0 sis=50) [1] r=0 lpr=50 pi=[37,50)/1 crt=45'1 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:11:11 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 51 pg[6.e( v 41'3 lc 41'1 (0'0,41'3] local-lis/les=50/51 n=1 ec=37/20 lis/c=37/37 les/c/f=39/39/0 sis=50) [1] r=0 lpr=50 pi=[37,50)/1 crt=41'3 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:11:11 np0005464214 ceph-osd[88455]: log_channel(cluster) log [DBG] : 4.c deep-scrub starts
Oct  1 09:11:11 np0005464214 ceph-osd[88455]: log_channel(cluster) log [DBG] : 4.c deep-scrub ok
Oct  1 09:11:11 np0005464214 pensive_noether[103562]: --> passed data devices: 0 physical, 3 LVM
Oct  1 09:11:11 np0005464214 pensive_noether[103562]: --> relative data size: 1.0
Oct  1 09:11:11 np0005464214 pensive_noether[103562]: --> All data devices are unavailable
Oct  1 09:11:11 np0005464214 systemd[1]: libpod-818106debff665c6eb223c0ad5b07edc6c34e73ad8e517b3a2e557795413f164.scope: Deactivated successfully.
Oct  1 09:11:11 np0005464214 systemd[1]: libpod-818106debff665c6eb223c0ad5b07edc6c34e73ad8e517b3a2e557795413f164.scope: Consumed 1.064s CPU time.
Oct  1 09:11:11 np0005464214 podman[103544]: 2025-10-01 13:11:11.294990764 +0000 UTC m=+1.333310398 container died 818106debff665c6eb223c0ad5b07edc6c34e73ad8e517b3a2e557795413f164 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_noether, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 09:11:11 np0005464214 systemd[1]: var-lib-containers-storage-overlay-cc379d766997e9264fdc3cbe14f0e8c9fcfb314b128c867d4c51df28295b0251-merged.mount: Deactivated successfully.
Oct  1 09:11:11 np0005464214 ceph-osd[89484]: log_channel(cluster) log [DBG] : 3.b scrub starts
Oct  1 09:11:11 np0005464214 ceph-osd[89484]: log_channel(cluster) log [DBG] : 3.b scrub ok
Oct  1 09:11:11 np0005464214 podman[103544]: 2025-10-01 13:11:11.371885757 +0000 UTC m=+1.410205371 container remove 818106debff665c6eb223c0ad5b07edc6c34e73ad8e517b3a2e557795413f164 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_noether, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 09:11:11 np0005464214 systemd[1]: libpod-conmon-818106debff665c6eb223c0ad5b07edc6c34e73ad8e517b3a2e557795413f164.scope: Deactivated successfully.
Oct  1 09:11:11 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v114: 181 pgs: 181 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:11:11 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"} v 0) v1
Oct  1 09:11:11 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"}]: dispatch
Oct  1 09:11:12 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e51 do_prune osdmap full prune enabled
Oct  1 09:11:12 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"}]': finished
Oct  1 09:11:12 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e52 e52: 3 total, 3 up, 3 in
Oct  1 09:11:12 np0005464214 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e52: 3 total, 3 up, 3 in
Oct  1 09:11:12 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"}]: dispatch
Oct  1 09:11:12 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 52 pg[6.7( v 41'2 (0'0,41'2] local-lis/les=48/49 n=1 ec=37/20 lis/c=48/48 les/c/f=49/49/0 sis=52 pruub=12.915916443s) [0] r=-1 lpr=52 pi=[48,52)/1 crt=41'2 mlcod 41'2 active pruub 84.197708130s@ mbc={255={}}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:11:12 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 52 pg[6.b( v 41'3 (0'0,41'3] local-lis/les=48/49 n=1 ec=37/20 lis/c=48/48 les/c/f=49/49/0 sis=52 pruub=12.906821251s) [0] r=-1 lpr=52 pi=[48,52)/1 crt=41'3 mlcod 41'3 active pruub 84.188560486s@ mbc={255={}}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:11:12 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 52 pg[6.7( v 41'2 (0'0,41'2] local-lis/les=48/49 n=1 ec=37/20 lis/c=48/48 les/c/f=49/49/0 sis=52 pruub=12.915840149s) [0] r=-1 lpr=52 pi=[48,52)/1 crt=41'2 mlcod 0'0 unknown NOTIFY pruub 84.197708130s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 09:11:12 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 52 pg[6.b( v 41'3 (0'0,41'3] local-lis/les=48/49 n=1 ec=37/20 lis/c=48/48 les/c/f=49/49/0 sis=52 pruub=12.906641006s) [0] r=-1 lpr=52 pi=[48,52)/1 crt=41'3 mlcod 0'0 unknown NOTIFY pruub 84.188560486s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 09:11:12 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 52 pg[6.3( v 41'2 (0'0,41'2] local-lis/les=48/49 n=2 ec=37/20 lis/c=48/48 les/c/f=49/49/0 sis=52 pruub=12.908492088s) [0] r=-1 lpr=52 pi=[48,52)/1 crt=41'2 mlcod 41'2 active pruub 84.190483093s@ mbc={255={}}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:11:12 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 52 pg[6.3( v 41'2 (0'0,41'2] local-lis/les=48/49 n=2 ec=37/20 lis/c=48/48 les/c/f=49/49/0 sis=52 pruub=12.908445358s) [0] r=-1 lpr=52 pi=[48,52)/1 crt=41'2 mlcod 0'0 unknown NOTIFY pruub 84.190483093s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 09:11:12 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 52 pg[6.f( v 41'5 (0'0,41'5] local-lis/les=48/49 n=3 ec=37/20 lis/c=48/48 les/c/f=49/49/0 sis=52 pruub=12.904949188s) [0] r=-1 lpr=52 pi=[48,52)/1 crt=41'5 mlcod 41'5 active pruub 84.187118530s@ mbc={255={}}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:11:12 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 52 pg[6.f( v 41'5 (0'0,41'5] local-lis/les=48/49 n=3 ec=37/20 lis/c=48/48 les/c/f=49/49/0 sis=52 pruub=12.904909134s) [0] r=-1 lpr=52 pi=[48,52)/1 crt=41'5 mlcod 0'0 unknown NOTIFY pruub 84.187118530s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 09:11:12 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 52 pg[6.7( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=48/48 les/c/f=49/49/0 sis=52) [0] r=0 lpr=52 pi=[48,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:11:12 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 52 pg[6.b( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=48/48 les/c/f=49/49/0 sis=52) [0] r=0 lpr=52 pi=[48,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:11:12 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 52 pg[6.3( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=48/48 les/c/f=49/49/0 sis=52) [0] r=0 lpr=52 pi=[48,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:11:12 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 52 pg[6.f( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=48/48 les/c/f=49/49/0 sis=52) [0] r=0 lpr=52 pi=[48,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:11:12 np0005464214 podman[103743]: 2025-10-01 13:11:12.17305461 +0000 UTC m=+0.040741652 container create a3a83a2c169b740bad7b8aabbea05051dc6a2a2bf0f02e243559531d19e23188 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_hodgkin, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 09:11:12 np0005464214 systemd[1]: Started libpod-conmon-a3a83a2c169b740bad7b8aabbea05051dc6a2a2bf0f02e243559531d19e23188.scope.
Oct  1 09:11:12 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:11:12 np0005464214 podman[103743]: 2025-10-01 13:11:12.155199586 +0000 UTC m=+0.022886628 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 09:11:12 np0005464214 podman[103743]: 2025-10-01 13:11:12.261844596 +0000 UTC m=+0.129531648 container init a3a83a2c169b740bad7b8aabbea05051dc6a2a2bf0f02e243559531d19e23188 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_hodgkin, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct  1 09:11:12 np0005464214 podman[103743]: 2025-10-01 13:11:12.274259444 +0000 UTC m=+0.141946466 container start a3a83a2c169b740bad7b8aabbea05051dc6a2a2bf0f02e243559531d19e23188 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_hodgkin, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0)
Oct  1 09:11:12 np0005464214 podman[103743]: 2025-10-01 13:11:12.278004048 +0000 UTC m=+0.145691090 container attach a3a83a2c169b740bad7b8aabbea05051dc6a2a2bf0f02e243559531d19e23188 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_hodgkin, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Oct  1 09:11:12 np0005464214 admiring_hodgkin[103759]: 167 167
Oct  1 09:11:12 np0005464214 systemd[1]: libpod-a3a83a2c169b740bad7b8aabbea05051dc6a2a2bf0f02e243559531d19e23188.scope: Deactivated successfully.
Oct  1 09:11:12 np0005464214 podman[103743]: 2025-10-01 13:11:12.281965618 +0000 UTC m=+0.149652660 container died a3a83a2c169b740bad7b8aabbea05051dc6a2a2bf0f02e243559531d19e23188 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_hodgkin, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 09:11:12 np0005464214 systemd[1]: var-lib-containers-storage-overlay-a90215f681a310f88ea1a9ea1cebb5f60ad0f4e9324217cbedc670cebaa20ae1-merged.mount: Deactivated successfully.
Oct  1 09:11:12 np0005464214 podman[103743]: 2025-10-01 13:11:12.325135354 +0000 UTC m=+0.192822386 container remove a3a83a2c169b740bad7b8aabbea05051dc6a2a2bf0f02e243559531d19e23188 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_hodgkin, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 09:11:12 np0005464214 systemd[1]: libpod-conmon-a3a83a2c169b740bad7b8aabbea05051dc6a2a2bf0f02e243559531d19e23188.scope: Deactivated successfully.
Oct  1 09:11:12 np0005464214 podman[103784]: 2025-10-01 13:11:12.496824066 +0000 UTC m=+0.065263490 container create 93e31a187f118886e3e03eab4cb0e07f8bb22277a2044fef4853971019db53c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_archimedes, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Oct  1 09:11:12 np0005464214 systemd[1]: Started libpod-conmon-93e31a187f118886e3e03eab4cb0e07f8bb22277a2044fef4853971019db53c1.scope.
Oct  1 09:11:12 np0005464214 podman[103784]: 2025-10-01 13:11:12.474023181 +0000 UTC m=+0.042462595 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 09:11:12 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:11:12 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d30da8a6188c7f22997abf608ea7657f938fb9f5c59c1c2c2040a85918b0ec76/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 09:11:12 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d30da8a6188c7f22997abf608ea7657f938fb9f5c59c1c2c2040a85918b0ec76/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 09:11:12 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d30da8a6188c7f22997abf608ea7657f938fb9f5c59c1c2c2040a85918b0ec76/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 09:11:12 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d30da8a6188c7f22997abf608ea7657f938fb9f5c59c1c2c2040a85918b0ec76/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 09:11:12 np0005464214 podman[103784]: 2025-10-01 13:11:12.61346886 +0000 UTC m=+0.181908284 container init 93e31a187f118886e3e03eab4cb0e07f8bb22277a2044fef4853971019db53c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_archimedes, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct  1 09:11:12 np0005464214 podman[103784]: 2025-10-01 13:11:12.624469525 +0000 UTC m=+0.192908929 container start 93e31a187f118886e3e03eab4cb0e07f8bb22277a2044fef4853971019db53c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_archimedes, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct  1 09:11:12 np0005464214 podman[103784]: 2025-10-01 13:11:12.627664223 +0000 UTC m=+0.196103667 container attach 93e31a187f118886e3e03eab4cb0e07f8bb22277a2044fef4853971019db53c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_archimedes, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Oct  1 09:11:12 np0005464214 ceph-mgr[75103]: [progress INFO root] Writing back 12 completed events
Oct  1 09:11:12 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Oct  1 09:11:12 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:11:13 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e52 do_prune osdmap full prune enabled
Oct  1 09:11:13 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"}]': finished
Oct  1 09:11:13 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:11:13 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e53 e53: 3 total, 3 up, 3 in
Oct  1 09:11:13 np0005464214 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e53: 3 total, 3 up, 3 in
Oct  1 09:11:13 np0005464214 ceph-osd[90500]: log_channel(cluster) log [DBG] : 2.e scrub starts
Oct  1 09:11:13 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 53 pg[6.f( v 41'5 lc 41'1 (0'0,41'5] local-lis/les=52/53 n=3 ec=37/20 lis/c=48/48 les/c/f=49/49/0 sis=52) [0] r=0 lpr=52 pi=[48,52)/1 crt=41'5 lcod 0'0 mlcod 0'0 active+degraded m=3 mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:11:13 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 53 pg[6.3( v 41'2 lc 0'0 (0'0,41'2] local-lis/les=52/53 n=2 ec=37/20 lis/c=48/48 les/c/f=49/49/0 sis=52) [0] r=0 lpr=52 pi=[48,52)/1 crt=41'2 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:11:13 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 53 pg[6.b( v 41'3 lc 0'0 (0'0,41'3] local-lis/les=52/53 n=1 ec=37/20 lis/c=48/48 les/c/f=49/49/0 sis=52) [0] r=0 lpr=52 pi=[48,52)/1 crt=41'3 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:11:13 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 53 pg[6.7( v 41'2 lc 41'1 (0'0,41'2] local-lis/les=52/53 n=1 ec=37/20 lis/c=48/48 les/c/f=49/49/0 sis=52) [0] r=0 lpr=52 pi=[48,52)/1 crt=41'2 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:11:13 np0005464214 ceph-osd[90500]: log_channel(cluster) log [DBG] : 2.e scrub ok
Oct  1 09:11:13 np0005464214 quirky_archimedes[103801]: {
Oct  1 09:11:13 np0005464214 quirky_archimedes[103801]:    "0": [
Oct  1 09:11:13 np0005464214 quirky_archimedes[103801]:        {
Oct  1 09:11:13 np0005464214 quirky_archimedes[103801]:            "devices": [
Oct  1 09:11:13 np0005464214 quirky_archimedes[103801]:                "/dev/loop3"
Oct  1 09:11:13 np0005464214 quirky_archimedes[103801]:            ],
Oct  1 09:11:13 np0005464214 quirky_archimedes[103801]:            "lv_name": "ceph_lv0",
Oct  1 09:11:13 np0005464214 quirky_archimedes[103801]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  1 09:11:13 np0005464214 quirky_archimedes[103801]:            "lv_size": "21470642176",
Oct  1 09:11:13 np0005464214 quirky_archimedes[103801]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 09:11:13 np0005464214 quirky_archimedes[103801]:            "lv_uuid": "wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS",
Oct  1 09:11:13 np0005464214 quirky_archimedes[103801]:            "name": "ceph_lv0",
Oct  1 09:11:13 np0005464214 quirky_archimedes[103801]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  1 09:11:13 np0005464214 quirky_archimedes[103801]:            "tags": {
Oct  1 09:11:13 np0005464214 quirky_archimedes[103801]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  1 09:11:13 np0005464214 quirky_archimedes[103801]:                "ceph.block_uuid": "wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS",
Oct  1 09:11:13 np0005464214 quirky_archimedes[103801]:                "ceph.cephx_lockbox_secret": "",
Oct  1 09:11:13 np0005464214 quirky_archimedes[103801]:                "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 09:11:13 np0005464214 quirky_archimedes[103801]:                "ceph.cluster_name": "ceph",
Oct  1 09:11:13 np0005464214 quirky_archimedes[103801]:                "ceph.crush_device_class": "",
Oct  1 09:11:13 np0005464214 quirky_archimedes[103801]:                "ceph.encrypted": "0",
Oct  1 09:11:13 np0005464214 quirky_archimedes[103801]:                "ceph.osd_fsid": "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982",
Oct  1 09:11:13 np0005464214 quirky_archimedes[103801]:                "ceph.osd_id": "0",
Oct  1 09:11:13 np0005464214 quirky_archimedes[103801]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 09:11:13 np0005464214 quirky_archimedes[103801]:                "ceph.type": "block",
Oct  1 09:11:13 np0005464214 quirky_archimedes[103801]:                "ceph.vdo": "0"
Oct  1 09:11:13 np0005464214 quirky_archimedes[103801]:            },
Oct  1 09:11:13 np0005464214 quirky_archimedes[103801]:            "type": "block",
Oct  1 09:11:13 np0005464214 quirky_archimedes[103801]:            "vg_name": "ceph_vg0"
Oct  1 09:11:13 np0005464214 quirky_archimedes[103801]:        }
Oct  1 09:11:13 np0005464214 quirky_archimedes[103801]:    ],
Oct  1 09:11:13 np0005464214 quirky_archimedes[103801]:    "1": [
Oct  1 09:11:13 np0005464214 quirky_archimedes[103801]:        {
Oct  1 09:11:13 np0005464214 quirky_archimedes[103801]:            "devices": [
Oct  1 09:11:13 np0005464214 quirky_archimedes[103801]:                "/dev/loop4"
Oct  1 09:11:13 np0005464214 quirky_archimedes[103801]:            ],
Oct  1 09:11:13 np0005464214 quirky_archimedes[103801]:            "lv_name": "ceph_lv1",
Oct  1 09:11:13 np0005464214 quirky_archimedes[103801]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  1 09:11:13 np0005464214 quirky_archimedes[103801]:            "lv_size": "21470642176",
Oct  1 09:11:13 np0005464214 quirky_archimedes[103801]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f5852bc7-e830-489a-b8a9-42dfbbe71426,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 09:11:13 np0005464214 quirky_archimedes[103801]:            "lv_uuid": "BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY",
Oct  1 09:11:13 np0005464214 quirky_archimedes[103801]:            "name": "ceph_lv1",
Oct  1 09:11:13 np0005464214 quirky_archimedes[103801]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  1 09:11:13 np0005464214 quirky_archimedes[103801]:            "tags": {
Oct  1 09:11:13 np0005464214 quirky_archimedes[103801]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  1 09:11:13 np0005464214 quirky_archimedes[103801]:                "ceph.block_uuid": "BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY",
Oct  1 09:11:13 np0005464214 quirky_archimedes[103801]:                "ceph.cephx_lockbox_secret": "",
Oct  1 09:11:13 np0005464214 quirky_archimedes[103801]:                "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 09:11:13 np0005464214 quirky_archimedes[103801]:                "ceph.cluster_name": "ceph",
Oct  1 09:11:13 np0005464214 quirky_archimedes[103801]:                "ceph.crush_device_class": "",
Oct  1 09:11:13 np0005464214 quirky_archimedes[103801]:                "ceph.encrypted": "0",
Oct  1 09:11:13 np0005464214 quirky_archimedes[103801]:                "ceph.osd_fsid": "f5852bc7-e830-489a-b8a9-42dfbbe71426",
Oct  1 09:11:13 np0005464214 quirky_archimedes[103801]:                "ceph.osd_id": "1",
Oct  1 09:11:13 np0005464214 quirky_archimedes[103801]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 09:11:13 np0005464214 quirky_archimedes[103801]:                "ceph.type": "block",
Oct  1 09:11:13 np0005464214 quirky_archimedes[103801]:                "ceph.vdo": "0"
Oct  1 09:11:13 np0005464214 quirky_archimedes[103801]:            },
Oct  1 09:11:13 np0005464214 quirky_archimedes[103801]:            "type": "block",
Oct  1 09:11:13 np0005464214 quirky_archimedes[103801]:            "vg_name": "ceph_vg1"
Oct  1 09:11:13 np0005464214 quirky_archimedes[103801]:        }
Oct  1 09:11:13 np0005464214 quirky_archimedes[103801]:    ],
Oct  1 09:11:13 np0005464214 quirky_archimedes[103801]:    "2": [
Oct  1 09:11:13 np0005464214 quirky_archimedes[103801]:        {
Oct  1 09:11:13 np0005464214 quirky_archimedes[103801]:            "devices": [
Oct  1 09:11:13 np0005464214 quirky_archimedes[103801]:                "/dev/loop5"
Oct  1 09:11:13 np0005464214 quirky_archimedes[103801]:            ],
Oct  1 09:11:13 np0005464214 quirky_archimedes[103801]:            "lv_name": "ceph_lv2",
Oct  1 09:11:13 np0005464214 quirky_archimedes[103801]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  1 09:11:13 np0005464214 quirky_archimedes[103801]:            "lv_size": "21470642176",
Oct  1 09:11:13 np0005464214 quirky_archimedes[103801]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c4c937e2-a8a8-47c3-af37-fdedb6fff1f9,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 09:11:13 np0005464214 quirky_archimedes[103801]:            "lv_uuid": "mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK",
Oct  1 09:11:13 np0005464214 quirky_archimedes[103801]:            "name": "ceph_lv2",
Oct  1 09:11:13 np0005464214 quirky_archimedes[103801]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  1 09:11:13 np0005464214 quirky_archimedes[103801]:            "tags": {
Oct  1 09:11:13 np0005464214 quirky_archimedes[103801]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  1 09:11:13 np0005464214 quirky_archimedes[103801]:                "ceph.block_uuid": "mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK",
Oct  1 09:11:13 np0005464214 quirky_archimedes[103801]:                "ceph.cephx_lockbox_secret": "",
Oct  1 09:11:13 np0005464214 quirky_archimedes[103801]:                "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 09:11:13 np0005464214 quirky_archimedes[103801]:                "ceph.cluster_name": "ceph",
Oct  1 09:11:13 np0005464214 quirky_archimedes[103801]:                "ceph.crush_device_class": "",
Oct  1 09:11:13 np0005464214 quirky_archimedes[103801]:                "ceph.encrypted": "0",
Oct  1 09:11:13 np0005464214 quirky_archimedes[103801]:                "ceph.osd_fsid": "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9",
Oct  1 09:11:13 np0005464214 quirky_archimedes[103801]:                "ceph.osd_id": "2",
Oct  1 09:11:13 np0005464214 quirky_archimedes[103801]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 09:11:13 np0005464214 quirky_archimedes[103801]:                "ceph.type": "block",
Oct  1 09:11:13 np0005464214 quirky_archimedes[103801]:                "ceph.vdo": "0"
Oct  1 09:11:13 np0005464214 quirky_archimedes[103801]:            },
Oct  1 09:11:13 np0005464214 quirky_archimedes[103801]:            "type": "block",
Oct  1 09:11:13 np0005464214 quirky_archimedes[103801]:            "vg_name": "ceph_vg2"
Oct  1 09:11:13 np0005464214 quirky_archimedes[103801]:        }
Oct  1 09:11:13 np0005464214 quirky_archimedes[103801]:    ]
Oct  1 09:11:13 np0005464214 quirky_archimedes[103801]: }
Oct  1 09:11:13 np0005464214 podman[103784]: 2025-10-01 13:11:13.41356267 +0000 UTC m=+0.982002094 container died 93e31a187f118886e3e03eab4cb0e07f8bb22277a2044fef4853971019db53c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_archimedes, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef)
Oct  1 09:11:13 np0005464214 systemd[1]: libpod-93e31a187f118886e3e03eab4cb0e07f8bb22277a2044fef4853971019db53c1.scope: Deactivated successfully.
Oct  1 09:11:13 np0005464214 systemd[1]: var-lib-containers-storage-overlay-d30da8a6188c7f22997abf608ea7657f938fb9f5c59c1c2c2040a85918b0ec76-merged.mount: Deactivated successfully.
Oct  1 09:11:13 np0005464214 podman[103784]: 2025-10-01 13:11:13.49464069 +0000 UTC m=+1.063080114 container remove 93e31a187f118886e3e03eab4cb0e07f8bb22277a2044fef4853971019db53c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_archimedes, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 09:11:13 np0005464214 systemd[1]: libpod-conmon-93e31a187f118886e3e03eab4cb0e07f8bb22277a2044fef4853971019db53c1.scope: Deactivated successfully.
Oct  1 09:11:13 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v117: 181 pgs: 4 peering, 177 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 199 B/s, 2 keys/s, 3 objects/s recovering
Oct  1 09:11:14 np0005464214 podman[103967]: 2025-10-01 13:11:14.357048618 +0000 UTC m=+0.058505523 container create 55a4ac52b654192137c0ab96bfb60485763b3708a9c68adefd07f5494aac6689 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_pascal, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct  1 09:11:14 np0005464214 systemd[1]: Started libpod-conmon-55a4ac52b654192137c0ab96bfb60485763b3708a9c68adefd07f5494aac6689.scope.
Oct  1 09:11:14 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:11:14 np0005464214 podman[103967]: 2025-10-01 13:11:14.337460892 +0000 UTC m=+0.038917787 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 09:11:14 np0005464214 podman[103967]: 2025-10-01 13:11:14.445461103 +0000 UTC m=+0.146918018 container init 55a4ac52b654192137c0ab96bfb60485763b3708a9c68adefd07f5494aac6689 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_pascal, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Oct  1 09:11:14 np0005464214 podman[103967]: 2025-10-01 13:11:14.452776086 +0000 UTC m=+0.154232961 container start 55a4ac52b654192137c0ab96bfb60485763b3708a9c68adefd07f5494aac6689 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_pascal, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 09:11:14 np0005464214 podman[103967]: 2025-10-01 13:11:14.455895611 +0000 UTC m=+0.157352536 container attach 55a4ac52b654192137c0ab96bfb60485763b3708a9c68adefd07f5494aac6689 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_pascal, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 09:11:14 np0005464214 laughing_pascal[103983]: 167 167
Oct  1 09:11:14 np0005464214 systemd[1]: libpod-55a4ac52b654192137c0ab96bfb60485763b3708a9c68adefd07f5494aac6689.scope: Deactivated successfully.
Oct  1 09:11:14 np0005464214 conmon[103983]: conmon 55a4ac52b654192137c0 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-55a4ac52b654192137c0ab96bfb60485763b3708a9c68adefd07f5494aac6689.scope/container/memory.events
Oct  1 09:11:14 np0005464214 podman[103988]: 2025-10-01 13:11:14.523222982 +0000 UTC m=+0.045166467 container died 55a4ac52b654192137c0ab96bfb60485763b3708a9c68adefd07f5494aac6689 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_pascal, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Oct  1 09:11:14 np0005464214 systemd[1]: var-lib-containers-storage-overlay-7d073172abde0bcbb35de0b6dfc831f3a4e0fd7f5f97e48031c04ecf1742a483-merged.mount: Deactivated successfully.
Oct  1 09:11:14 np0005464214 podman[103988]: 2025-10-01 13:11:14.574804263 +0000 UTC m=+0.096747698 container remove 55a4ac52b654192137c0ab96bfb60485763b3708a9c68adefd07f5494aac6689 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_pascal, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 09:11:14 np0005464214 systemd[1]: libpod-conmon-55a4ac52b654192137c0ab96bfb60485763b3708a9c68adefd07f5494aac6689.scope: Deactivated successfully.
Oct  1 09:11:14 np0005464214 podman[104009]: 2025-10-01 13:11:14.822257494 +0000 UTC m=+0.066036343 container create fc803cdfa7f67426125bda5ac62eda6d15cda63e95d080bd0561d0e4fb1f98dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_northcutt, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct  1 09:11:14 np0005464214 systemd[1]: Started libpod-conmon-fc803cdfa7f67426125bda5ac62eda6d15cda63e95d080bd0561d0e4fb1f98dd.scope.
Oct  1 09:11:14 np0005464214 podman[104009]: 2025-10-01 13:11:14.797352675 +0000 UTC m=+0.041131534 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 09:11:14 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:11:14 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/92994c209488df1e9749e000421777294e7e24b6d0dc01df3e4c0908922b8e88/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 09:11:14 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/92994c209488df1e9749e000421777294e7e24b6d0dc01df3e4c0908922b8e88/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 09:11:14 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/92994c209488df1e9749e000421777294e7e24b6d0dc01df3e4c0908922b8e88/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 09:11:14 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/92994c209488df1e9749e000421777294e7e24b6d0dc01df3e4c0908922b8e88/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 09:11:14 np0005464214 podman[104009]: 2025-10-01 13:11:14.919120845 +0000 UTC m=+0.162899754 container init fc803cdfa7f67426125bda5ac62eda6d15cda63e95d080bd0561d0e4fb1f98dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_northcutt, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 09:11:14 np0005464214 podman[104009]: 2025-10-01 13:11:14.932730661 +0000 UTC m=+0.176509510 container start fc803cdfa7f67426125bda5ac62eda6d15cda63e95d080bd0561d0e4fb1f98dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_northcutt, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 09:11:14 np0005464214 podman[104009]: 2025-10-01 13:11:14.942021653 +0000 UTC m=+0.185800502 container attach fc803cdfa7f67426125bda5ac62eda6d15cda63e95d080bd0561d0e4fb1f98dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_northcutt, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 09:11:15 np0005464214 ceph-osd[88455]: log_channel(cluster) log [DBG] : 4.15 scrub starts
Oct  1 09:11:15 np0005464214 ceph-osd[88455]: log_channel(cluster) log [DBG] : 4.15 scrub ok
Oct  1 09:11:15 np0005464214 ceph-osd[89484]: log_channel(cluster) log [DBG] : 3.d scrub starts
Oct  1 09:11:15 np0005464214 ceph-osd[89484]: log_channel(cluster) log [DBG] : 3.d scrub ok
Oct  1 09:11:15 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e53 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:11:15 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v118: 181 pgs: 4 peering, 177 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 139 B/s, 1 keys/s, 2 objects/s recovering
Oct  1 09:11:15 np0005464214 upbeat_northcutt[104026]: {
Oct  1 09:11:15 np0005464214 upbeat_northcutt[104026]:    "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982": {
Oct  1 09:11:15 np0005464214 upbeat_northcutt[104026]:        "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 09:11:15 np0005464214 upbeat_northcutt[104026]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  1 09:11:15 np0005464214 upbeat_northcutt[104026]:        "osd_id": 0,
Oct  1 09:11:15 np0005464214 upbeat_northcutt[104026]:        "osd_uuid": "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982",
Oct  1 09:11:15 np0005464214 upbeat_northcutt[104026]:        "type": "bluestore"
Oct  1 09:11:15 np0005464214 upbeat_northcutt[104026]:    },
Oct  1 09:11:15 np0005464214 upbeat_northcutt[104026]:    "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9": {
Oct  1 09:11:15 np0005464214 upbeat_northcutt[104026]:        "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 09:11:15 np0005464214 upbeat_northcutt[104026]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  1 09:11:15 np0005464214 upbeat_northcutt[104026]:        "osd_id": 2,
Oct  1 09:11:15 np0005464214 upbeat_northcutt[104026]:        "osd_uuid": "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9",
Oct  1 09:11:15 np0005464214 upbeat_northcutt[104026]:        "type": "bluestore"
Oct  1 09:11:15 np0005464214 upbeat_northcutt[104026]:    },
Oct  1 09:11:15 np0005464214 upbeat_northcutt[104026]:    "f5852bc7-e830-489a-b8a9-42dfbbe71426": {
Oct  1 09:11:15 np0005464214 upbeat_northcutt[104026]:        "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 09:11:15 np0005464214 upbeat_northcutt[104026]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  1 09:11:15 np0005464214 upbeat_northcutt[104026]:        "osd_id": 1,
Oct  1 09:11:15 np0005464214 upbeat_northcutt[104026]:        "osd_uuid": "f5852bc7-e830-489a-b8a9-42dfbbe71426",
Oct  1 09:11:15 np0005464214 upbeat_northcutt[104026]:        "type": "bluestore"
Oct  1 09:11:15 np0005464214 upbeat_northcutt[104026]:    }
Oct  1 09:11:15 np0005464214 upbeat_northcutt[104026]: }
Oct  1 09:11:15 np0005464214 systemd[1]: libpod-fc803cdfa7f67426125bda5ac62eda6d15cda63e95d080bd0561d0e4fb1f98dd.scope: Deactivated successfully.
Oct  1 09:11:15 np0005464214 systemd[1]: libpod-fc803cdfa7f67426125bda5ac62eda6d15cda63e95d080bd0561d0e4fb1f98dd.scope: Consumed 1.034s CPU time.
Oct  1 09:11:15 np0005464214 podman[104009]: 2025-10-01 13:11:15.965349956 +0000 UTC m=+1.209128825 container died fc803cdfa7f67426125bda5ac62eda6d15cda63e95d080bd0561d0e4fb1f98dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_northcutt, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True)
Oct  1 09:11:16 np0005464214 systemd[1]: var-lib-containers-storage-overlay-92994c209488df1e9749e000421777294e7e24b6d0dc01df3e4c0908922b8e88-merged.mount: Deactivated successfully.
Oct  1 09:11:16 np0005464214 podman[104009]: 2025-10-01 13:11:16.054953586 +0000 UTC m=+1.298732425 container remove fc803cdfa7f67426125bda5ac62eda6d15cda63e95d080bd0561d0e4fb1f98dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_northcutt, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct  1 09:11:16 np0005464214 systemd[1]: libpod-conmon-fc803cdfa7f67426125bda5ac62eda6d15cda63e95d080bd0561d0e4fb1f98dd.scope: Deactivated successfully.
Oct  1 09:11:16 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  1 09:11:16 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:11:16 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  1 09:11:16 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:11:16 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev d36144a5-4421-46cb-b1e6-efcbd3d256dc does not exist
Oct  1 09:11:16 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev 67c471be-7b80-4f27-82c8-d1e1187df4ce does not exist
Oct  1 09:11:16 np0005464214 ceph-osd[88455]: log_channel(cluster) log [DBG] : 4.16 scrub starts
Oct  1 09:11:16 np0005464214 ceph-osd[88455]: log_channel(cluster) log [DBG] : 4.16 scrub ok
Oct  1 09:11:16 np0005464214 ceph-osd[89484]: log_channel(cluster) log [DBG] : 3.10 scrub starts
Oct  1 09:11:16 np0005464214 ceph-osd[89484]: log_channel(cluster) log [DBG] : 3.10 scrub ok
Oct  1 09:11:17 np0005464214 ceph-osd[90500]: log_channel(cluster) log [DBG] : 2.10 scrub starts
Oct  1 09:11:17 np0005464214 ceph-osd[90500]: log_channel(cluster) log [DBG] : 2.10 scrub ok
Oct  1 09:11:17 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:11:17 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:11:17 np0005464214 ceph-osd[88455]: log_channel(cluster) log [DBG] : 4.17 scrub starts
Oct  1 09:11:17 np0005464214 ceph-osd[88455]: log_channel(cluster) log [DBG] : 4.17 scrub ok
Oct  1 09:11:17 np0005464214 ceph-osd[89484]: log_channel(cluster) log [DBG] : 3.13 scrub starts
Oct  1 09:11:17 np0005464214 ceph-osd[89484]: log_channel(cluster) log [DBG] : 3.13 scrub ok
Oct  1 09:11:17 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v119: 181 pgs: 181 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 213 B/s, 2 keys/s, 2 objects/s recovering
Oct  1 09:11:17 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"} v 0) v1
Oct  1 09:11:17 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"}]: dispatch
Oct  1 09:11:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 09:11:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 09:11:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 09:11:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 09:11:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 09:11:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 09:11:18 np0005464214 ceph-osd[90500]: log_channel(cluster) log [DBG] : 2.12 scrub starts
Oct  1 09:11:18 np0005464214 ceph-osd[90500]: log_channel(cluster) log [DBG] : 2.12 scrub ok
Oct  1 09:11:18 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e53 do_prune osdmap full prune enabled
Oct  1 09:11:18 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"}]: dispatch
Oct  1 09:11:18 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"}]': finished
Oct  1 09:11:18 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e54 e54: 3 total, 3 up, 3 in
Oct  1 09:11:18 np0005464214 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e54: 3 total, 3 up, 3 in
Oct  1 09:11:18 np0005464214 ceph-osd[88455]: log_channel(cluster) log [DBG] : 4.19 scrub starts
Oct  1 09:11:18 np0005464214 ceph-osd[88455]: log_channel(cluster) log [DBG] : 4.19 scrub ok
Oct  1 09:11:19 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 54 pg[6.c( v 41'2 (0'0,41'2] local-lis/les=37/39 n=1 ec=37/20 lis/c=37/37 les/c/f=39/39/0 sis=54 pruub=9.630481720s) [1] r=-1 lpr=54 pi=[37,54)/1 crt=41'2 lcod 41'1 mlcod 41'1 active pruub 92.860504150s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:11:19 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 54 pg[6.c( v 41'2 (0'0,41'2] local-lis/les=37/39 n=1 ec=37/20 lis/c=37/37 les/c/f=39/39/0 sis=54 pruub=9.630410194s) [1] r=-1 lpr=54 pi=[37,54)/1 crt=41'2 lcod 41'1 mlcod 0'0 unknown NOTIFY pruub 92.860504150s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 09:11:19 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 54 pg[6.4( v 41'6 (0'0,41'6] local-lis/les=37/39 n=4 ec=37/20 lis/c=37/37 les/c/f=39/39/0 sis=54 pruub=9.624657631s) [1] r=-1 lpr=54 pi=[37,54)/1 crt=41'6 lcod 41'5 mlcod 41'5 active pruub 92.855270386s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:11:19 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 54 pg[6.4( v 41'6 (0'0,41'6] local-lis/les=37/39 n=4 ec=37/20 lis/c=37/37 les/c/f=39/39/0 sis=54 pruub=9.624556541s) [1] r=-1 lpr=54 pi=[37,54)/1 crt=41'6 lcod 41'5 mlcod 0'0 unknown NOTIFY pruub 92.855270386s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 09:11:19 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 54 pg[6.c( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=39/39/0 sis=54) [1] r=0 lpr=54 pi=[37,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:11:19 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 54 pg[6.4( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=39/39/0 sis=54) [1] r=0 lpr=54 pi=[37,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:11:19 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e54 do_prune osdmap full prune enabled
Oct  1 09:11:19 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e55 e55: 3 total, 3 up, 3 in
Oct  1 09:11:19 np0005464214 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e55: 3 total, 3 up, 3 in
Oct  1 09:11:19 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"}]': finished
Oct  1 09:11:19 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 55 pg[6.4( v 41'6 lc 41'1 (0'0,41'6] local-lis/les=54/55 n=4 ec=37/20 lis/c=37/37 les/c/f=39/39/0 sis=54) [1] r=0 lpr=54 pi=[37,54)/1 crt=41'6 lcod 0'0 mlcod 0'0 active+degraded m=4 mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:11:19 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 55 pg[6.c( v 41'2 lc 41'1 (0'0,41'2] local-lis/les=54/55 n=1 ec=37/20 lis/c=37/37 les/c/f=39/39/0 sis=54) [1] r=0 lpr=54 pi=[37,54)/1 crt=41'2 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:11:19 np0005464214 ceph-osd[88455]: log_channel(cluster) log [DBG] : 4.1d deep-scrub starts
Oct  1 09:11:19 np0005464214 ceph-osd[88455]: log_channel(cluster) log [DBG] : 4.1d deep-scrub ok
Oct  1 09:11:19 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v122: 181 pgs: 181 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 95 B/s, 1 keys/s, 1 objects/s recovering
Oct  1 09:11:19 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"} v 0) v1
Oct  1 09:11:19 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"}]: dispatch
Oct  1 09:11:20 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e55 do_prune osdmap full prune enabled
Oct  1 09:11:20 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"}]': finished
Oct  1 09:11:20 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e56 e56: 3 total, 3 up, 3 in
Oct  1 09:11:20 np0005464214 ceph-osd[88455]: log_channel(cluster) log [DBG] : 4.1e scrub starts
Oct  1 09:11:20 np0005464214 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e56: 3 total, 3 up, 3 in
Oct  1 09:11:20 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"}]: dispatch
Oct  1 09:11:20 np0005464214 ceph-osd[88455]: log_channel(cluster) log [DBG] : 4.1e scrub ok
Oct  1 09:11:20 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e56 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:11:21 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"}]': finished
Oct  1 09:11:21 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v124: 181 pgs: 181 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 106 B/s, 1 keys/s, 1 objects/s recovering
Oct  1 09:11:21 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"} v 0) v1
Oct  1 09:11:21 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"}]: dispatch
Oct  1 09:11:22 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e56 do_prune osdmap full prune enabled
Oct  1 09:11:22 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"}]': finished
Oct  1 09:11:22 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e57 e57: 3 total, 3 up, 3 in
Oct  1 09:11:22 np0005464214 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e57: 3 total, 3 up, 3 in
Oct  1 09:11:22 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"}]: dispatch
Oct  1 09:11:22 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 56 pg[6.5( v 41'3 (0'0,41'3] local-lis/les=48/49 n=2 ec=37/20 lis/c=48/48 les/c/f=49/49/0 sis=56 pruub=10.446735382s) [0] r=-1 lpr=56 pi=[48,56)/1 crt=41'3 mlcod 41'3 active pruub 92.188621521s@ mbc={255={}}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:11:22 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 56 pg[6.d( v 41'3 (0'0,41'3] local-lis/les=48/49 n=2 ec=37/20 lis/c=48/48 les/c/f=49/49/0 sis=56 pruub=10.444850922s) [0] r=-1 lpr=56 pi=[48,56)/1 crt=41'3 mlcod 41'3 active pruub 92.187164307s@ mbc={255={}}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:11:22 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 57 pg[6.d( v 41'3 (0'0,41'3] local-lis/les=48/49 n=2 ec=37/20 lis/c=48/48 les/c/f=49/49/0 sis=56 pruub=10.444769859s) [0] r=-1 lpr=56 pi=[48,56)/1 crt=41'3 mlcod 0'0 unknown NOTIFY pruub 92.187164307s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 09:11:22 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 57 pg[6.5( v 41'3 (0'0,41'3] local-lis/les=48/49 n=2 ec=37/20 lis/c=48/48 les/c/f=49/49/0 sis=56 pruub=10.445859909s) [0] r=-1 lpr=56 pi=[48,56)/1 crt=41'3 mlcod 0'0 unknown NOTIFY pruub 92.188621521s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 09:11:22 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 57 pg[6.5( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=48/48 les/c/f=49/49/0 sis=56) [0] r=0 lpr=57 pi=[48,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:11:22 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 57 pg[6.d( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=48/48 les/c/f=49/49/0 sis=56) [0] r=0 lpr=57 pi=[48,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:11:23 np0005464214 ceph-osd[88455]: log_channel(cluster) log [DBG] : 4.1f scrub starts
Oct  1 09:11:23 np0005464214 ceph-osd[88455]: log_channel(cluster) log [DBG] : 4.1f scrub ok
Oct  1 09:11:23 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e57 do_prune osdmap full prune enabled
Oct  1 09:11:23 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"}]': finished
Oct  1 09:11:23 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e58 e58: 3 total, 3 up, 3 in
Oct  1 09:11:23 np0005464214 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e58: 3 total, 3 up, 3 in
Oct  1 09:11:23 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 58 pg[6.d( v 41'3 lc 41'1 (0'0,41'3] local-lis/les=56/58 n=2 ec=37/20 lis/c=48/48 les/c/f=49/49/0 sis=56) [0] r=0 lpr=57 pi=[48,56)/1 crt=41'3 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:11:23 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 58 pg[6.5( v 41'3 lc 41'1 (0'0,41'3] local-lis/les=56/58 n=2 ec=37/20 lis/c=48/48 les/c/f=49/49/0 sis=56) [0] r=0 lpr=57 pi=[48,56)/1 crt=41'3 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:11:23 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v127: 181 pgs: 181 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 343 B/s, 1 objects/s recovering
Oct  1 09:11:23 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"} v 0) v1
Oct  1 09:11:23 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"}]: dispatch
Oct  1 09:11:24 np0005464214 ceph-osd[89484]: log_channel(cluster) log [DBG] : 3.14 scrub starts
Oct  1 09:11:24 np0005464214 ceph-osd[89484]: log_channel(cluster) log [DBG] : 3.14 scrub ok
Oct  1 09:11:24 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e58 do_prune osdmap full prune enabled
Oct  1 09:11:24 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"}]': finished
Oct  1 09:11:24 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e59 e59: 3 total, 3 up, 3 in
Oct  1 09:11:24 np0005464214 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e59: 3 total, 3 up, 3 in
Oct  1 09:11:24 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"}]: dispatch
Oct  1 09:11:25 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"}]': finished
Oct  1 09:11:25 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e59 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:11:25 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v129: 181 pgs: 181 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 282 B/s, 0 objects/s recovering
Oct  1 09:11:25 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"} v 0) v1
Oct  1 09:11:25 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"}]: dispatch
Oct  1 09:11:26 np0005464214 ceph-osd[88455]: log_channel(cluster) log [DBG] : 6.8 scrub starts
Oct  1 09:11:26 np0005464214 ceph-osd[88455]: log_channel(cluster) log [DBG] : 6.8 scrub ok
Oct  1 09:11:26 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e59 do_prune osdmap full prune enabled
Oct  1 09:11:26 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"}]: dispatch
Oct  1 09:11:26 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"}]': finished
Oct  1 09:11:26 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e60 e60: 3 total, 3 up, 3 in
Oct  1 09:11:26 np0005464214 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e60: 3 total, 3 up, 3 in
Oct  1 09:11:27 np0005464214 ceph-osd[88455]: log_channel(cluster) log [DBG] : 3.1b scrub starts
Oct  1 09:11:27 np0005464214 ceph-osd[88455]: log_channel(cluster) log [DBG] : 3.1b scrub ok
Oct  1 09:11:27 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"}]': finished
Oct  1 09:11:27 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v131: 181 pgs: 181 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 29 B/s, 0 objects/s recovering
Oct  1 09:11:27 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"} v 0) v1
Oct  1 09:11:27 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"}]: dispatch
Oct  1 09:11:27 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 60 pg[6.8( empty local-lis/les=37/39 n=0 ec=37/20 lis/c=37/37 les/c/f=39/39/0 sis=60 pruub=8.919249535s) [2] r=-1 lpr=60 pi=[37,60)/1 crt=0'0 mlcod 0'0 active pruub 100.855567932s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:11:27 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 60 pg[6.8( empty local-lis/les=37/39 n=0 ec=37/20 lis/c=37/37 les/c/f=39/39/0 sis=60 pruub=8.919156075s) [2] r=-1 lpr=60 pi=[37,60)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 100.855567932s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 09:11:27 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 60 pg[6.8( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=39/39/0 sis=60) [2] r=0 lpr=60 pi=[37,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:11:28 np0005464214 ceph-osd[90500]: log_channel(cluster) log [DBG] : 2.14 scrub starts
Oct  1 09:11:28 np0005464214 ceph-osd[90500]: log_channel(cluster) log [DBG] : 2.14 scrub ok
Oct  1 09:11:28 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e60 do_prune osdmap full prune enabled
Oct  1 09:11:28 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"}]: dispatch
Oct  1 09:11:28 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"}]': finished
Oct  1 09:11:28 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e61 e61: 3 total, 3 up, 3 in
Oct  1 09:11:28 np0005464214 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e61: 3 total, 3 up, 3 in
Oct  1 09:11:28 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 61 pg[6.9( empty local-lis/les=48/49 n=0 ec=37/20 lis/c=48/48 les/c/f=49/49/0 sis=61 pruub=12.707841873s) [0] r=-1 lpr=61 pi=[48,61)/1 crt=0'0 mlcod 0'0 active pruub 100.188720703s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:11:28 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 61 pg[6.9( empty local-lis/les=48/49 n=0 ec=37/20 lis/c=48/48 les/c/f=49/49/0 sis=61 pruub=12.707711220s) [0] r=-1 lpr=61 pi=[48,61)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 100.188720703s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 09:11:28 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 61 pg[6.9( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=48/48 les/c/f=49/49/0 sis=61) [0] r=0 lpr=61 pi=[48,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:11:28 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 61 pg[6.8( empty local-lis/les=60/61 n=0 ec=37/20 lis/c=37/37 les/c/f=39/39/0 sis=60) [2] r=0 lpr=60 pi=[37,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:11:29 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e61 do_prune osdmap full prune enabled
Oct  1 09:11:29 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e62 e62: 3 total, 3 up, 3 in
Oct  1 09:11:29 np0005464214 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e62: 3 total, 3 up, 3 in
Oct  1 09:11:29 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"}]': finished
Oct  1 09:11:29 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 62 pg[6.9( empty local-lis/les=61/62 n=0 ec=37/20 lis/c=48/48 les/c/f=49/49/0 sis=61) [0] r=0 lpr=61 pi=[48,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:11:29 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v134: 181 pgs: 181 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 29 B/s, 0 objects/s recovering
Oct  1 09:11:29 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"} v 0) v1
Oct  1 09:11:29 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"}]: dispatch
Oct  1 09:11:30 np0005464214 ceph-osd[89484]: log_channel(cluster) log [DBG] : 3.19 scrub starts
Oct  1 09:11:30 np0005464214 ceph-osd[89484]: log_channel(cluster) log [DBG] : 3.19 scrub ok
Oct  1 09:11:30 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e62 do_prune osdmap full prune enabled
Oct  1 09:11:30 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"}]': finished
Oct  1 09:11:30 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e63 e63: 3 total, 3 up, 3 in
Oct  1 09:11:30 np0005464214 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e63: 3 total, 3 up, 3 in
Oct  1 09:11:30 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"}]: dispatch
Oct  1 09:11:30 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:11:31 np0005464214 ceph-osd[89484]: log_channel(cluster) log [DBG] : 3.1a scrub starts
Oct  1 09:11:31 np0005464214 ceph-osd[89484]: log_channel(cluster) log [DBG] : 3.1a scrub ok
Oct  1 09:11:31 np0005464214 ceph-osd[88455]: log_channel(cluster) log [DBG] : 7.1f scrub starts
Oct  1 09:11:31 np0005464214 ceph-osd[88455]: log_channel(cluster) log [DBG] : 7.1f scrub ok
Oct  1 09:11:31 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"}]': finished
Oct  1 09:11:31 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v136: 181 pgs: 181 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 29 B/s, 0 objects/s recovering
Oct  1 09:11:31 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"} v 0) v1
Oct  1 09:11:31 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"}]: dispatch
Oct  1 09:11:31 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 63 pg[6.a( v 41'1 (0'0,41'1] local-lis/les=50/51 n=0 ec=37/20 lis/c=50/50 les/c/f=51/51/0 sis=63 pruub=11.162081718s) [0] r=-1 lpr=63 pi=[50,63)/1 crt=41'1 lcod 0'0 mlcod 0'0 active pruub 102.199279785s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:11:31 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 63 pg[6.a( v 41'1 (0'0,41'1] local-lis/les=50/51 n=0 ec=37/20 lis/c=50/50 les/c/f=51/51/0 sis=63 pruub=11.161931992s) [0] r=-1 lpr=63 pi=[50,63)/1 crt=41'1 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 102.199279785s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 09:11:31 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 63 pg[6.a( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=50/50 les/c/f=51/51/0 sis=63) [0] r=0 lpr=63 pi=[50,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:11:32 np0005464214 ceph-osd[90500]: log_channel(cluster) log [DBG] : 2.1a scrub starts
Oct  1 09:11:32 np0005464214 ceph-osd[90500]: log_channel(cluster) log [DBG] : 2.1a scrub ok
Oct  1 09:11:32 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e63 do_prune osdmap full prune enabled
Oct  1 09:11:32 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"}]: dispatch
Oct  1 09:11:32 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"}]': finished
Oct  1 09:11:32 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e64 e64: 3 total, 3 up, 3 in
Oct  1 09:11:32 np0005464214 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e64: 3 total, 3 up, 3 in
Oct  1 09:11:32 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 64 pg[6.b( v 41'3 (0'0,41'3] local-lis/les=52/53 n=1 ec=37/20 lis/c=52/52 les/c/f=53/53/0 sis=64 pruub=12.700857162s) [1] r=-1 lpr=64 pi=[52,64)/1 crt=41'3 mlcod 41'3 active pruub 109.230659485s@ mbc={255={}}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:11:32 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 64 pg[6.b( v 41'3 (0'0,41'3] local-lis/les=52/53 n=1 ec=37/20 lis/c=52/52 les/c/f=53/53/0 sis=64 pruub=12.700545311s) [1] r=-1 lpr=64 pi=[52,64)/1 crt=41'3 mlcod 0'0 unknown NOTIFY pruub 109.230659485s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 09:11:32 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 64 pg[6.a( v 41'1 (0'0,41'1] local-lis/les=63/64 n=0 ec=37/20 lis/c=50/50 les/c/f=51/51/0 sis=63) [0] r=0 lpr=63 pi=[50,63)/1 crt=41'1 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:11:32 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 64 pg[6.b( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=52/52 les/c/f=53/53/0 sis=64) [1] r=0 lpr=64 pi=[52,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:11:33 np0005464214 ceph-osd[90500]: log_channel(cluster) log [DBG] : 2.1e scrub starts
Oct  1 09:11:33 np0005464214 ceph-osd[90500]: log_channel(cluster) log [DBG] : 2.1e scrub ok
Oct  1 09:11:33 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e64 do_prune osdmap full prune enabled
Oct  1 09:11:33 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"}]': finished
Oct  1 09:11:33 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e65 e65: 3 total, 3 up, 3 in
Oct  1 09:11:33 np0005464214 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e65: 3 total, 3 up, 3 in
Oct  1 09:11:33 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 65 pg[6.b( v 41'3 lc 0'0 (0'0,41'3] local-lis/les=64/65 n=1 ec=37/20 lis/c=52/52 les/c/f=53/53/0 sis=64) [1] r=0 lpr=64 pi=[52,64)/1 crt=41'3 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:11:33 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v139: 181 pgs: 181 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:11:33 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"} v 0) v1
Oct  1 09:11:33 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"}]: dispatch
Oct  1 09:11:34 np0005464214 ceph-osd[89484]: log_channel(cluster) log [DBG] : 3.1c deep-scrub starts
Oct  1 09:11:34 np0005464214 ceph-osd[89484]: log_channel(cluster) log [DBG] : 3.1c deep-scrub ok
Oct  1 09:11:34 np0005464214 ceph-osd[88455]: log_channel(cluster) log [DBG] : 3.f scrub starts
Oct  1 09:11:34 np0005464214 ceph-osd[88455]: log_channel(cluster) log [DBG] : 3.f scrub ok
Oct  1 09:11:34 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e65 do_prune osdmap full prune enabled
Oct  1 09:11:34 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"}]': finished
Oct  1 09:11:34 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e66 e66: 3 total, 3 up, 3 in
Oct  1 09:11:34 np0005464214 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e66: 3 total, 3 up, 3 in
Oct  1 09:11:34 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"}]: dispatch
Oct  1 09:11:35 np0005464214 ceph-osd[89484]: log_channel(cluster) log [DBG] : 7.7 scrub starts
Oct  1 09:11:35 np0005464214 ceph-osd[89484]: log_channel(cluster) log [DBG] : 7.7 scrub ok
Oct  1 09:11:35 np0005464214 ceph-osd[88455]: log_channel(cluster) log [DBG] : 7.4 scrub starts
Oct  1 09:11:35 np0005464214 ceph-osd[88455]: log_channel(cluster) log [DBG] : 7.4 scrub ok
Oct  1 09:11:35 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e66 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:11:35 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"}]': finished
Oct  1 09:11:35 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v141: 181 pgs: 181 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:11:35 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"} v 0) v1
Oct  1 09:11:35 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"}]: dispatch
Oct  1 09:11:36 np0005464214 ceph-osd[90500]: log_channel(cluster) log [DBG] : 5.6 scrub starts
Oct  1 09:11:36 np0005464214 ceph-osd[90500]: log_channel(cluster) log [DBG] : 5.6 scrub ok
Oct  1 09:11:36 np0005464214 ceph-osd[88455]: log_channel(cluster) log [DBG] : 3.c scrub starts
Oct  1 09:11:36 np0005464214 ceph-osd[88455]: log_channel(cluster) log [DBG] : 3.c scrub ok
Oct  1 09:11:36 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e66 do_prune osdmap full prune enabled
Oct  1 09:11:36 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"}]: dispatch
Oct  1 09:11:36 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"}]': finished
Oct  1 09:11:36 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e67 e67: 3 total, 3 up, 3 in
Oct  1 09:11:36 np0005464214 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e67: 3 total, 3 up, 3 in
Oct  1 09:11:37 np0005464214 ceph-osd[90500]: log_channel(cluster) log [DBG] : 5.8 scrub starts
Oct  1 09:11:37 np0005464214 ceph-osd[90500]: log_channel(cluster) log [DBG] : 5.8 scrub ok
Oct  1 09:11:37 np0005464214 ceph-osd[89484]: log_channel(cluster) log [DBG] : 7.b scrub starts
Oct  1 09:11:37 np0005464214 ceph-osd[89484]: log_channel(cluster) log [DBG] : 7.b scrub ok
Oct  1 09:11:37 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"}]': finished
Oct  1 09:11:37 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v143: 181 pgs: 181 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 0 objects/s recovering
Oct  1 09:11:37 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"} v 0) v1
Oct  1 09:11:37 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"}]: dispatch
Oct  1 09:11:37 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 67 pg[6.d( v 41'3 (0'0,41'3] local-lis/les=56/58 n=2 ec=37/20 lis/c=56/56 les/c/f=58/58/0 sis=67 pruub=9.383437157s) [1] r=-1 lpr=67 pi=[56,67)/1 crt=41'3 mlcod 41'3 active pruub 111.416503906s@ mbc={255={}}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:11:37 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 67 pg[6.d( v 41'3 (0'0,41'3] local-lis/les=56/58 n=2 ec=37/20 lis/c=56/56 les/c/f=58/58/0 sis=67 pruub=9.383358002s) [1] r=-1 lpr=67 pi=[56,67)/1 crt=41'3 mlcod 0'0 unknown NOTIFY pruub 111.416503906s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 09:11:37 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 67 pg[6.d( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=56/56 les/c/f=58/58/0 sis=67) [1] r=0 lpr=67 pi=[56,67)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:11:38 np0005464214 ceph-osd[90500]: log_channel(cluster) log [DBG] : 5.a deep-scrub starts
Oct  1 09:11:38 np0005464214 ceph-osd[90500]: log_channel(cluster) log [DBG] : 5.a deep-scrub ok
Oct  1 09:11:38 np0005464214 ceph-osd[89484]: log_channel(cluster) log [DBG] : 7.d scrub starts
Oct  1 09:11:38 np0005464214 ceph-osd[89484]: log_channel(cluster) log [DBG] : 7.d scrub ok
Oct  1 09:11:38 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e67 do_prune osdmap full prune enabled
Oct  1 09:11:38 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"}]: dispatch
Oct  1 09:11:38 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"}]': finished
Oct  1 09:11:38 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e68 e68: 3 total, 3 up, 3 in
Oct  1 09:11:38 np0005464214 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e68: 3 total, 3 up, 3 in
Oct  1 09:11:38 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 68 pg[6.d( v 41'3 lc 41'1 (0'0,41'3] local-lis/les=67/68 n=2 ec=37/20 lis/c=56/56 les/c/f=58/58/0 sis=67) [1] r=0 lpr=67 pi=[56,67)/1 crt=41'3 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:11:39 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"}]': finished
Oct  1 09:11:39 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v145: 181 pgs: 181 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 0 objects/s recovering
Oct  1 09:11:39 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"} v 0) v1
Oct  1 09:11:39 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"}]: dispatch
Oct  1 09:11:40 np0005464214 ceph-osd[90500]: log_channel(cluster) log [DBG] : 5.b scrub starts
Oct  1 09:11:40 np0005464214 ceph-osd[90500]: log_channel(cluster) log [DBG] : 5.b scrub ok
Oct  1 09:11:40 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:11:40 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e68 do_prune osdmap full prune enabled
Oct  1 09:11:40 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"}]: dispatch
Oct  1 09:11:40 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"}]': finished
Oct  1 09:11:40 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e69 e69: 3 total, 3 up, 3 in
Oct  1 09:11:40 np0005464214 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e69: 3 total, 3 up, 3 in
Oct  1 09:11:40 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 69 pg[6.f( v 41'5 (0'0,41'5] local-lis/les=52/53 n=3 ec=37/20 lis/c=52/52 les/c/f=53/53/0 sis=69 pruub=12.476483345s) [2] r=-1 lpr=69 pi=[52,69)/1 crt=41'5 mlcod 41'5 active pruub 117.227127075s@ mbc={255={}}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:11:40 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 69 pg[6.f( v 41'5 (0'0,41'5] local-lis/les=52/53 n=3 ec=37/20 lis/c=52/52 les/c/f=53/53/0 sis=69 pruub=12.476371765s) [2] r=-1 lpr=69 pi=[52,69)/1 crt=41'5 mlcod 0'0 unknown NOTIFY pruub 117.227127075s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 09:11:40 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 69 pg[6.f( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=52/52 les/c/f=53/53/0 sis=69) [2] r=0 lpr=69 pi=[52,69)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:11:41 np0005464214 ceph-osd[88455]: log_channel(cluster) log [DBG] : 5.1e scrub starts
Oct  1 09:11:41 np0005464214 ceph-osd[88455]: log_channel(cluster) log [DBG] : 5.1e scrub ok
Oct  1 09:11:41 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e69 do_prune osdmap full prune enabled
Oct  1 09:11:41 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"}]': finished
Oct  1 09:11:41 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e70 e70: 3 total, 3 up, 3 in
Oct  1 09:11:41 np0005464214 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e70: 3 total, 3 up, 3 in
Oct  1 09:11:41 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 70 pg[6.f( v 41'5 lc 41'1 (0'0,41'5] local-lis/les=69/70 n=3 ec=37/20 lis/c=52/52 les/c/f=53/53/0 sis=69) [2] r=0 lpr=69 pi=[52,69)/1 crt=41'5 lcod 0'0 mlcod 0'0 active+degraded m=3 mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:11:41 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v148: 181 pgs: 181 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 0 objects/s recovering
Oct  1 09:11:42 np0005464214 ceph-osd[90500]: log_channel(cluster) log [DBG] : 5.d scrub starts
Oct  1 09:11:42 np0005464214 ceph-osd[90500]: log_channel(cluster) log [DBG] : 5.d scrub ok
Oct  1 09:11:43 np0005464214 ceph-osd[89484]: log_channel(cluster) log [DBG] : 7.10 scrub starts
Oct  1 09:11:43 np0005464214 ceph-osd[88455]: log_channel(cluster) log [DBG] : 2.19 scrub starts
Oct  1 09:11:43 np0005464214 ceph-osd[89484]: log_channel(cluster) log [DBG] : 7.10 scrub ok
Oct  1 09:11:43 np0005464214 ceph-osd[88455]: log_channel(cluster) log [DBG] : 2.19 scrub ok
Oct  1 09:11:43 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v149: 181 pgs: 181 active+clean; 456 KiB data, 86 MiB used, 60 GiB / 60 GiB avail; 112 B/s, 0 objects/s recovering
Oct  1 09:11:44 np0005464214 ceph-osd[88455]: log_channel(cluster) log [DBG] : 2.16 scrub starts
Oct  1 09:11:44 np0005464214 ceph-osd[88455]: log_channel(cluster) log [DBG] : 2.16 scrub ok
Oct  1 09:11:45 np0005464214 ceph-osd[90500]: log_channel(cluster) log [DBG] : 5.e scrub starts
Oct  1 09:11:45 np0005464214 ceph-osd[90500]: log_channel(cluster) log [DBG] : 5.e scrub ok
Oct  1 09:11:45 np0005464214 ceph-osd[89484]: log_channel(cluster) log [DBG] : 7.12 scrub starts
Oct  1 09:11:45 np0005464214 ceph-osd[89484]: log_channel(cluster) log [DBG] : 7.12 scrub ok
Oct  1 09:11:45 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e70 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:11:45 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v150: 181 pgs: 181 active+clean; 456 KiB data, 86 MiB used, 60 GiB / 60 GiB avail; 93 B/s, 0 objects/s recovering
Oct  1 09:11:46 np0005464214 ceph-osd[88455]: log_channel(cluster) log [DBG] : 2.18 scrub starts
Oct  1 09:11:46 np0005464214 ceph-osd[88455]: log_channel(cluster) log [DBG] : 2.18 scrub ok
Oct  1 09:11:47 np0005464214 ceph-osd[89484]: log_channel(cluster) log [DBG] : 7.14 scrub starts
Oct  1 09:11:47 np0005464214 ceph-osd[89484]: log_channel(cluster) log [DBG] : 7.14 scrub ok
Oct  1 09:11:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] Optimize plan auto_2025-10-01_13:11:47
Oct  1 09:11:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  1 09:11:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] do_upmap
Oct  1 09:11:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] pools ['images', 'volumes', 'default.rgw.meta', 'cephfs.cephfs.meta', 'vms', 'backups', 'default.rgw.control', '.rgw.root', 'cephfs.cephfs.data', 'default.rgw.log', '.mgr']
Oct  1 09:11:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] prepared 0/10 changes
Oct  1 09:11:47 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v151: 181 pgs: 181 active+clean; 456 KiB data, 86 MiB used, 60 GiB / 60 GiB avail; 84 B/s, 0 objects/s recovering
Oct  1 09:11:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 09:11:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 09:11:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 09:11:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 09:11:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 09:11:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 09:11:47 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  1 09:11:47 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  1 09:11:47 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  1 09:11:47 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  1 09:11:47 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  1 09:11:47 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  1 09:11:47 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  1 09:11:47 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  1 09:11:47 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  1 09:11:47 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  1 09:11:49 np0005464214 ceph-osd[88455]: log_channel(cluster) log [DBG] : 2.13 deep-scrub starts
Oct  1 09:11:49 np0005464214 ceph-osd[88455]: log_channel(cluster) log [DBG] : 2.13 deep-scrub ok
Oct  1 09:11:49 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v152: 181 pgs: 181 active+clean; 456 KiB data, 86 MiB used, 60 GiB / 60 GiB avail; 73 B/s, 0 objects/s recovering
Oct  1 09:11:50 np0005464214 python3[104150]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint radosgw-admin quay.io/ceph/ceph:v18 --fsid eb4b6ead-01d1-53b3-a52a-47dcc600555f -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   user info --uid openstack _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  1 09:11:50 np0005464214 podman[104151]: 2025-10-01 13:11:50.369635911 +0000 UTC m=+0.057176630 container create bd996ac397002cd68b94d36255efc51d38ba555e6825a37328f3966a3a3566f1 (image=quay.io/ceph/ceph:v18, name=dazzling_austin, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Oct  1 09:11:50 np0005464214 systemd[1]: Started libpod-conmon-bd996ac397002cd68b94d36255efc51d38ba555e6825a37328f3966a3a3566f1.scope.
Oct  1 09:11:50 np0005464214 podman[104151]: 2025-10-01 13:11:50.339870339 +0000 UTC m=+0.027411108 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  1 09:11:50 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e70 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:11:50 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:11:50 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/81df86771aff73a1b18386ebbbeae98e3b99d453f1edfde32f853c1e770c7a21/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 09:11:50 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/81df86771aff73a1b18386ebbbeae98e3b99d453f1edfde32f853c1e770c7a21/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 09:11:50 np0005464214 podman[104151]: 2025-10-01 13:11:50.479518506 +0000 UTC m=+0.167059245 container init bd996ac397002cd68b94d36255efc51d38ba555e6825a37328f3966a3a3566f1 (image=quay.io/ceph/ceph:v18, name=dazzling_austin, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct  1 09:11:50 np0005464214 podman[104151]: 2025-10-01 13:11:50.48994439 +0000 UTC m=+0.177485069 container start bd996ac397002cd68b94d36255efc51d38ba555e6825a37328f3966a3a3566f1 (image=quay.io/ceph/ceph:v18, name=dazzling_austin, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3)
Oct  1 09:11:50 np0005464214 podman[104151]: 2025-10-01 13:11:50.49339405 +0000 UTC m=+0.180934809 container attach bd996ac397002cd68b94d36255efc51d38ba555e6825a37328f3966a3a3566f1 (image=quay.io/ceph/ceph:v18, name=dazzling_austin, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct  1 09:11:50 np0005464214 dazzling_austin[104166]: could not fetch user info: no user info saved
Oct  1 09:11:50 np0005464214 systemd[1]: libpod-bd996ac397002cd68b94d36255efc51d38ba555e6825a37328f3966a3a3566f1.scope: Deactivated successfully.
Oct  1 09:11:50 np0005464214 podman[104251]: 2025-10-01 13:11:50.780181375 +0000 UTC m=+0.031817639 container died bd996ac397002cd68b94d36255efc51d38ba555e6825a37328f3966a3a3566f1 (image=quay.io/ceph/ceph:v18, name=dazzling_austin, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct  1 09:11:50 np0005464214 systemd[1]: var-lib-containers-storage-overlay-81df86771aff73a1b18386ebbbeae98e3b99d453f1edfde32f853c1e770c7a21-merged.mount: Deactivated successfully.
Oct  1 09:11:50 np0005464214 podman[104251]: 2025-10-01 13:11:50.833429609 +0000 UTC m=+0.085065823 container remove bd996ac397002cd68b94d36255efc51d38ba555e6825a37328f3966a3a3566f1 (image=quay.io/ceph/ceph:v18, name=dazzling_austin, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct  1 09:11:50 np0005464214 systemd[1]: libpod-conmon-bd996ac397002cd68b94d36255efc51d38ba555e6825a37328f3966a3a3566f1.scope: Deactivated successfully.
Oct  1 09:11:51 np0005464214 python3[104291]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint radosgw-admin quay.io/ceph/ceph:v18 --fsid eb4b6ead-01d1-53b3-a52a-47dcc600555f -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   user create --uid="openstack" --display-name "openstack" _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  1 09:11:51 np0005464214 ceph-osd[88455]: log_channel(cluster) log [DBG] : 5.15 deep-scrub starts
Oct  1 09:11:51 np0005464214 ceph-osd[88455]: log_channel(cluster) log [DBG] : 5.15 deep-scrub ok
Oct  1 09:11:51 np0005464214 podman[104292]: 2025-10-01 13:11:51.318350982 +0000 UTC m=+0.043895756 container create 1e9bf876342b76ee8ea7fee6c02f3e84d1aee168cc451a5682359721d5be7d69 (image=quay.io/ceph/ceph:v18, name=beautiful_saha, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 09:11:51 np0005464214 systemd[1]: Started libpod-conmon-1e9bf876342b76ee8ea7fee6c02f3e84d1aee168cc451a5682359721d5be7d69.scope.
Oct  1 09:11:51 np0005464214 podman[104292]: 2025-10-01 13:11:51.298925381 +0000 UTC m=+0.024470245 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  1 09:11:51 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:11:51 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d400193d2f521256c718bd34fc038e2731419a2761b24e6fa11a42489f799b4c/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 09:11:51 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d400193d2f521256c718bd34fc038e2731419a2761b24e6fa11a42489f799b4c/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 09:11:51 np0005464214 podman[104292]: 2025-10-01 13:11:51.446538773 +0000 UTC m=+0.172083597 container init 1e9bf876342b76ee8ea7fee6c02f3e84d1aee168cc451a5682359721d5be7d69 (image=quay.io/ceph/ceph:v18, name=beautiful_saha, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct  1 09:11:51 np0005464214 podman[104292]: 2025-10-01 13:11:51.454309771 +0000 UTC m=+0.179854555 container start 1e9bf876342b76ee8ea7fee6c02f3e84d1aee168cc451a5682359721d5be7d69 (image=quay.io/ceph/ceph:v18, name=beautiful_saha, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 09:11:51 np0005464214 podman[104292]: 2025-10-01 13:11:51.457797673 +0000 UTC m=+0.183342537 container attach 1e9bf876342b76ee8ea7fee6c02f3e84d1aee168cc451a5682359721d5be7d69 (image=quay.io/ceph/ceph:v18, name=beautiful_saha, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct  1 09:11:51 np0005464214 beautiful_saha[104306]: {
Oct  1 09:11:51 np0005464214 beautiful_saha[104306]:    "user_id": "openstack",
Oct  1 09:11:51 np0005464214 beautiful_saha[104306]:    "display_name": "openstack",
Oct  1 09:11:51 np0005464214 beautiful_saha[104306]:    "email": "",
Oct  1 09:11:51 np0005464214 beautiful_saha[104306]:    "suspended": 0,
Oct  1 09:11:51 np0005464214 beautiful_saha[104306]:    "max_buckets": 1000,
Oct  1 09:11:51 np0005464214 beautiful_saha[104306]:    "subusers": [],
Oct  1 09:11:51 np0005464214 beautiful_saha[104306]:    "keys": [
Oct  1 09:11:51 np0005464214 beautiful_saha[104306]:        {
Oct  1 09:11:51 np0005464214 beautiful_saha[104306]:            "user": "openstack",
Oct  1 09:11:51 np0005464214 beautiful_saha[104306]:            "access_key": "9YAP2ZLHAPGVIL9ZU6WF",
Oct  1 09:11:51 np0005464214 beautiful_saha[104306]:            "secret_key": "rF7sUO0A5DaWbo1mPhff1hc6i3JP5EljOueYTCnc"
Oct  1 09:11:51 np0005464214 beautiful_saha[104306]:        }
Oct  1 09:11:51 np0005464214 beautiful_saha[104306]:    ],
Oct  1 09:11:51 np0005464214 beautiful_saha[104306]:    "swift_keys": [],
Oct  1 09:11:51 np0005464214 beautiful_saha[104306]:    "caps": [],
Oct  1 09:11:51 np0005464214 beautiful_saha[104306]:    "op_mask": "read, write, delete",
Oct  1 09:11:51 np0005464214 beautiful_saha[104306]:    "default_placement": "",
Oct  1 09:11:51 np0005464214 beautiful_saha[104306]:    "default_storage_class": "",
Oct  1 09:11:51 np0005464214 beautiful_saha[104306]:    "placement_tags": [],
Oct  1 09:11:51 np0005464214 beautiful_saha[104306]:    "bucket_quota": {
Oct  1 09:11:51 np0005464214 beautiful_saha[104306]:        "enabled": false,
Oct  1 09:11:51 np0005464214 beautiful_saha[104306]:        "check_on_raw": false,
Oct  1 09:11:51 np0005464214 beautiful_saha[104306]:        "max_size": -1,
Oct  1 09:11:51 np0005464214 beautiful_saha[104306]:        "max_size_kb": 0,
Oct  1 09:11:51 np0005464214 beautiful_saha[104306]:        "max_objects": -1
Oct  1 09:11:51 np0005464214 beautiful_saha[104306]:    },
Oct  1 09:11:51 np0005464214 beautiful_saha[104306]:    "user_quota": {
Oct  1 09:11:51 np0005464214 beautiful_saha[104306]:        "enabled": false,
Oct  1 09:11:51 np0005464214 beautiful_saha[104306]:        "check_on_raw": false,
Oct  1 09:11:51 np0005464214 beautiful_saha[104306]:        "max_size": -1,
Oct  1 09:11:51 np0005464214 beautiful_saha[104306]:        "max_size_kb": 0,
Oct  1 09:11:51 np0005464214 beautiful_saha[104306]:        "max_objects": -1
Oct  1 09:11:51 np0005464214 beautiful_saha[104306]:    },
Oct  1 09:11:51 np0005464214 beautiful_saha[104306]:    "temp_url_keys": [],
Oct  1 09:11:51 np0005464214 beautiful_saha[104306]:    "type": "rgw",
Oct  1 09:11:51 np0005464214 beautiful_saha[104306]:    "mfa_ids": []
Oct  1 09:11:51 np0005464214 beautiful_saha[104306]: }
Oct  1 09:11:51 np0005464214 beautiful_saha[104306]: 
Oct  1 09:11:51 np0005464214 systemd[1]: libpod-1e9bf876342b76ee8ea7fee6c02f3e84d1aee168cc451a5682359721d5be7d69.scope: Deactivated successfully.
Oct  1 09:11:51 np0005464214 podman[104391]: 2025-10-01 13:11:51.741026574 +0000 UTC m=+0.041666964 container died 1e9bf876342b76ee8ea7fee6c02f3e84d1aee168cc451a5682359721d5be7d69 (image=quay.io/ceph/ceph:v18, name=beautiful_saha, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct  1 09:11:51 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v153: 181 pgs: 181 active+clean; 456 KiB data, 86 MiB used, 60 GiB / 60 GiB avail; 66 B/s, 0 objects/s recovering
Oct  1 09:11:51 np0005464214 systemd[1]: var-lib-containers-storage-overlay-d400193d2f521256c718bd34fc038e2731419a2761b24e6fa11a42489f799b4c-merged.mount: Deactivated successfully.
Oct  1 09:11:51 np0005464214 podman[104391]: 2025-10-01 13:11:51.802001645 +0000 UTC m=+0.102642025 container remove 1e9bf876342b76ee8ea7fee6c02f3e84d1aee168cc451a5682359721d5be7d69 (image=quay.io/ceph/ceph:v18, name=beautiful_saha, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Oct  1 09:11:51 np0005464214 systemd[1]: libpod-conmon-1e9bf876342b76ee8ea7fee6c02f3e84d1aee168cc451a5682359721d5be7d69.scope: Deactivated successfully.
Oct  1 09:11:52 np0005464214 ceph-osd[88455]: log_channel(cluster) log [DBG] : 5.14 scrub starts
Oct  1 09:11:52 np0005464214 ceph-osd[88455]: log_channel(cluster) log [DBG] : 5.14 scrub ok
Oct  1 09:11:53 np0005464214 ceph-osd[89484]: log_channel(cluster) log [DBG] : 7.16 scrub starts
Oct  1 09:11:53 np0005464214 ceph-osd[89484]: log_channel(cluster) log [DBG] : 7.16 scrub ok
Oct  1 09:11:53 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] _maybe_adjust
Oct  1 09:11:53 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:11:53 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  1 09:11:53 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:11:53 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 09:11:53 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:11:53 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 09:11:53 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:11:53 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 09:11:53 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:11:53 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 09:11:53 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:11:53 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct  1 09:11:53 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:11:53 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 09:11:53 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:11:53 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 1)
Oct  1 09:11:53 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:11:53 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 1)
Oct  1 09:11:53 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:11:53 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Oct  1 09:11:53 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:11:53 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 0.0 of space, bias 4.0, pg target 0.0 quantized to 32 (current 1)
Oct  1 09:11:53 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"} v 0) v1
Oct  1 09:11:53 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]: dispatch
Oct  1 09:11:53 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v154: 181 pgs: 181 active+clean; 456 KiB data, 86 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 170 B/s wr, 1 op/s; 56 B/s, 0 objects/s recovering
Oct  1 09:11:53 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e70 do_prune osdmap full prune enabled
Oct  1 09:11:53 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]: dispatch
Oct  1 09:11:53 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]': finished
Oct  1 09:11:53 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e71 e71: 3 total, 3 up, 3 in
Oct  1 09:11:53 np0005464214 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e71: 3 total, 3 up, 3 in
Oct  1 09:11:53 np0005464214 ceph-mgr[75103]: [progress INFO root] update: starting ev c7c0e5d0-af8b-4a47-b75c-2afe630deb55 (PG autoscaler increasing pool 8 PGs from 1 to 32)
Oct  1 09:11:53 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"} v 0) v1
Oct  1 09:11:53 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]: dispatch
Oct  1 09:11:54 np0005464214 ceph-osd[89484]: log_channel(cluster) log [DBG] : 7.17 scrub starts
Oct  1 09:11:54 np0005464214 ceph-osd[89484]: log_channel(cluster) log [DBG] : 7.17 scrub ok
Oct  1 09:11:54 np0005464214 ceph-osd[88455]: log_channel(cluster) log [DBG] : 2.f scrub starts
Oct  1 09:11:54 np0005464214 ceph-osd[88455]: log_channel(cluster) log [DBG] : 2.f scrub ok
Oct  1 09:11:54 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e71 do_prune osdmap full prune enabled
Oct  1 09:11:54 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]': finished
Oct  1 09:11:54 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]: dispatch
Oct  1 09:11:54 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]': finished
Oct  1 09:11:54 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e72 e72: 3 total, 3 up, 3 in
Oct  1 09:11:54 np0005464214 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e72: 3 total, 3 up, 3 in
Oct  1 09:11:54 np0005464214 ceph-mgr[75103]: [progress INFO root] update: starting ev c03bb0ae-eb08-41a5-b304-87f964af89ac (PG autoscaler increasing pool 9 PGs from 1 to 32)
Oct  1 09:11:54 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"} v 0) v1
Oct  1 09:11:54 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]: dispatch
Oct  1 09:11:55 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e72 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:11:55 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v157: 181 pgs: 181 active+clean; 456 KiB data, 86 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 255 B/s wr, 2 op/s
Oct  1 09:11:55 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"} v 0) v1
Oct  1 09:11:55 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct  1 09:11:55 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"} v 0) v1
Oct  1 09:11:55 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct  1 09:11:55 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e72 do_prune osdmap full prune enabled
Oct  1 09:11:55 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]': finished
Oct  1 09:11:55 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]': finished
Oct  1 09:11:55 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]': finished
Oct  1 09:11:55 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e73 e73: 3 total, 3 up, 3 in
Oct  1 09:11:55 np0005464214 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e73: 3 total, 3 up, 3 in
Oct  1 09:11:55 np0005464214 ceph-mgr[75103]: [progress INFO root] update: starting ev 07824996-d13c-4845-926d-a95fdc21b6a1 (PG autoscaler increasing pool 10 PGs from 1 to 32)
Oct  1 09:11:55 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]': finished
Oct  1 09:11:55 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]: dispatch
Oct  1 09:11:55 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct  1 09:11:55 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct  1 09:11:55 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"} v 0) v1
Oct  1 09:11:55 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]: dispatch
Oct  1 09:11:56 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e73 do_prune osdmap full prune enabled
Oct  1 09:11:56 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]': finished
Oct  1 09:11:56 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]': finished
Oct  1 09:11:56 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]': finished
Oct  1 09:11:56 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]: dispatch
Oct  1 09:11:56 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]': finished
Oct  1 09:11:56 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e74 e74: 3 total, 3 up, 3 in
Oct  1 09:11:56 np0005464214 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e74: 3 total, 3 up, 3 in
Oct  1 09:11:56 np0005464214 ceph-mgr[75103]: [progress INFO root] update: starting ev df1830a6-f900-422d-bce0-e21f9b42868d (PG autoscaler increasing pool 11 PGs from 1 to 32)
Oct  1 09:11:56 np0005464214 ceph-mgr[75103]: [progress INFO root] complete: finished ev c7c0e5d0-af8b-4a47-b75c-2afe630deb55 (PG autoscaler increasing pool 8 PGs from 1 to 32)
Oct  1 09:11:56 np0005464214 ceph-mgr[75103]: [progress INFO root] Completed event c7c0e5d0-af8b-4a47-b75c-2afe630deb55 (PG autoscaler increasing pool 8 PGs from 1 to 32) in 3 seconds
Oct  1 09:11:56 np0005464214 ceph-mgr[75103]: [progress INFO root] complete: finished ev c03bb0ae-eb08-41a5-b304-87f964af89ac (PG autoscaler increasing pool 9 PGs from 1 to 32)
Oct  1 09:11:56 np0005464214 ceph-mgr[75103]: [progress INFO root] Completed event c03bb0ae-eb08-41a5-b304-87f964af89ac (PG autoscaler increasing pool 9 PGs from 1 to 32) in 2 seconds
Oct  1 09:11:56 np0005464214 ceph-mgr[75103]: [progress INFO root] complete: finished ev 07824996-d13c-4845-926d-a95fdc21b6a1 (PG autoscaler increasing pool 10 PGs from 1 to 32)
Oct  1 09:11:56 np0005464214 ceph-mgr[75103]: [progress INFO root] Completed event 07824996-d13c-4845-926d-a95fdc21b6a1 (PG autoscaler increasing pool 10 PGs from 1 to 32) in 1 seconds
Oct  1 09:11:56 np0005464214 ceph-mgr[75103]: [progress INFO root] complete: finished ev df1830a6-f900-422d-bce0-e21f9b42868d (PG autoscaler increasing pool 11 PGs from 1 to 32)
Oct  1 09:11:56 np0005464214 ceph-mgr[75103]: [progress INFO root] Completed event df1830a6-f900-422d-bce0-e21f9b42868d (PG autoscaler increasing pool 11 PGs from 1 to 32) in 0 seconds
Oct  1 09:11:57 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 73 pg[9.0( v 70'389 (0'0,70'389] local-lis/les=41/42 n=177 ec=41/41 lis/c=41/41 les/c/f=42/42/0 sis=73 pruub=14.469636917s) [1] r=0 lpr=73 pi=[41,73)/1 crt=70'389 lcod 70'388 mlcod 70'388 active pruub 130.869049072s@ mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:11:57 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 73 pg[8.0( v 40'4 (0'0,40'4] local-lis/les=39/40 n=4 ec=39/39 lis/c=39/39 les/c/f=40/40/0 sis=73 pruub=12.452169418s) [1] r=0 lpr=73 pi=[39,73)/1 crt=40'4 lcod 40'3 mlcod 40'3 active pruub 128.851531982s@ mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:11:57 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 74 pg[8.0( v 40'4 lc 0'0 (0'0,40'4] local-lis/les=39/40 n=0 ec=39/39 lis/c=39/39 les/c/f=40/40/0 sis=73 pruub=12.452169418s) [1] r=0 lpr=73 pi=[39,73)/1 crt=40'4 lcod 40'3 mlcod 0'0 unknown pruub 128.851531982s@ mbc={}] state<Start>: transitioning to Primary
Oct  1 09:11:57 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 74 pg[9.0( v 70'389 lc 0'0 (0'0,70'389] local-lis/les=41/42 n=5 ec=41/41 lis/c=41/41 les/c/f=42/42/0 sis=73 pruub=14.469636917s) [1] r=0 lpr=73 pi=[41,73)/1 crt=70'389 lcod 70'388 mlcod 0'0 unknown pruub 130.869049072s@ mbc={}] state<Start>: transitioning to Primary
Oct  1 09:11:57 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 74 pg[9.9( v 70'389 lc 0'0 (0'0,70'389] local-lis/les=41/42 n=6 ec=73/41 lis/c=41/41 les/c/f=42/42/0 sis=73) [1] r=0 lpr=73 pi=[41,73)/1 crt=70'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:11:57 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 74 pg[9.7( v 70'389 lc 0'0 (0'0,70'389] local-lis/les=41/42 n=6 ec=73/41 lis/c=41/41 les/c/f=42/42/0 sis=73) [1] r=0 lpr=73 pi=[41,73)/1 crt=70'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:11:57 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 74 pg[9.5( v 70'389 lc 0'0 (0'0,70'389] local-lis/les=41/42 n=6 ec=73/41 lis/c=41/41 les/c/f=42/42/0 sis=73) [1] r=0 lpr=73 pi=[41,73)/1 crt=70'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:11:57 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 74 pg[9.17( v 70'389 lc 0'0 (0'0,70'389] local-lis/les=41/42 n=5 ec=73/41 lis/c=41/41 les/c/f=42/42/0 sis=73) [1] r=0 lpr=73 pi=[41,73)/1 crt=70'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:11:57 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 74 pg[9.8( v 70'389 lc 0'0 (0'0,70'389] local-lis/les=41/42 n=6 ec=73/41 lis/c=41/41 les/c/f=42/42/0 sis=73) [1] r=0 lpr=73 pi=[41,73)/1 crt=70'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:11:57 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 74 pg[9.1( v 70'389 lc 0'0 (0'0,70'389] local-lis/les=41/42 n=6 ec=73/41 lis/c=41/41 les/c/f=42/42/0 sis=73) [1] r=0 lpr=73 pi=[41,73)/1 crt=70'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:11:57 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 74 pg[9.3( v 70'389 lc 0'0 (0'0,70'389] local-lis/les=41/42 n=6 ec=73/41 lis/c=41/41 les/c/f=42/42/0 sis=73) [1] r=0 lpr=73 pi=[41,73)/1 crt=70'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:11:57 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 74 pg[9.a( v 70'389 lc 0'0 (0'0,70'389] local-lis/les=41/42 n=6 ec=73/41 lis/c=41/41 les/c/f=42/42/0 sis=73) [1] r=0 lpr=73 pi=[41,73)/1 crt=70'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:11:57 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 74 pg[9.e( v 70'389 lc 0'0 (0'0,70'389] local-lis/les=41/42 n=6 ec=73/41 lis/c=41/41 les/c/f=42/42/0 sis=73) [1] r=0 lpr=73 pi=[41,73)/1 crt=70'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:11:57 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 74 pg[9.b( v 70'389 lc 0'0 (0'0,70'389] local-lis/les=41/42 n=6 ec=73/41 lis/c=41/41 les/c/f=42/42/0 sis=73) [1] r=0 lpr=73 pi=[41,73)/1 crt=70'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:11:57 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 74 pg[9.f( v 70'389 lc 0'0 (0'0,70'389] local-lis/les=41/42 n=6 ec=73/41 lis/c=41/41 les/c/f=42/42/0 sis=73) [1] r=0 lpr=73 pi=[41,73)/1 crt=70'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:11:57 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 74 pg[9.2( v 70'389 lc 0'0 (0'0,70'389] local-lis/les=41/42 n=6 ec=73/41 lis/c=41/41 les/c/f=42/42/0 sis=73) [1] r=0 lpr=73 pi=[41,73)/1 crt=70'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:11:57 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 74 pg[9.16( v 70'389 lc 0'0 (0'0,70'389] local-lis/les=41/42 n=5 ec=73/41 lis/c=41/41 les/c/f=42/42/0 sis=73) [1] r=0 lpr=73 pi=[41,73)/1 crt=70'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:11:57 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 74 pg[9.d( v 70'389 lc 0'0 (0'0,70'389] local-lis/les=41/42 n=6 ec=73/41 lis/c=41/41 les/c/f=42/42/0 sis=73) [1] r=0 lpr=73 pi=[41,73)/1 crt=70'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:11:57 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 74 pg[9.c( v 70'389 lc 0'0 (0'0,70'389] local-lis/les=41/42 n=6 ec=73/41 lis/c=41/41 les/c/f=42/42/0 sis=73) [1] r=0 lpr=73 pi=[41,73)/1 crt=70'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:11:57 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 74 pg[9.14( v 70'389 lc 0'0 (0'0,70'389] local-lis/les=41/42 n=5 ec=73/41 lis/c=41/41 les/c/f=42/42/0 sis=73) [1] r=0 lpr=73 pi=[41,73)/1 crt=70'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:11:57 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 74 pg[9.11( v 70'389 lc 0'0 (0'0,70'389] local-lis/les=41/42 n=6 ec=73/41 lis/c=41/41 les/c/f=42/42/0 sis=73) [1] r=0 lpr=73 pi=[41,73)/1 crt=70'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:11:57 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 74 pg[9.6( v 70'389 lc 0'0 (0'0,70'389] local-lis/les=41/42 n=6 ec=73/41 lis/c=41/41 les/c/f=42/42/0 sis=73) [1] r=0 lpr=73 pi=[41,73)/1 crt=70'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:11:57 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 74 pg[9.15( v 70'389 lc 0'0 (0'0,70'389] local-lis/les=41/42 n=5 ec=73/41 lis/c=41/41 les/c/f=42/42/0 sis=73) [1] r=0 lpr=73 pi=[41,73)/1 crt=70'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:11:57 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 74 pg[9.12( v 70'389 lc 0'0 (0'0,70'389] local-lis/les=41/42 n=5 ec=73/41 lis/c=41/41 les/c/f=42/42/0 sis=73) [1] r=0 lpr=73 pi=[41,73)/1 crt=70'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:11:57 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 74 pg[9.4( v 70'389 lc 0'0 (0'0,70'389] local-lis/les=41/42 n=6 ec=73/41 lis/c=41/41 les/c/f=42/42/0 sis=73) [1] r=0 lpr=73 pi=[41,73)/1 crt=70'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:11:57 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 74 pg[9.13( v 70'389 lc 0'0 (0'0,70'389] local-lis/les=41/42 n=5 ec=73/41 lis/c=41/41 les/c/f=42/42/0 sis=73) [1] r=0 lpr=73 pi=[41,73)/1 crt=70'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:11:57 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 74 pg[9.10( v 70'389 lc 0'0 (0'0,70'389] local-lis/les=41/42 n=6 ec=73/41 lis/c=41/41 les/c/f=42/42/0 sis=73) [1] r=0 lpr=73 pi=[41,73)/1 crt=70'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:11:57 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 74 pg[9.18( v 70'389 lc 0'0 (0'0,70'389] local-lis/les=41/42 n=5 ec=73/41 lis/c=41/41 les/c/f=42/42/0 sis=73) [1] r=0 lpr=73 pi=[41,73)/1 crt=70'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:11:57 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 74 pg[9.19( v 70'389 lc 0'0 (0'0,70'389] local-lis/les=41/42 n=5 ec=73/41 lis/c=41/41 les/c/f=42/42/0 sis=73) [1] r=0 lpr=73 pi=[41,73)/1 crt=70'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:11:57 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 74 pg[9.1a( v 70'389 lc 0'0 (0'0,70'389] local-lis/les=41/42 n=5 ec=73/41 lis/c=41/41 les/c/f=42/42/0 sis=73) [1] r=0 lpr=73 pi=[41,73)/1 crt=70'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:11:57 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 74 pg[9.1b( v 70'389 lc 0'0 (0'0,70'389] local-lis/les=41/42 n=5 ec=73/41 lis/c=41/41 les/c/f=42/42/0 sis=73) [1] r=0 lpr=73 pi=[41,73)/1 crt=70'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:11:57 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 74 pg[9.1c( v 70'389 lc 0'0 (0'0,70'389] local-lis/les=41/42 n=5 ec=73/41 lis/c=41/41 les/c/f=42/42/0 sis=73) [1] r=0 lpr=73 pi=[41,73)/1 crt=70'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:11:57 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 74 pg[9.1d( v 70'389 lc 0'0 (0'0,70'389] local-lis/les=41/42 n=5 ec=73/41 lis/c=41/41 les/c/f=42/42/0 sis=73) [1] r=0 lpr=73 pi=[41,73)/1 crt=70'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:11:57 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 74 pg[9.1e( v 70'389 lc 0'0 (0'0,70'389] local-lis/les=41/42 n=5 ec=73/41 lis/c=41/41 les/c/f=42/42/0 sis=73) [1] r=0 lpr=73 pi=[41,73)/1 crt=70'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:11:57 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 74 pg[9.1f( v 70'389 lc 0'0 (0'0,70'389] local-lis/les=41/42 n=5 ec=73/41 lis/c=41/41 les/c/f=42/42/0 sis=73) [1] r=0 lpr=73 pi=[41,73)/1 crt=70'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:11:57 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 74 pg[8.4( v 40'4 lc 0'0 (0'0,40'4] local-lis/les=39/40 n=1 ec=73/39 lis/c=39/39 les/c/f=40/40/0 sis=73) [1] r=0 lpr=73 pi=[39,73)/1 crt=40'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:11:57 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 74 pg[8.8( v 40'4 lc 0'0 (0'0,40'4] local-lis/les=39/40 n=0 ec=73/39 lis/c=39/39 les/c/f=40/40/0 sis=73) [1] r=0 lpr=73 pi=[39,73)/1 crt=40'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:11:57 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 74 pg[8.a( v 40'4 lc 0'0 (0'0,40'4] local-lis/les=39/40 n=0 ec=73/39 lis/c=39/39 les/c/f=40/40/0 sis=73) [1] r=0 lpr=73 pi=[39,73)/1 crt=40'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:11:57 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 74 pg[8.5( v 40'4 lc 0'0 (0'0,40'4] local-lis/les=39/40 n=0 ec=73/39 lis/c=39/39 les/c/f=40/40/0 sis=73) [1] r=0 lpr=73 pi=[39,73)/1 crt=40'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:11:57 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 74 pg[8.11( v 40'4 lc 0'0 (0'0,40'4] local-lis/les=39/40 n=0 ec=73/39 lis/c=39/39 les/c/f=40/40/0 sis=73) [1] r=0 lpr=73 pi=[39,73)/1 crt=40'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:11:57 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 74 pg[8.13( v 40'4 lc 0'0 (0'0,40'4] local-lis/les=39/40 n=0 ec=73/39 lis/c=39/39 les/c/f=40/40/0 sis=73) [1] r=0 lpr=73 pi=[39,73)/1 crt=40'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:11:57 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 74 pg[8.b( v 40'4 lc 0'0 (0'0,40'4] local-lis/les=39/40 n=0 ec=73/39 lis/c=39/39 les/c/f=40/40/0 sis=73) [1] r=0 lpr=73 pi=[39,73)/1 crt=40'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:11:57 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 74 pg[8.3( v 40'4 lc 0'0 (0'0,40'4] local-lis/les=39/40 n=1 ec=73/39 lis/c=39/39 les/c/f=40/40/0 sis=73) [1] r=0 lpr=73 pi=[39,73)/1 crt=40'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:11:57 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 74 pg[8.10( v 40'4 lc 0'0 (0'0,40'4] local-lis/les=39/40 n=0 ec=73/39 lis/c=39/39 les/c/f=40/40/0 sis=73) [1] r=0 lpr=73 pi=[39,73)/1 crt=40'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:11:57 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 74 pg[8.12( v 40'4 lc 0'0 (0'0,40'4] local-lis/les=39/40 n=0 ec=73/39 lis/c=39/39 les/c/f=40/40/0 sis=73) [1] r=0 lpr=73 pi=[39,73)/1 crt=40'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:11:57 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 74 pg[8.7( v 40'4 lc 0'0 (0'0,40'4] local-lis/les=39/40 n=0 ec=73/39 lis/c=39/39 les/c/f=40/40/0 sis=73) [1] r=0 lpr=73 pi=[39,73)/1 crt=40'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:11:57 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 74 pg[8.2( v 40'4 lc 0'0 (0'0,40'4] local-lis/les=39/40 n=1 ec=73/39 lis/c=39/39 les/c/f=40/40/0 sis=73) [1] r=0 lpr=73 pi=[39,73)/1 crt=40'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:11:57 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 74 pg[8.6( v 40'4 lc 0'0 (0'0,40'4] local-lis/les=39/40 n=0 ec=73/39 lis/c=39/39 les/c/f=40/40/0 sis=73) [1] r=0 lpr=73 pi=[39,73)/1 crt=40'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:11:57 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 74 pg[8.f( v 40'4 lc 0'0 (0'0,40'4] local-lis/les=39/40 n=0 ec=73/39 lis/c=39/39 les/c/f=40/40/0 sis=73) [1] r=0 lpr=73 pi=[39,73)/1 crt=40'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:11:57 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 74 pg[8.e( v 40'4 lc 0'0 (0'0,40'4] local-lis/les=39/40 n=0 ec=73/39 lis/c=39/39 les/c/f=40/40/0 sis=73) [1] r=0 lpr=73 pi=[39,73)/1 crt=40'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:11:57 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 74 pg[8.15( v 40'4 lc 0'0 (0'0,40'4] local-lis/les=39/40 n=0 ec=73/39 lis/c=39/39 les/c/f=40/40/0 sis=73) [1] r=0 lpr=73 pi=[39,73)/1 crt=40'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:11:57 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 74 pg[8.14( v 40'4 lc 0'0 (0'0,40'4] local-lis/les=39/40 n=0 ec=73/39 lis/c=39/39 les/c/f=40/40/0 sis=73) [1] r=0 lpr=73 pi=[39,73)/1 crt=40'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:11:57 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 74 pg[8.d( v 40'4 lc 0'0 (0'0,40'4] local-lis/les=39/40 n=0 ec=73/39 lis/c=39/39 les/c/f=40/40/0 sis=73) [1] r=0 lpr=73 pi=[39,73)/1 crt=40'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:11:57 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 74 pg[8.c( v 40'4 lc 0'0 (0'0,40'4] local-lis/les=39/40 n=0 ec=73/39 lis/c=39/39 les/c/f=40/40/0 sis=73) [1] r=0 lpr=73 pi=[39,73)/1 crt=40'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:11:57 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 74 pg[8.1( v 40'4 (0'0,40'4] local-lis/les=39/40 n=1 ec=73/39 lis/c=39/39 les/c/f=40/40/0 sis=73) [1] r=0 lpr=73 pi=[39,73)/1 crt=40'4 lcod 0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:11:57 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 74 pg[8.16( v 40'4 lc 0'0 (0'0,40'4] local-lis/les=39/40 n=0 ec=73/39 lis/c=39/39 les/c/f=40/40/0 sis=73) [1] r=0 lpr=73 pi=[39,73)/1 crt=40'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:11:57 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 74 pg[8.17( v 40'4 lc 0'0 (0'0,40'4] local-lis/les=39/40 n=0 ec=73/39 lis/c=39/39 les/c/f=40/40/0 sis=73) [1] r=0 lpr=73 pi=[39,73)/1 crt=40'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:11:57 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 74 pg[8.18( v 40'4 lc 0'0 (0'0,40'4] local-lis/les=39/40 n=0 ec=73/39 lis/c=39/39 les/c/f=40/40/0 sis=73) [1] r=0 lpr=73 pi=[39,73)/1 crt=40'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:11:57 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 74 pg[8.19( v 40'4 lc 0'0 (0'0,40'4] local-lis/les=39/40 n=0 ec=73/39 lis/c=39/39 les/c/f=40/40/0 sis=73) [1] r=0 lpr=73 pi=[39,73)/1 crt=40'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:11:57 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 74 pg[8.1a( v 40'4 lc 0'0 (0'0,40'4] local-lis/les=39/40 n=0 ec=73/39 lis/c=39/39 les/c/f=40/40/0 sis=73) [1] r=0 lpr=73 pi=[39,73)/1 crt=40'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:11:57 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 74 pg[8.1b( v 40'4 lc 0'0 (0'0,40'4] local-lis/les=39/40 n=0 ec=73/39 lis/c=39/39 les/c/f=40/40/0 sis=73) [1] r=0 lpr=73 pi=[39,73)/1 crt=40'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:11:57 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 74 pg[8.1c( v 40'4 lc 0'0 (0'0,40'4] local-lis/les=39/40 n=0 ec=73/39 lis/c=39/39 les/c/f=40/40/0 sis=73) [1] r=0 lpr=73 pi=[39,73)/1 crt=40'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:11:57 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 74 pg[8.1d( v 40'4 lc 0'0 (0'0,40'4] local-lis/les=39/40 n=0 ec=73/39 lis/c=39/39 les/c/f=40/40/0 sis=73) [1] r=0 lpr=73 pi=[39,73)/1 crt=40'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:11:57 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 74 pg[8.1e( v 40'4 lc 0'0 (0'0,40'4] local-lis/les=39/40 n=0 ec=73/39 lis/c=39/39 les/c/f=40/40/0 sis=73) [1] r=0 lpr=73 pi=[39,73)/1 crt=40'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:11:57 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 74 pg[8.1f( v 40'4 lc 0'0 (0'0,40'4] local-lis/les=39/40 n=0 ec=73/39 lis/c=39/39 les/c/f=40/40/0 sis=73) [1] r=0 lpr=73 pi=[39,73)/1 crt=40'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:11:57 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 74 pg[8.9( v 40'4 lc 0'0 (0'0,40'4] local-lis/les=39/40 n=0 ec=73/39 lis/c=39/39 les/c/f=40/40/0 sis=73) [1] r=0 lpr=73 pi=[39,73)/1 crt=40'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:11:57 np0005464214 ceph-osd[89484]: log_channel(cluster) log [DBG] : 7.19 scrub starts
Oct  1 09:11:57 np0005464214 ceph-osd[89484]: log_channel(cluster) log [DBG] : 7.19 scrub ok
Oct  1 09:11:57 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v160: 243 pgs: 62 unknown, 181 active+clean; 456 KiB data, 86 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:11:57 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"} v 0) v1
Oct  1 09:11:57 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct  1 09:11:57 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"} v 0) v1
Oct  1 09:11:57 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct  1 09:11:57 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e74 do_prune osdmap full prune enabled
Oct  1 09:11:57 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]': finished
Oct  1 09:11:57 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct  1 09:11:57 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct  1 09:11:57 np0005464214 ceph-mgr[75103]: [progress INFO root] Writing back 16 completed events
Oct  1 09:11:57 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Oct  1 09:11:57 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]': finished
Oct  1 09:11:57 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]': finished
Oct  1 09:11:57 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e75 e75: 3 total, 3 up, 3 in
Oct  1 09:11:57 np0005464214 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e75: 3 total, 3 up, 3 in
Oct  1 09:11:57 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 75 pg[11.0( v 70'2 (0'0,70'2] local-lis/les=45/46 n=2 ec=45/45 lis/c=45/45 les/c/f=46/46/0 sis=75 pruub=9.853425026s) [1] r=0 lpr=75 pi=[45,75)/1 crt=70'2 lcod 70'1 mlcod 70'1 active pruub 126.967704773s@ mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:11:57 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 75 pg[11.0( v 70'2 lc 0'0 (0'0,70'2] local-lis/les=45/46 n=0 ec=45/45 lis/c=45/45 les/c/f=46/46/0 sis=75 pruub=9.853425026s) [1] r=0 lpr=75 pi=[45,75)/1 crt=70'2 lcod 70'1 mlcod 0'0 unknown pruub 126.967704773s@ mbc={}] state<Start>: transitioning to Primary
Oct  1 09:11:57 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:11:57 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 75 pg[9.14( v 70'389 (0'0,70'389] local-lis/les=73/75 n=5 ec=73/41 lis/c=41/41 les/c/f=42/42/0 sis=73) [1] r=0 lpr=73 pi=[41,73)/1 crt=70'389 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:11:57 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 75 pg[8.14( v 40'4 (0'0,40'4] local-lis/les=73/75 n=0 ec=73/39 lis/c=39/39 les/c/f=40/40/0 sis=73) [1] r=0 lpr=73 pi=[39,73)/1 crt=40'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:11:57 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 75 pg[8.15( v 40'4 (0'0,40'4] local-lis/les=73/75 n=0 ec=73/39 lis/c=39/39 les/c/f=40/40/0 sis=73) [1] r=0 lpr=73 pi=[39,73)/1 crt=40'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:11:57 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 75 pg[9.17( v 70'389 (0'0,70'389] local-lis/les=73/75 n=5 ec=73/41 lis/c=41/41 les/c/f=42/42/0 sis=73) [1] r=0 lpr=73 pi=[41,73)/1 crt=70'389 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:11:57 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 75 pg[9.15( v 70'389 (0'0,70'389] local-lis/les=73/75 n=5 ec=73/41 lis/c=41/41 les/c/f=42/42/0 sis=73) [1] r=0 lpr=73 pi=[41,73)/1 crt=70'389 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:11:57 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 75 pg[8.16( v 40'4 (0'0,40'4] local-lis/les=73/75 n=0 ec=73/39 lis/c=39/39 les/c/f=40/40/0 sis=73) [1] r=0 lpr=73 pi=[39,73)/1 crt=40'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:11:57 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 75 pg[9.16( v 70'389 (0'0,70'389] local-lis/les=73/75 n=5 ec=73/41 lis/c=41/41 les/c/f=42/42/0 sis=73) [1] r=0 lpr=73 pi=[41,73)/1 crt=70'389 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:11:57 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 75 pg[9.11( v 70'389 (0'0,70'389] local-lis/les=73/75 n=6 ec=73/41 lis/c=41/41 les/c/f=42/42/0 sis=73) [1] r=0 lpr=73 pi=[41,73)/1 crt=70'389 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:11:57 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 75 pg[8.10( v 40'4 (0'0,40'4] local-lis/les=73/75 n=0 ec=73/39 lis/c=39/39 les/c/f=40/40/0 sis=73) [1] r=0 lpr=73 pi=[39,73)/1 crt=40'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:11:57 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 75 pg[9.0( v 70'389 (0'0,70'389] local-lis/les=73/75 n=5 ec=41/41 lis/c=41/41 les/c/f=42/42/0 sis=73) [1] r=0 lpr=73 pi=[41,73)/1 crt=70'389 lcod 70'388 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:11:57 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 75 pg[8.1( v 40'4 (0'0,40'4] local-lis/les=73/75 n=1 ec=73/39 lis/c=39/39 les/c/f=40/40/0 sis=73) [1] r=0 lpr=73 pi=[39,73)/1 crt=40'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:11:57 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 75 pg[8.17( v 40'4 (0'0,40'4] local-lis/les=73/75 n=0 ec=73/39 lis/c=39/39 les/c/f=40/40/0 sis=73) [1] r=0 lpr=73 pi=[39,73)/1 crt=40'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:11:57 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 75 pg[8.3( v 40'4 (0'0,40'4] local-lis/les=73/75 n=1 ec=73/39 lis/c=39/39 les/c/f=40/40/0 sis=73) [1] r=0 lpr=73 pi=[39,73)/1 crt=40'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:11:57 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 75 pg[9.3( v 70'389 (0'0,70'389] local-lis/les=73/75 n=6 ec=73/41 lis/c=41/41 les/c/f=42/42/0 sis=73) [1] r=0 lpr=73 pi=[41,73)/1 crt=70'389 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:11:57 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 75 pg[8.2( v 40'4 (0'0,40'4] local-lis/les=73/75 n=1 ec=73/39 lis/c=39/39 les/c/f=40/40/0 sis=73) [1] r=0 lpr=73 pi=[39,73)/1 crt=40'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:11:57 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 75 pg[8.d( v 40'4 (0'0,40'4] local-lis/les=73/75 n=0 ec=73/39 lis/c=39/39 les/c/f=40/40/0 sis=73) [1] r=0 lpr=73 pi=[39,73)/1 crt=40'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:11:57 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 75 pg[8.e( v 40'4 (0'0,40'4] local-lis/les=73/75 n=0 ec=73/39 lis/c=39/39 les/c/f=40/40/0 sis=73) [1] r=0 lpr=73 pi=[39,73)/1 crt=40'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:11:57 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 75 pg[9.2( v 70'389 (0'0,70'389] local-lis/les=73/75 n=6 ec=73/41 lis/c=41/41 les/c/f=42/42/0 sis=73) [1] r=0 lpr=73 pi=[41,73)/1 crt=70'389 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:11:57 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 75 pg[9.d( v 70'389 (0'0,70'389] local-lis/les=73/75 n=6 ec=73/41 lis/c=41/41 les/c/f=42/42/0 sis=73) [1] r=0 lpr=73 pi=[41,73)/1 crt=70'389 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:11:57 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 75 pg[9.c( v 70'389 (0'0,70'389] local-lis/les=73/75 n=6 ec=73/41 lis/c=41/41 les/c/f=42/42/0 sis=73) [1] r=0 lpr=73 pi=[41,73)/1 crt=70'389 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:11:57 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 75 pg[8.8( v 40'4 (0'0,40'4] local-lis/les=73/75 n=0 ec=73/39 lis/c=39/39 les/c/f=40/40/0 sis=73) [1] r=0 lpr=73 pi=[39,73)/1 crt=40'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:11:57 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 75 pg[9.f( v 70'389 (0'0,70'389] local-lis/les=73/75 n=6 ec=73/41 lis/c=41/41 les/c/f=42/42/0 sis=73) [1] r=0 lpr=73 pi=[41,73)/1 crt=70'389 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:11:57 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 75 pg[9.9( v 70'389 (0'0,70'389] local-lis/les=73/75 n=6 ec=73/41 lis/c=41/41 les/c/f=42/42/0 sis=73) [1] r=0 lpr=73 pi=[41,73)/1 crt=70'389 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:11:57 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 75 pg[9.e( v 70'389 (0'0,70'389] local-lis/les=73/75 n=6 ec=73/41 lis/c=41/41 les/c/f=42/42/0 sis=73) [1] r=0 lpr=73 pi=[41,73)/1 crt=70'389 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:11:57 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 75 pg[9.a( v 70'389 (0'0,70'389] local-lis/les=73/75 n=6 ec=73/41 lis/c=41/41 les/c/f=42/42/0 sis=73) [1] r=0 lpr=73 pi=[41,73)/1 crt=70'389 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:11:57 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 75 pg[8.f( v 40'4 (0'0,40'4] local-lis/les=73/75 n=0 ec=73/39 lis/c=39/39 les/c/f=40/40/0 sis=73) [1] r=0 lpr=73 pi=[39,73)/1 crt=40'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:11:57 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 75 pg[8.b( v 40'4 (0'0,40'4] local-lis/les=73/75 n=0 ec=73/39 lis/c=39/39 les/c/f=40/40/0 sis=73) [1] r=0 lpr=73 pi=[39,73)/1 crt=40'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:11:57 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 75 pg[8.9( v 40'4 (0'0,40'4] local-lis/les=73/75 n=0 ec=73/39 lis/c=39/39 les/c/f=40/40/0 sis=73) [1] r=0 lpr=73 pi=[39,73)/1 crt=40'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:11:57 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 75 pg[8.c( v 40'4 (0'0,40'4] local-lis/les=73/75 n=0 ec=73/39 lis/c=39/39 les/c/f=40/40/0 sis=73) [1] r=0 lpr=73 pi=[39,73)/1 crt=40'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:11:57 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 75 pg[8.0( v 40'4 (0'0,40'4] local-lis/les=73/75 n=0 ec=39/39 lis/c=39/39 les/c/f=40/40/0 sis=73) [1] r=0 lpr=73 pi=[39,73)/1 crt=40'4 lcod 40'3 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:11:57 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 75 pg[8.7( v 40'4 (0'0,40'4] local-lis/les=73/75 n=0 ec=73/39 lis/c=39/39 les/c/f=40/40/0 sis=73) [1] r=0 lpr=73 pi=[39,73)/1 crt=40'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:11:57 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 75 pg[9.1( v 70'389 (0'0,70'389] local-lis/les=73/75 n=6 ec=73/41 lis/c=41/41 les/c/f=42/42/0 sis=73) [1] r=0 lpr=73 pi=[41,73)/1 crt=70'389 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:11:57 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 75 pg[9.6( v 70'389 (0'0,70'389] local-lis/les=73/75 n=6 ec=73/41 lis/c=41/41 les/c/f=42/42/0 sis=73) [1] r=0 lpr=73 pi=[41,73)/1 crt=70'389 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:11:57 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 75 pg[8.6( v 40'4 (0'0,40'4] local-lis/les=73/75 n=0 ec=73/39 lis/c=39/39 les/c/f=40/40/0 sis=73) [1] r=0 lpr=73 pi=[39,73)/1 crt=40'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:11:57 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 75 pg[9.8( v 70'389 (0'0,70'389] local-lis/les=73/75 n=6 ec=73/41 lis/c=41/41 les/c/f=42/42/0 sis=73) [1] r=0 lpr=73 pi=[41,73)/1 crt=70'389 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:11:57 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 75 pg[9.7( v 70'389 (0'0,70'389] local-lis/les=73/75 n=6 ec=73/41 lis/c=41/41 les/c/f=42/42/0 sis=73) [1] r=0 lpr=73 pi=[41,73)/1 crt=70'389 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:11:57 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 75 pg[8.5( v 40'4 (0'0,40'4] local-lis/les=73/75 n=0 ec=73/39 lis/c=39/39 les/c/f=40/40/0 sis=73) [1] r=0 lpr=73 pi=[39,73)/1 crt=40'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:11:57 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 75 pg[9.5( v 70'389 (0'0,70'389] local-lis/les=73/75 n=6 ec=73/41 lis/c=41/41 les/c/f=42/42/0 sis=73) [1] r=0 lpr=73 pi=[41,73)/1 crt=70'389 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:11:57 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 75 pg[9.4( v 70'389 (0'0,70'389] local-lis/les=73/75 n=6 ec=73/41 lis/c=41/41 les/c/f=42/42/0 sis=73) [1] r=0 lpr=73 pi=[41,73)/1 crt=70'389 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:11:57 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 75 pg[8.1b( v 40'4 (0'0,40'4] local-lis/les=73/75 n=0 ec=73/39 lis/c=39/39 les/c/f=40/40/0 sis=73) [1] r=0 lpr=73 pi=[39,73)/1 crt=40'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:11:57 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 75 pg[9.1a( v 70'389 (0'0,70'389] local-lis/les=73/75 n=5 ec=73/41 lis/c=41/41 les/c/f=42/42/0 sis=73) [1] r=0 lpr=73 pi=[41,73)/1 crt=70'389 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:11:57 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 75 pg[8.19( v 40'4 (0'0,40'4] local-lis/les=73/75 n=0 ec=73/39 lis/c=39/39 les/c/f=40/40/0 sis=73) [1] r=0 lpr=73 pi=[39,73)/1 crt=40'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:11:57 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 75 pg[9.18( v 70'389 (0'0,70'389] local-lis/les=73/75 n=5 ec=73/41 lis/c=41/41 les/c/f=42/42/0 sis=73) [1] r=0 lpr=73 pi=[41,73)/1 crt=70'389 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:11:57 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 75 pg[8.18( v 40'4 (0'0,40'4] local-lis/les=73/75 n=0 ec=73/39 lis/c=39/39 les/c/f=40/40/0 sis=73) [1] r=0 lpr=73 pi=[39,73)/1 crt=40'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:11:57 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 75 pg[9.19( v 70'389 (0'0,70'389] local-lis/les=73/75 n=5 ec=73/41 lis/c=41/41 les/c/f=42/42/0 sis=73) [1] r=0 lpr=73 pi=[41,73)/1 crt=70'389 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:11:57 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 75 pg[8.4( v 40'4 (0'0,40'4] local-lis/les=73/75 n=1 ec=73/39 lis/c=39/39 les/c/f=40/40/0 sis=73) [1] r=0 lpr=73 pi=[39,73)/1 crt=40'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:11:57 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 75 pg[9.1e( v 70'389 (0'0,70'389] local-lis/les=73/75 n=5 ec=73/41 lis/c=41/41 les/c/f=42/42/0 sis=73) [1] r=0 lpr=73 pi=[41,73)/1 crt=70'389 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:11:57 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 75 pg[9.1c( v 70'389 (0'0,70'389] local-lis/les=73/75 n=5 ec=73/41 lis/c=41/41 les/c/f=42/42/0 sis=73) [1] r=0 lpr=73 pi=[41,73)/1 crt=70'389 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:11:57 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 75 pg[9.1f( v 70'389 (0'0,70'389] local-lis/les=73/75 n=5 ec=73/41 lis/c=41/41 les/c/f=42/42/0 sis=73) [1] r=0 lpr=73 pi=[41,73)/1 crt=70'389 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:11:57 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 75 pg[8.1f( v 40'4 (0'0,40'4] local-lis/les=73/75 n=0 ec=73/39 lis/c=39/39 les/c/f=40/40/0 sis=73) [1] r=0 lpr=73 pi=[39,73)/1 crt=40'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:11:57 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 75 pg[8.1d( v 40'4 (0'0,40'4] local-lis/les=73/75 n=0 ec=73/39 lis/c=39/39 les/c/f=40/40/0 sis=73) [1] r=0 lpr=73 pi=[39,73)/1 crt=40'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:11:57 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 75 pg[8.1e( v 40'4 (0'0,40'4] local-lis/les=73/75 n=0 ec=73/39 lis/c=39/39 les/c/f=40/40/0 sis=73) [1] r=0 lpr=73 pi=[39,73)/1 crt=40'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:11:57 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 75 pg[8.12( v 40'4 (0'0,40'4] local-lis/les=73/75 n=0 ec=73/39 lis/c=39/39 les/c/f=40/40/0 sis=73) [1] r=0 lpr=73 pi=[39,73)/1 crt=40'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:11:57 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 75 pg[9.1d( v 70'389 (0'0,70'389] local-lis/les=73/75 n=5 ec=73/41 lis/c=41/41 les/c/f=42/42/0 sis=73) [1] r=0 lpr=73 pi=[41,73)/1 crt=70'389 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:11:57 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 75 pg[9.10( v 70'389 (0'0,70'389] local-lis/les=73/75 n=6 ec=73/41 lis/c=41/41 les/c/f=42/42/0 sis=73) [1] r=0 lpr=73 pi=[41,73)/1 crt=70'389 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:11:57 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 75 pg[9.13( v 70'389 (0'0,70'389] local-lis/les=73/75 n=5 ec=73/41 lis/c=41/41 les/c/f=42/42/0 sis=73) [1] r=0 lpr=73 pi=[41,73)/1 crt=70'389 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:11:57 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 75 pg[8.1c( v 40'4 (0'0,40'4] local-lis/les=73/75 n=0 ec=73/39 lis/c=39/39 les/c/f=40/40/0 sis=73) [1] r=0 lpr=73 pi=[39,73)/1 crt=40'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:11:57 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 75 pg[8.11( v 40'4 (0'0,40'4] local-lis/les=73/75 n=0 ec=73/39 lis/c=39/39 les/c/f=40/40/0 sis=73) [1] r=0 lpr=73 pi=[39,73)/1 crt=40'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:11:57 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 75 pg[8.a( v 40'4 (0'0,40'4] local-lis/les=73/75 n=0 ec=73/39 lis/c=39/39 les/c/f=40/40/0 sis=73) [1] r=0 lpr=73 pi=[39,73)/1 crt=40'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:11:57 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 75 pg[9.b( v 70'389 (0'0,70'389] local-lis/les=73/75 n=6 ec=73/41 lis/c=41/41 les/c/f=42/42/0 sis=73) [1] r=0 lpr=73 pi=[41,73)/1 crt=70'389 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:11:57 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 75 pg[8.1a( v 40'4 (0'0,40'4] local-lis/les=73/75 n=0 ec=73/39 lis/c=39/39 les/c/f=40/40/0 sis=73) [1] r=0 lpr=73 pi=[39,73)/1 crt=40'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:11:57 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 75 pg[9.1b( v 70'389 (0'0,70'389] local-lis/les=73/75 n=5 ec=73/41 lis/c=41/41 les/c/f=42/42/0 sis=73) [1] r=0 lpr=73 pi=[41,73)/1 crt=70'389 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:11:57 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 75 pg[9.12( v 70'389 (0'0,70'389] local-lis/les=73/75 n=5 ec=73/41 lis/c=41/41 les/c/f=42/42/0 sis=73) [1] r=0 lpr=73 pi=[41,73)/1 crt=70'389 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:11:57 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 75 pg[8.13( v 40'4 (0'0,40'4] local-lis/les=73/75 n=0 ec=73/39 lis/c=39/39 les/c/f=40/40/0 sis=73) [1] r=0 lpr=73 pi=[39,73)/1 crt=40'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:11:57 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 75 pg[10.0( v 70'64 (0'0,70'64] local-lis/les=43/44 n=8 ec=43/43 lis/c=43/43 les/c/f=44/44/0 sis=75 pruub=15.750879288s) [2] r=0 lpr=75 pi=[43,75)/1 crt=70'64 lcod 70'63 mlcod 70'63 active pruub 127.946723938s@ mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [2], acting_primary 2 -> 2, up_primary 2 -> 2, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:11:58 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 75 pg[10.0( v 70'64 lc 0'0 (0'0,70'64] local-lis/les=43/44 n=0 ec=43/43 lis/c=43/43 les/c/f=44/44/0 sis=75 pruub=15.750879288s) [2] r=0 lpr=75 pi=[43,75)/1 crt=70'64 lcod 70'63 mlcod 0'0 unknown pruub 127.946723938s@ mbc={}] state<Start>: transitioning to Primary
Oct  1 09:11:58 np0005464214 ceph-osd[90500]: log_channel(cluster) log [DBG] : 5.10 scrub starts
Oct  1 09:11:58 np0005464214 ceph-osd[90500]: log_channel(cluster) log [DBG] : 5.10 scrub ok
Oct  1 09:11:58 np0005464214 ceph-osd[89484]: log_channel(cluster) log [DBG] : 7.1d scrub starts
Oct  1 09:11:58 np0005464214 ceph-osd[89484]: log_channel(cluster) log [DBG] : 7.1d scrub ok
Oct  1 09:11:58 np0005464214 ceph-osd[88455]: log_channel(cluster) log [DBG] : 2.11 deep-scrub starts
Oct  1 09:11:58 np0005464214 ceph-osd[88455]: log_channel(cluster) log [DBG] : 2.11 deep-scrub ok
Oct  1 09:11:58 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]': finished
Oct  1 09:11:58 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]': finished
Oct  1 09:11:58 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:11:58 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e75 do_prune osdmap full prune enabled
Oct  1 09:11:58 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e76 e76: 3 total, 3 up, 3 in
Oct  1 09:11:58 np0005464214 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e76: 3 total, 3 up, 3 in
Oct  1 09:11:58 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 76 pg[11.16( v 70'2 lc 0'0 (0'0,70'2] local-lis/les=45/46 n=0 ec=75/45 lis/c=45/45 les/c/f=46/46/0 sis=75) [1] r=0 lpr=75 pi=[45,75)/1 crt=70'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:11:58 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 76 pg[11.17( v 70'2 lc 0'0 (0'0,70'2] local-lis/les=45/46 n=0 ec=75/45 lis/c=45/45 les/c/f=46/46/0 sis=75) [1] r=0 lpr=75 pi=[45,75)/1 crt=70'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:11:58 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 76 pg[11.15( v 70'2 lc 0'0 (0'0,70'2] local-lis/les=45/46 n=0 ec=75/45 lis/c=45/45 les/c/f=46/46/0 sis=75) [1] r=0 lpr=75 pi=[45,75)/1 crt=70'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:11:58 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 76 pg[11.14( v 70'2 lc 0'0 (0'0,70'2] local-lis/les=45/46 n=0 ec=75/45 lis/c=45/45 les/c/f=46/46/0 sis=75) [1] r=0 lpr=75 pi=[45,75)/1 crt=70'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:11:58 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 76 pg[11.13( v 70'2 lc 0'0 (0'0,70'2] local-lis/les=45/46 n=0 ec=75/45 lis/c=45/45 les/c/f=46/46/0 sis=75) [1] r=0 lpr=75 pi=[45,75)/1 crt=70'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:11:58 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 76 pg[11.2( v 70'2 lc 0'0 (0'0,70'2] local-lis/les=45/46 n=1 ec=75/45 lis/c=45/45 les/c/f=46/46/0 sis=75) [1] r=0 lpr=75 pi=[45,75)/1 crt=70'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:11:58 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 76 pg[11.1( v 70'2 (0'0,70'2] local-lis/les=45/46 n=1 ec=75/45 lis/c=45/45 les/c/f=46/46/0 sis=75) [1] r=0 lpr=75 pi=[45,75)/1 crt=70'2 lcod 0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:11:58 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 76 pg[11.f( v 70'2 lc 0'0 (0'0,70'2] local-lis/les=45/46 n=0 ec=75/45 lis/c=45/45 les/c/f=46/46/0 sis=75) [1] r=0 lpr=75 pi=[45,75)/1 crt=70'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:11:58 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 76 pg[11.e( v 70'2 lc 0'0 (0'0,70'2] local-lis/les=45/46 n=0 ec=75/45 lis/c=45/45 les/c/f=46/46/0 sis=75) [1] r=0 lpr=75 pi=[45,75)/1 crt=70'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:11:58 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 76 pg[11.d( v 70'2 lc 0'0 (0'0,70'2] local-lis/les=45/46 n=0 ec=75/45 lis/c=45/45 les/c/f=46/46/0 sis=75) [1] r=0 lpr=75 pi=[45,75)/1 crt=70'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:11:58 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 76 pg[11.b( v 70'2 lc 0'0 (0'0,70'2] local-lis/les=45/46 n=0 ec=75/45 lis/c=45/45 les/c/f=46/46/0 sis=75) [1] r=0 lpr=75 pi=[45,75)/1 crt=70'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:11:58 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 76 pg[11.c( v 70'2 lc 0'0 (0'0,70'2] local-lis/les=45/46 n=0 ec=75/45 lis/c=45/45 les/c/f=46/46/0 sis=75) [1] r=0 lpr=75 pi=[45,75)/1 crt=70'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:11:58 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 76 pg[11.8( v 70'2 lc 0'0 (0'0,70'2] local-lis/les=45/46 n=0 ec=75/45 lis/c=45/45 les/c/f=46/46/0 sis=75) [1] r=0 lpr=75 pi=[45,75)/1 crt=70'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:11:58 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 76 pg[11.a( v 70'2 lc 0'0 (0'0,70'2] local-lis/les=45/46 n=0 ec=75/45 lis/c=45/45 les/c/f=46/46/0 sis=75) [1] r=0 lpr=75 pi=[45,75)/1 crt=70'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:11:58 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 76 pg[11.3( v 70'2 lc 0'0 (0'0,70'2] local-lis/les=45/46 n=0 ec=75/45 lis/c=45/45 les/c/f=46/46/0 sis=75) [1] r=0 lpr=75 pi=[45,75)/1 crt=70'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:11:58 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 76 pg[11.5( v 70'2 lc 0'0 (0'0,70'2] local-lis/les=45/46 n=0 ec=75/45 lis/c=45/45 les/c/f=46/46/0 sis=75) [1] r=0 lpr=75 pi=[45,75)/1 crt=70'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:11:58 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 76 pg[11.4( v 70'2 lc 0'0 (0'0,70'2] local-lis/les=45/46 n=0 ec=75/45 lis/c=45/45 les/c/f=46/46/0 sis=75) [1] r=0 lpr=75 pi=[45,75)/1 crt=70'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:11:58 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 76 pg[11.6( v 70'2 lc 0'0 (0'0,70'2] local-lis/les=45/46 n=0 ec=75/45 lis/c=45/45 les/c/f=46/46/0 sis=75) [1] r=0 lpr=75 pi=[45,75)/1 crt=70'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:11:58 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 76 pg[11.7( v 70'2 lc 0'0 (0'0,70'2] local-lis/les=45/46 n=0 ec=75/45 lis/c=45/45 les/c/f=46/46/0 sis=75) [1] r=0 lpr=75 pi=[45,75)/1 crt=70'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:11:58 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 76 pg[11.18( v 70'2 lc 0'0 (0'0,70'2] local-lis/les=45/46 n=0 ec=75/45 lis/c=45/45 les/c/f=46/46/0 sis=75) [1] r=0 lpr=75 pi=[45,75)/1 crt=70'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:11:58 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 76 pg[11.1a( v 70'2 lc 0'0 (0'0,70'2] local-lis/les=45/46 n=0 ec=75/45 lis/c=45/45 les/c/f=46/46/0 sis=75) [1] r=0 lpr=75 pi=[45,75)/1 crt=70'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:11:58 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 76 pg[11.1b( v 70'2 lc 0'0 (0'0,70'2] local-lis/les=45/46 n=0 ec=75/45 lis/c=45/45 les/c/f=46/46/0 sis=75) [1] r=0 lpr=75 pi=[45,75)/1 crt=70'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:11:58 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 76 pg[11.1c( v 70'2 lc 0'0 (0'0,70'2] local-lis/les=45/46 n=0 ec=75/45 lis/c=45/45 les/c/f=46/46/0 sis=75) [1] r=0 lpr=75 pi=[45,75)/1 crt=70'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:11:58 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 76 pg[11.1d( v 70'2 lc 0'0 (0'0,70'2] local-lis/les=45/46 n=0 ec=75/45 lis/c=45/45 les/c/f=46/46/0 sis=75) [1] r=0 lpr=75 pi=[45,75)/1 crt=70'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:11:58 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 76 pg[11.1e( v 70'2 lc 0'0 (0'0,70'2] local-lis/les=45/46 n=0 ec=75/45 lis/c=45/45 les/c/f=46/46/0 sis=75) [1] r=0 lpr=75 pi=[45,75)/1 crt=70'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:11:58 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 76 pg[11.1f( v 70'2 lc 0'0 (0'0,70'2] local-lis/les=45/46 n=0 ec=75/45 lis/c=45/45 les/c/f=46/46/0 sis=75) [1] r=0 lpr=75 pi=[45,75)/1 crt=70'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:11:58 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 76 pg[11.11( v 70'2 lc 0'0 (0'0,70'2] local-lis/les=45/46 n=0 ec=75/45 lis/c=45/45 les/c/f=46/46/0 sis=75) [1] r=0 lpr=75 pi=[45,75)/1 crt=70'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:11:58 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 76 pg[11.12( v 70'2 lc 0'0 (0'0,70'2] local-lis/les=45/46 n=0 ec=75/45 lis/c=45/45 les/c/f=46/46/0 sis=75) [1] r=0 lpr=75 pi=[45,75)/1 crt=70'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:11:58 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 76 pg[11.9( v 70'2 lc 0'0 (0'0,70'2] local-lis/les=45/46 n=0 ec=75/45 lis/c=45/45 les/c/f=46/46/0 sis=75) [1] r=0 lpr=75 pi=[45,75)/1 crt=70'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:11:58 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 76 pg[10.1e( v 70'64 lc 0'0 (0'0,70'64] local-lis/les=43/44 n=0 ec=75/43 lis/c=43/43 les/c/f=44/44/0 sis=75) [2] r=0 lpr=75 pi=[43,75)/1 crt=70'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:11:58 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 76 pg[10.19( v 70'64 lc 0'0 (0'0,70'64] local-lis/les=43/44 n=0 ec=75/43 lis/c=43/43 les/c/f=44/44/0 sis=75) [2] r=0 lpr=75 pi=[43,75)/1 crt=70'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:11:58 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 76 pg[10.1b( v 70'64 lc 0'0 (0'0,70'64] local-lis/les=43/44 n=0 ec=75/43 lis/c=43/43 les/c/f=44/44/0 sis=75) [2] r=0 lpr=75 pi=[43,75)/1 crt=70'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:11:58 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 76 pg[10.b( v 70'64 lc 0'0 (0'0,70'64] local-lis/les=43/44 n=0 ec=75/43 lis/c=43/43 les/c/f=44/44/0 sis=75) [2] r=0 lpr=75 pi=[43,75)/1 crt=70'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:11:58 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 76 pg[11.10( v 70'2 lc 0'0 (0'0,70'2] local-lis/les=45/46 n=0 ec=75/45 lis/c=45/45 les/c/f=46/46/0 sis=75) [1] r=0 lpr=75 pi=[45,75)/1 crt=70'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:11:58 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 76 pg[10.a( v 70'64 lc 0'0 (0'0,70'64] local-lis/les=43/44 n=0 ec=75/43 lis/c=43/43 les/c/f=44/44/0 sis=75) [2] r=0 lpr=75 pi=[43,75)/1 crt=70'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:11:58 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 76 pg[11.19( v 70'2 lc 0'0 (0'0,70'2] local-lis/les=45/46 n=0 ec=75/45 lis/c=45/45 les/c/f=46/46/0 sis=75) [1] r=0 lpr=75 pi=[45,75)/1 crt=70'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:11:58 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 76 pg[10.d( v 70'64 lc 0'0 (0'0,70'64] local-lis/les=43/44 n=0 ec=75/43 lis/c=43/43 les/c/f=44/44/0 sis=75) [2] r=0 lpr=75 pi=[43,75)/1 crt=70'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:11:58 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 76 pg[10.11( v 70'64 lc 0'0 (0'0,70'64] local-lis/les=43/44 n=0 ec=75/43 lis/c=43/43 les/c/f=44/44/0 sis=75) [2] r=0 lpr=75 pi=[43,75)/1 crt=70'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:11:58 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 76 pg[10.13( v 70'64 lc 0'0 (0'0,70'64] local-lis/les=43/44 n=0 ec=75/43 lis/c=43/43 les/c/f=44/44/0 sis=75) [2] r=0 lpr=75 pi=[43,75)/1 crt=70'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:11:58 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 76 pg[10.12( v 70'64 lc 0'0 (0'0,70'64] local-lis/les=43/44 n=0 ec=75/43 lis/c=43/43 les/c/f=44/44/0 sis=75) [2] r=0 lpr=75 pi=[43,75)/1 crt=70'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:11:58 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 76 pg[10.10( v 70'64 lc 0'0 (0'0,70'64] local-lis/les=43/44 n=0 ec=75/43 lis/c=43/43 les/c/f=44/44/0 sis=75) [2] r=0 lpr=75 pi=[43,75)/1 crt=70'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:11:58 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 76 pg[10.1f( v 70'64 lc 0'0 (0'0,70'64] local-lis/les=43/44 n=0 ec=75/43 lis/c=43/43 les/c/f=44/44/0 sis=75) [2] r=0 lpr=75 pi=[43,75)/1 crt=70'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:11:58 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 76 pg[10.1c( v 70'64 lc 0'0 (0'0,70'64] local-lis/les=43/44 n=0 ec=75/43 lis/c=43/43 les/c/f=44/44/0 sis=75) [2] r=0 lpr=75 pi=[43,75)/1 crt=70'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:11:58 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 76 pg[10.1a( v 70'64 lc 0'0 (0'0,70'64] local-lis/les=43/44 n=0 ec=75/43 lis/c=43/43 les/c/f=44/44/0 sis=75) [2] r=0 lpr=75 pi=[43,75)/1 crt=70'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:11:58 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 76 pg[10.18( v 70'64 lc 0'0 (0'0,70'64] local-lis/les=43/44 n=0 ec=75/43 lis/c=43/43 les/c/f=44/44/0 sis=75) [2] r=0 lpr=75 pi=[43,75)/1 crt=70'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:11:58 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 76 pg[10.1d( v 70'64 lc 0'0 (0'0,70'64] local-lis/les=43/44 n=0 ec=75/43 lis/c=43/43 les/c/f=44/44/0 sis=75) [2] r=0 lpr=75 pi=[43,75)/1 crt=70'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:11:58 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 76 pg[10.6( v 70'64 lc 0'0 (0'0,70'64] local-lis/les=43/44 n=1 ec=75/43 lis/c=43/43 les/c/f=44/44/0 sis=75) [2] r=0 lpr=75 pi=[43,75)/1 crt=70'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:11:58 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 76 pg[10.5( v 70'64 lc 0'0 (0'0,70'64] local-lis/les=43/44 n=1 ec=75/43 lis/c=43/43 les/c/f=44/44/0 sis=75) [2] r=0 lpr=75 pi=[43,75)/1 crt=70'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:11:58 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 76 pg[10.4( v 70'64 lc 0'0 (0'0,70'64] local-lis/les=43/44 n=1 ec=75/43 lis/c=43/43 les/c/f=44/44/0 sis=75) [2] r=0 lpr=75 pi=[43,75)/1 crt=70'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:11:58 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 76 pg[10.8( v 70'64 lc 0'0 (0'0,70'64] local-lis/les=43/44 n=1 ec=75/43 lis/c=43/43 les/c/f=44/44/0 sis=75) [2] r=0 lpr=75 pi=[43,75)/1 crt=70'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:11:58 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 76 pg[10.f( v 70'64 lc 0'0 (0'0,70'64] local-lis/les=43/44 n=0 ec=75/43 lis/c=43/43 les/c/f=44/44/0 sis=75) [2] r=0 lpr=75 pi=[43,75)/1 crt=70'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:11:58 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 76 pg[10.7( v 70'64 lc 0'0 (0'0,70'64] local-lis/les=43/44 n=1 ec=75/43 lis/c=43/43 les/c/f=44/44/0 sis=75) [2] r=0 lpr=75 pi=[43,75)/1 crt=70'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:11:58 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 76 pg[10.9( v 70'64 lc 0'0 (0'0,70'64] local-lis/les=43/44 n=0 ec=75/43 lis/c=43/43 les/c/f=44/44/0 sis=75) [2] r=0 lpr=75 pi=[43,75)/1 crt=70'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:11:58 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 76 pg[11.16( v 70'2 (0'0,70'2] local-lis/les=75/76 n=0 ec=75/45 lis/c=45/45 les/c/f=46/46/0 sis=75) [1] r=0 lpr=75 pi=[45,75)/1 crt=70'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:11:58 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 76 pg[10.c( v 70'64 lc 0'0 (0'0,70'64] local-lis/les=43/44 n=0 ec=75/43 lis/c=43/43 les/c/f=44/44/0 sis=75) [2] r=0 lpr=75 pi=[43,75)/1 crt=70'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:11:58 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 76 pg[10.e( v 70'64 lc 0'0 (0'0,70'64] local-lis/les=43/44 n=0 ec=75/43 lis/c=43/43 les/c/f=44/44/0 sis=75) [2] r=0 lpr=75 pi=[43,75)/1 crt=70'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:11:58 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 76 pg[10.1( v 70'64 (0'0,70'64] local-lis/les=43/44 n=1 ec=75/43 lis/c=43/43 les/c/f=44/44/0 sis=75) [2] r=0 lpr=75 pi=[43,75)/1 crt=70'64 lcod 0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:11:58 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 76 pg[10.2( v 70'64 lc 0'0 (0'0,70'64] local-lis/les=43/44 n=1 ec=75/43 lis/c=43/43 les/c/f=44/44/0 sis=75) [2] r=0 lpr=75 pi=[43,75)/1 crt=70'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:11:58 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 76 pg[10.3( v 70'64 lc 0'0 (0'0,70'64] local-lis/les=43/44 n=1 ec=75/43 lis/c=43/43 les/c/f=44/44/0 sis=75) [2] r=0 lpr=75 pi=[43,75)/1 crt=70'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:11:58 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 76 pg[10.14( v 70'64 lc 0'0 (0'0,70'64] local-lis/les=43/44 n=0 ec=75/43 lis/c=43/43 les/c/f=44/44/0 sis=75) [2] r=0 lpr=75 pi=[43,75)/1 crt=70'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:11:58 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 76 pg[10.15( v 70'64 lc 0'0 (0'0,70'64] local-lis/les=43/44 n=0 ec=75/43 lis/c=43/43 les/c/f=44/44/0 sis=75) [2] r=0 lpr=75 pi=[43,75)/1 crt=70'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:11:58 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 76 pg[10.16( v 70'64 lc 0'0 (0'0,70'64] local-lis/les=43/44 n=0 ec=75/43 lis/c=43/43 les/c/f=44/44/0 sis=75) [2] r=0 lpr=75 pi=[43,75)/1 crt=70'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:11:58 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 76 pg[10.17( v 70'64 lc 0'0 (0'0,70'64] local-lis/les=43/44 n=0 ec=75/43 lis/c=43/43 les/c/f=44/44/0 sis=75) [2] r=0 lpr=75 pi=[43,75)/1 crt=70'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:11:58 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 76 pg[10.19( v 70'64 (0'0,70'64] local-lis/les=75/76 n=0 ec=75/43 lis/c=43/43 les/c/f=44/44/0 sis=75) [2] r=0 lpr=75 pi=[43,75)/1 crt=70'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:11:58 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 76 pg[10.1b( v 70'64 (0'0,70'64] local-lis/les=75/76 n=0 ec=75/43 lis/c=43/43 les/c/f=44/44/0 sis=75) [2] r=0 lpr=75 pi=[43,75)/1 crt=70'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:11:58 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 76 pg[10.1e( v 70'64 (0'0,70'64] local-lis/les=75/76 n=0 ec=75/43 lis/c=43/43 les/c/f=44/44/0 sis=75) [2] r=0 lpr=75 pi=[43,75)/1 crt=70'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:11:58 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 76 pg[11.14( v 70'2 (0'0,70'2] local-lis/les=75/76 n=0 ec=75/45 lis/c=45/45 les/c/f=46/46/0 sis=75) [1] r=0 lpr=75 pi=[45,75)/1 crt=70'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:11:58 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 76 pg[11.2( v 70'2 (0'0,70'2] local-lis/les=75/76 n=1 ec=75/45 lis/c=45/45 les/c/f=46/46/0 sis=75) [1] r=0 lpr=75 pi=[45,75)/1 crt=70'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:11:58 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 76 pg[11.13( v 70'2 (0'0,70'2] local-lis/les=75/76 n=0 ec=75/45 lis/c=45/45 les/c/f=46/46/0 sis=75) [1] r=0 lpr=75 pi=[45,75)/1 crt=70'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:11:58 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 76 pg[11.17( v 70'2 (0'0,70'2] local-lis/les=75/76 n=0 ec=75/45 lis/c=45/45 les/c/f=46/46/0 sis=75) [1] r=0 lpr=75 pi=[45,75)/1 crt=70'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:11:58 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 76 pg[11.1( v 70'2 (0'0,70'2] local-lis/les=75/76 n=1 ec=75/45 lis/c=45/45 les/c/f=46/46/0 sis=75) [1] r=0 lpr=75 pi=[45,75)/1 crt=70'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:11:58 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 76 pg[11.0( v 70'2 (0'0,70'2] local-lis/les=75/76 n=0 ec=45/45 lis/c=45/45 les/c/f=46/46/0 sis=75) [1] r=0 lpr=75 pi=[45,75)/1 crt=70'2 lcod 70'1 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:11:58 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 76 pg[11.f( v 70'2 (0'0,70'2] local-lis/les=75/76 n=0 ec=75/45 lis/c=45/45 les/c/f=46/46/0 sis=75) [1] r=0 lpr=75 pi=[45,75)/1 crt=70'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:11:58 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 76 pg[11.e( v 70'2 (0'0,70'2] local-lis/les=75/76 n=0 ec=75/45 lis/c=45/45 les/c/f=46/46/0 sis=75) [1] r=0 lpr=75 pi=[45,75)/1 crt=70'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:11:58 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 76 pg[11.b( v 70'2 (0'0,70'2] local-lis/les=75/76 n=0 ec=75/45 lis/c=45/45 les/c/f=46/46/0 sis=75) [1] r=0 lpr=75 pi=[45,75)/1 crt=70'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:11:58 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 76 pg[11.c( v 70'2 (0'0,70'2] local-lis/les=75/76 n=0 ec=75/45 lis/c=45/45 les/c/f=46/46/0 sis=75) [1] r=0 lpr=75 pi=[45,75)/1 crt=70'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:11:58 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 76 pg[11.a( v 70'2 (0'0,70'2] local-lis/les=75/76 n=0 ec=75/45 lis/c=45/45 les/c/f=46/46/0 sis=75) [1] r=0 lpr=75 pi=[45,75)/1 crt=70'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:11:58 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 76 pg[11.8( v 70'2 (0'0,70'2] local-lis/les=75/76 n=0 ec=75/45 lis/c=45/45 les/c/f=46/46/0 sis=75) [1] r=0 lpr=75 pi=[45,75)/1 crt=70'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:11:58 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 76 pg[11.d( v 70'2 (0'0,70'2] local-lis/les=75/76 n=0 ec=75/45 lis/c=45/45 les/c/f=46/46/0 sis=75) [1] r=0 lpr=75 pi=[45,75)/1 crt=70'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:11:58 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 76 pg[11.3( v 70'2 (0'0,70'2] local-lis/les=75/76 n=0 ec=75/45 lis/c=45/45 les/c/f=46/46/0 sis=75) [1] r=0 lpr=75 pi=[45,75)/1 crt=70'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:11:59 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 76 pg[11.5( v 70'2 (0'0,70'2] local-lis/les=75/76 n=0 ec=75/45 lis/c=45/45 les/c/f=46/46/0 sis=75) [1] r=0 lpr=75 pi=[45,75)/1 crt=70'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:11:59 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 76 pg[11.4( v 70'2 (0'0,70'2] local-lis/les=75/76 n=0 ec=75/45 lis/c=45/45 les/c/f=46/46/0 sis=75) [1] r=0 lpr=75 pi=[45,75)/1 crt=70'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:11:59 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 76 pg[11.18( v 70'2 (0'0,70'2] local-lis/les=75/76 n=0 ec=75/45 lis/c=45/45 les/c/f=46/46/0 sis=75) [1] r=0 lpr=75 pi=[45,75)/1 crt=70'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:11:59 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 76 pg[11.6( v 70'2 (0'0,70'2] local-lis/les=75/76 n=0 ec=75/45 lis/c=45/45 les/c/f=46/46/0 sis=75) [1] r=0 lpr=75 pi=[45,75)/1 crt=70'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:11:59 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 76 pg[11.15( v 70'2 (0'0,70'2] local-lis/les=75/76 n=0 ec=75/45 lis/c=45/45 les/c/f=46/46/0 sis=75) [1] r=0 lpr=75 pi=[45,75)/1 crt=70'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:11:59 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 76 pg[11.1b( v 70'2 (0'0,70'2] local-lis/les=75/76 n=0 ec=75/45 lis/c=45/45 les/c/f=46/46/0 sis=75) [1] r=0 lpr=75 pi=[45,75)/1 crt=70'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:11:59 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 76 pg[11.1c( v 70'2 (0'0,70'2] local-lis/les=75/76 n=0 ec=75/45 lis/c=45/45 les/c/f=46/46/0 sis=75) [1] r=0 lpr=75 pi=[45,75)/1 crt=70'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:11:59 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 76 pg[11.1a( v 70'2 (0'0,70'2] local-lis/les=75/76 n=0 ec=75/45 lis/c=45/45 les/c/f=46/46/0 sis=75) [1] r=0 lpr=75 pi=[45,75)/1 crt=70'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:11:59 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 76 pg[11.1d( v 70'2 (0'0,70'2] local-lis/les=75/76 n=0 ec=75/45 lis/c=45/45 les/c/f=46/46/0 sis=75) [1] r=0 lpr=75 pi=[45,75)/1 crt=70'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:11:59 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 76 pg[11.7( v 70'2 (0'0,70'2] local-lis/les=75/76 n=0 ec=75/45 lis/c=45/45 les/c/f=46/46/0 sis=75) [1] r=0 lpr=75 pi=[45,75)/1 crt=70'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:11:59 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 76 pg[11.1f( v 70'2 (0'0,70'2] local-lis/les=75/76 n=0 ec=75/45 lis/c=45/45 les/c/f=46/46/0 sis=75) [1] r=0 lpr=75 pi=[45,75)/1 crt=70'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:11:59 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 76 pg[11.1e( v 70'2 (0'0,70'2] local-lis/les=75/76 n=0 ec=75/45 lis/c=45/45 les/c/f=46/46/0 sis=75) [1] r=0 lpr=75 pi=[45,75)/1 crt=70'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:11:59 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 76 pg[11.12( v 70'2 (0'0,70'2] local-lis/les=75/76 n=0 ec=75/45 lis/c=45/45 les/c/f=46/46/0 sis=75) [1] r=0 lpr=75 pi=[45,75)/1 crt=70'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:11:59 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 76 pg[11.11( v 70'2 (0'0,70'2] local-lis/les=75/76 n=0 ec=75/45 lis/c=45/45 les/c/f=46/46/0 sis=75) [1] r=0 lpr=75 pi=[45,75)/1 crt=70'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:11:59 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 76 pg[11.9( v 70'2 (0'0,70'2] local-lis/les=75/76 n=0 ec=75/45 lis/c=45/45 les/c/f=46/46/0 sis=75) [1] r=0 lpr=75 pi=[45,75)/1 crt=70'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:11:59 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 76 pg[11.10( v 70'2 (0'0,70'2] local-lis/les=75/76 n=0 ec=75/45 lis/c=45/45 les/c/f=46/46/0 sis=75) [1] r=0 lpr=75 pi=[45,75)/1 crt=70'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:11:59 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 76 pg[11.19( v 70'2 (0'0,70'2] local-lis/les=75/76 n=0 ec=75/45 lis/c=45/45 les/c/f=46/46/0 sis=75) [1] r=0 lpr=75 pi=[45,75)/1 crt=70'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:11:59 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 76 pg[10.a( v 70'64 (0'0,70'64] local-lis/les=75/76 n=0 ec=75/43 lis/c=43/43 les/c/f=44/44/0 sis=75) [2] r=0 lpr=75 pi=[43,75)/1 crt=70'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:11:59 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 76 pg[10.11( v 70'64 (0'0,70'64] local-lis/les=75/76 n=0 ec=75/43 lis/c=43/43 les/c/f=44/44/0 sis=75) [2] r=0 lpr=75 pi=[43,75)/1 crt=70'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:11:59 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 76 pg[10.12( v 70'64 (0'0,70'64] local-lis/les=75/76 n=0 ec=75/43 lis/c=43/43 les/c/f=44/44/0 sis=75) [2] r=0 lpr=75 pi=[43,75)/1 crt=70'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:11:59 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 76 pg[10.d( v 70'64 (0'0,70'64] local-lis/les=75/76 n=0 ec=75/43 lis/c=43/43 les/c/f=44/44/0 sis=75) [2] r=0 lpr=75 pi=[43,75)/1 crt=70'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:11:59 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 76 pg[10.b( v 70'64 (0'0,70'64] local-lis/les=75/76 n=0 ec=75/43 lis/c=43/43 les/c/f=44/44/0 sis=75) [2] r=0 lpr=75 pi=[43,75)/1 crt=70'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:11:59 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 76 pg[10.13( v 70'64 (0'0,70'64] local-lis/les=75/76 n=0 ec=75/43 lis/c=43/43 les/c/f=44/44/0 sis=75) [2] r=0 lpr=75 pi=[43,75)/1 crt=70'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:11:59 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 76 pg[10.10( v 70'64 (0'0,70'64] local-lis/les=75/76 n=0 ec=75/43 lis/c=43/43 les/c/f=44/44/0 sis=75) [2] r=0 lpr=75 pi=[43,75)/1 crt=70'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:11:59 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 76 pg[10.1f( v 70'64 (0'0,70'64] local-lis/les=75/76 n=0 ec=75/43 lis/c=43/43 les/c/f=44/44/0 sis=75) [2] r=0 lpr=75 pi=[43,75)/1 crt=70'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:11:59 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 76 pg[10.1a( v 70'64 (0'0,70'64] local-lis/les=75/76 n=0 ec=75/43 lis/c=43/43 les/c/f=44/44/0 sis=75) [2] r=0 lpr=75 pi=[43,75)/1 crt=70'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:11:59 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 76 pg[10.1c( v 70'64 (0'0,70'64] local-lis/les=75/76 n=0 ec=75/43 lis/c=43/43 les/c/f=44/44/0 sis=75) [2] r=0 lpr=75 pi=[43,75)/1 crt=70'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:11:59 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 76 pg[10.18( v 70'64 (0'0,70'64] local-lis/les=75/76 n=0 ec=75/43 lis/c=43/43 les/c/f=44/44/0 sis=75) [2] r=0 lpr=75 pi=[43,75)/1 crt=70'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:11:59 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 76 pg[10.1d( v 70'64 (0'0,70'64] local-lis/les=75/76 n=0 ec=75/43 lis/c=43/43 les/c/f=44/44/0 sis=75) [2] r=0 lpr=75 pi=[43,75)/1 crt=70'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:11:59 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 76 pg[10.6( v 70'64 (0'0,70'64] local-lis/les=75/76 n=1 ec=75/43 lis/c=43/43 les/c/f=44/44/0 sis=75) [2] r=0 lpr=75 pi=[43,75)/1 crt=70'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:11:59 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 76 pg[10.5( v 70'64 (0'0,70'64] local-lis/les=75/76 n=1 ec=75/43 lis/c=43/43 les/c/f=44/44/0 sis=75) [2] r=0 lpr=75 pi=[43,75)/1 crt=70'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:11:59 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 76 pg[10.8( v 70'64 (0'0,70'64] local-lis/les=75/76 n=1 ec=75/43 lis/c=43/43 les/c/f=44/44/0 sis=75) [2] r=0 lpr=75 pi=[43,75)/1 crt=70'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:11:59 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 76 pg[10.7( v 70'64 (0'0,70'64] local-lis/les=75/76 n=1 ec=75/43 lis/c=43/43 les/c/f=44/44/0 sis=75) [2] r=0 lpr=75 pi=[43,75)/1 crt=70'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:11:59 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 76 pg[10.c( v 70'64 (0'0,70'64] local-lis/les=75/76 n=0 ec=75/43 lis/c=43/43 les/c/f=44/44/0 sis=75) [2] r=0 lpr=75 pi=[43,75)/1 crt=70'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:11:59 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 76 pg[10.0( v 70'64 (0'0,70'64] local-lis/les=75/76 n=0 ec=43/43 lis/c=43/43 les/c/f=44/44/0 sis=75) [2] r=0 lpr=75 pi=[43,75)/1 crt=70'64 lcod 70'63 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:11:59 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 76 pg[10.e( v 70'64 (0'0,70'64] local-lis/les=75/76 n=0 ec=75/43 lis/c=43/43 les/c/f=44/44/0 sis=75) [2] r=0 lpr=75 pi=[43,75)/1 crt=70'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:11:59 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 76 pg[10.1( v 70'64 (0'0,70'64] local-lis/les=75/76 n=1 ec=75/43 lis/c=43/43 les/c/f=44/44/0 sis=75) [2] r=0 lpr=75 pi=[43,75)/1 crt=70'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:11:59 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 76 pg[10.2( v 70'64 (0'0,70'64] local-lis/les=75/76 n=1 ec=75/43 lis/c=43/43 les/c/f=44/44/0 sis=75) [2] r=0 lpr=75 pi=[43,75)/1 crt=70'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:11:59 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 76 pg[10.3( v 70'64 (0'0,70'64] local-lis/les=75/76 n=1 ec=75/43 lis/c=43/43 les/c/f=44/44/0 sis=75) [2] r=0 lpr=75 pi=[43,75)/1 crt=70'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:11:59 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 76 pg[10.14( v 70'64 (0'0,70'64] local-lis/les=75/76 n=0 ec=75/43 lis/c=43/43 les/c/f=44/44/0 sis=75) [2] r=0 lpr=75 pi=[43,75)/1 crt=70'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:11:59 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 76 pg[10.16( v 70'64 (0'0,70'64] local-lis/les=75/76 n=0 ec=75/43 lis/c=43/43 les/c/f=44/44/0 sis=75) [2] r=0 lpr=75 pi=[43,75)/1 crt=70'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:11:59 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 76 pg[10.15( v 70'64 (0'0,70'64] local-lis/les=75/76 n=0 ec=75/43 lis/c=43/43 les/c/f=44/44/0 sis=75) [2] r=0 lpr=75 pi=[43,75)/1 crt=70'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:11:59 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 76 pg[10.17( v 70'64 (0'0,70'64] local-lis/les=75/76 n=0 ec=75/43 lis/c=43/43 les/c/f=44/44/0 sis=75) [2] r=0 lpr=75 pi=[43,75)/1 crt=70'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:11:59 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 76 pg[10.4( v 70'64 (0'0,70'64] local-lis/les=75/76 n=1 ec=75/43 lis/c=43/43 les/c/f=44/44/0 sis=75) [2] r=0 lpr=75 pi=[43,75)/1 crt=70'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:11:59 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 76 pg[10.9( v 70'64 (0'0,70'64] local-lis/les=75/76 n=0 ec=75/43 lis/c=43/43 les/c/f=44/44/0 sis=75) [2] r=0 lpr=75 pi=[43,75)/1 crt=70'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:11:59 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 76 pg[10.f( v 70'64 (0'0,70'64] local-lis/les=75/76 n=0 ec=75/43 lis/c=43/43 les/c/f=44/44/0 sis=75) [2] r=0 lpr=75 pi=[43,75)/1 crt=70'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:11:59 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v163: 305 pgs: 124 unknown, 181 active+clean; 456 KiB data, 86 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:12:00 np0005464214 ceph-osd[89484]: log_channel(cluster) log [DBG] : 7.1e scrub starts
Oct  1 09:12:00 np0005464214 ceph-osd[89484]: log_channel(cluster) log [DBG] : 7.1e scrub ok
Oct  1 09:12:00 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e76 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:12:00 np0005464214 ceph-osd[88455]: log_channel(cluster) log [DBG] : 5.5 scrub starts
Oct  1 09:12:00 np0005464214 ceph-osd[88455]: log_channel(cluster) log [DBG] : 5.5 scrub ok
Oct  1 09:12:01 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v164: 305 pgs: 124 unknown, 181 active+clean; 456 KiB data, 86 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:12:01 np0005464214 ceph-osd[90500]: log_channel(cluster) log [DBG] : 5.17 scrub starts
Oct  1 09:12:01 np0005464214 ceph-osd[90500]: log_channel(cluster) log [DBG] : 5.17 scrub ok
Oct  1 09:12:02 np0005464214 ceph-osd[89484]: log_channel(cluster) log [DBG] : 5.1d deep-scrub starts
Oct  1 09:12:02 np0005464214 ceph-osd[89484]: log_channel(cluster) log [DBG] : 5.1d deep-scrub ok
Oct  1 09:12:02 np0005464214 ceph-osd[88455]: log_channel(cluster) log [DBG] : 5.7 scrub starts
Oct  1 09:12:02 np0005464214 ceph-osd[88455]: log_channel(cluster) log [DBG] : 5.7 scrub ok
Oct  1 09:12:03 np0005464214 ceph-osd[89484]: log_channel(cluster) log [DBG] : 2.1b scrub starts
Oct  1 09:12:03 np0005464214 ceph-osd[89484]: log_channel(cluster) log [DBG] : 2.1b scrub ok
Oct  1 09:12:03 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v165: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:12:03 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"} v 0) v1
Oct  1 09:12:03 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct  1 09:12:03 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"} v 0) v1
Oct  1 09:12:03 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct  1 09:12:03 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"} v 0) v1
Oct  1 09:12:03 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]: dispatch
Oct  1 09:12:03 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"} v 0) v1
Oct  1 09:12:03 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct  1 09:12:04 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e76 do_prune osdmap full prune enabled
Oct  1 09:12:04 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct  1 09:12:04 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct  1 09:12:04 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]: dispatch
Oct  1 09:12:04 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct  1 09:12:04 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]': finished
Oct  1 09:12:04 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]': finished
Oct  1 09:12:04 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]': finished
Oct  1 09:12:04 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Oct  1 09:12:04 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e77 e77: 3 total, 3 up, 3 in
Oct  1 09:12:04 np0005464214 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e77: 3 total, 3 up, 3 in
Oct  1 09:12:04 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 77 pg[11.17( v 70'2 (0'0,70'2] local-lis/les=75/76 n=0 ec=75/45 lis/c=75/75 les/c/f=76/76/0 sis=77 pruub=10.968307495s) [0] r=-1 lpr=77 pi=[75,77)/1 crt=70'2 lcod 0'0 mlcod 0'0 active pruub 134.150558472s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:12:04 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 77 pg[11.17( v 70'2 (0'0,70'2] local-lis/les=75/76 n=0 ec=75/45 lis/c=75/75 les/c/f=76/76/0 sis=77 pruub=10.968185425s) [0] r=-1 lpr=77 pi=[75,77)/1 crt=70'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 134.150558472s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 09:12:04 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 77 pg[8.15( v 40'4 (0'0,40'4] local-lis/les=73/75 n=0 ec=73/39 lis/c=73/73 les/c/f=75/75/0 sis=77 pruub=9.951469421s) [2] r=-1 lpr=77 pi=[73,77)/1 crt=40'4 lcod 0'0 mlcod 0'0 active pruub 133.133880615s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:12:04 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 77 pg[9.15( v 70'389 (0'0,70'389] local-lis/les=73/75 n=5 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=77 pruub=9.951457024s) [0] r=-1 lpr=77 pi=[73,77)/1 crt=70'389 lcod 0'0 mlcod 0'0 active pruub 133.133880615s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:12:04 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 77 pg[8.15( v 40'4 (0'0,40'4] local-lis/les=73/75 n=0 ec=73/39 lis/c=73/73 les/c/f=75/75/0 sis=77 pruub=9.951417923s) [2] r=-1 lpr=77 pi=[73,77)/1 crt=40'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 133.133880615s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 09:12:04 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 77 pg[9.15( v 70'389 (0'0,70'389] local-lis/les=73/75 n=5 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=77 pruub=9.951336861s) [0] r=-1 lpr=77 pi=[73,77)/1 crt=70'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 133.133880615s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 09:12:04 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 77 pg[11.15( v 70'2 (0'0,70'2] local-lis/les=75/76 n=0 ec=75/45 lis/c=75/75 les/c/f=76/76/0 sis=77 pruub=10.968358994s) [2] r=-1 lpr=77 pi=[75,77)/1 crt=70'2 lcod 0'0 mlcod 0'0 active pruub 134.151153564s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:12:04 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 77 pg[8.14( v 40'4 (0'0,40'4] local-lis/les=73/75 n=0 ec=73/39 lis/c=73/73 les/c/f=75/75/0 sis=77 pruub=9.943604469s) [0] r=-1 lpr=77 pi=[73,77)/1 crt=40'4 lcod 0'0 mlcod 0'0 active pruub 133.126419067s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:12:04 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 77 pg[11.15( v 70'2 (0'0,70'2] local-lis/les=75/76 n=0 ec=75/45 lis/c=75/75 les/c/f=76/76/0 sis=77 pruub=10.968303680s) [2] r=-1 lpr=77 pi=[75,77)/1 crt=70'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 134.151153564s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 09:12:04 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 77 pg[8.14( v 40'4 (0'0,40'4] local-lis/les=73/75 n=0 ec=73/39 lis/c=73/73 les/c/f=75/75/0 sis=77 pruub=9.943548203s) [0] r=-1 lpr=77 pi=[73,77)/1 crt=40'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 133.126419067s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 09:12:04 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 77 pg[9.17( v 70'389 (0'0,70'389] local-lis/les=73/75 n=5 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=77 pruub=9.950979233s) [0] r=-1 lpr=77 pi=[73,77)/1 crt=70'389 lcod 0'0 mlcod 0'0 active pruub 133.133895874s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:12:04 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 77 pg[9.17( v 70'389 (0'0,70'389] local-lis/les=73/75 n=5 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=77 pruub=9.950869560s) [0] r=-1 lpr=77 pi=[73,77)/1 crt=70'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 133.133895874s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 09:12:04 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 77 pg[11.14( v 70'2 (0'0,70'2] local-lis/les=75/76 n=0 ec=75/45 lis/c=75/75 les/c/f=76/76/0 sis=77 pruub=10.967068672s) [0] r=-1 lpr=77 pi=[75,77)/1 crt=70'2 lcod 0'0 mlcod 0'0 active pruub 134.150268555s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:12:04 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 77 pg[11.14( v 70'2 (0'0,70'2] local-lis/les=75/76 n=0 ec=75/45 lis/c=75/75 les/c/f=76/76/0 sis=77 pruub=10.967015266s) [0] r=-1 lpr=77 pi=[75,77)/1 crt=70'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 134.150268555s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 09:12:04 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 77 pg[9.11( v 70'389 (0'0,70'389] local-lis/les=73/75 n=6 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=77 pruub=9.951047897s) [0] r=-1 lpr=77 pi=[73,77)/1 crt=70'389 lcod 0'0 mlcod 0'0 active pruub 133.134445190s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:12:04 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 77 pg[9.11( v 70'389 (0'0,70'389] local-lis/les=73/75 n=6 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=77 pruub=9.951011658s) [0] r=-1 lpr=77 pi=[73,77)/1 crt=70'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 133.134445190s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 09:12:04 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 77 pg[11.2( v 70'2 (0'0,70'2] local-lis/les=75/76 n=1 ec=75/45 lis/c=75/75 les/c/f=76/76/0 sis=77 pruub=10.966702461s) [2] r=-1 lpr=77 pi=[75,77)/1 crt=70'2 lcod 0'0 mlcod 0'0 active pruub 134.150283813s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:12:04 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 77 pg[11.2( v 70'2 (0'0,70'2] local-lis/les=75/76 n=1 ec=75/45 lis/c=75/75 les/c/f=76/76/0 sis=77 pruub=10.966644287s) [2] r=-1 lpr=77 pi=[75,77)/1 crt=70'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 134.150283813s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 09:12:04 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 77 pg[11.1( v 70'2 (0'0,70'2] local-lis/les=75/76 n=1 ec=75/45 lis/c=75/75 les/c/f=76/76/0 sis=77 pruub=10.966640472s) [0] r=-1 lpr=77 pi=[75,77)/1 crt=70'2 lcod 0'0 mlcod 0'0 active pruub 134.150650024s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:12:04 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 77 pg[8.2( v 40'4 (0'0,40'4] local-lis/les=73/75 n=1 ec=73/39 lis/c=73/73 les/c/f=75/75/0 sis=77 pruub=9.950768471s) [2] r=-1 lpr=77 pi=[73,77)/1 crt=40'4 lcod 0'0 mlcod 0'0 active pruub 133.134841919s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:12:04 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 77 pg[11.1( v 70'2 (0'0,70'2] local-lis/les=75/76 n=1 ec=75/45 lis/c=75/75 les/c/f=76/76/0 sis=77 pruub=10.966590881s) [0] r=-1 lpr=77 pi=[75,77)/1 crt=70'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 134.150650024s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 09:12:04 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 77 pg[8.2( v 40'4 (0'0,40'4] local-lis/les=73/75 n=1 ec=73/39 lis/c=73/73 les/c/f=75/75/0 sis=77 pruub=9.950712204s) [2] r=-1 lpr=77 pi=[73,77)/1 crt=40'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 133.134841919s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 09:12:04 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 77 pg[9.3( v 70'389 (0'0,70'389] local-lis/les=73/75 n=6 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=77 pruub=9.950326920s) [0] r=-1 lpr=77 pi=[73,77)/1 crt=70'389 lcod 0'0 mlcod 0'0 active pruub 133.134735107s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:12:04 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 77 pg[9.3( v 70'389 (0'0,70'389] local-lis/les=73/75 n=6 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=77 pruub=9.950283051s) [0] r=-1 lpr=77 pi=[73,77)/1 crt=70'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 133.134735107s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 09:12:04 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 77 pg[11.f( v 70'2 (0'0,70'2] local-lis/les=75/76 n=0 ec=75/45 lis/c=75/75 les/c/f=76/76/0 sis=77 pruub=10.965979576s) [0] r=-1 lpr=77 pi=[75,77)/1 crt=70'2 lcod 0'0 mlcod 0'0 active pruub 134.150711060s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:12:04 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 77 pg[11.f( v 70'2 (0'0,70'2] local-lis/les=75/76 n=0 ec=75/45 lis/c=75/75 les/c/f=76/76/0 sis=77 pruub=10.965918541s) [0] r=-1 lpr=77 pi=[75,77)/1 crt=70'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 134.150711060s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 09:12:04 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 77 pg[8.c( v 40'4 (0'0,40'4] local-lis/les=73/75 n=0 ec=73/39 lis/c=73/73 les/c/f=75/75/0 sis=77 pruub=9.950703621s) [0] r=-1 lpr=77 pi=[73,77)/1 crt=40'4 lcod 0'0 mlcod 0'0 active pruub 133.135711670s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:12:04 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 77 pg[8.c( v 40'4 (0'0,40'4] local-lis/les=73/75 n=0 ec=73/39 lis/c=73/73 les/c/f=75/75/0 sis=77 pruub=9.950655937s) [0] r=-1 lpr=77 pi=[73,77)/1 crt=40'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 133.135711670s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 09:12:04 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 77 pg[9.d( v 70'389 (0'0,70'389] local-lis/les=73/75 n=6 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=77 pruub=9.949704170s) [0] r=-1 lpr=77 pi=[73,77)/1 crt=70'389 lcod 0'0 mlcod 0'0 active pruub 133.134857178s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:12:04 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 77 pg[9.d( v 70'389 (0'0,70'389] local-lis/les=73/75 n=6 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=77 pruub=9.949651718s) [0] r=-1 lpr=77 pi=[73,77)/1 crt=70'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 133.134857178s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 09:12:04 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 77 pg[11.e( v 70'2 (0'0,70'2] local-lis/les=75/76 n=0 ec=75/45 lis/c=75/75 les/c/f=76/76/0 sis=77 pruub=10.965412140s) [0] r=-1 lpr=77 pi=[75,77)/1 crt=70'2 lcod 0'0 mlcod 0'0 active pruub 134.150772095s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:12:04 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 77 pg[11.e( v 70'2 (0'0,70'2] local-lis/les=75/76 n=0 ec=75/45 lis/c=75/75 les/c/f=76/76/0 sis=77 pruub=10.965373993s) [0] r=-1 lpr=77 pi=[75,77)/1 crt=70'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 134.150772095s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 09:12:04 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 77 pg[8.d( v 40'4 (0'0,40'4] local-lis/les=73/75 n=0 ec=73/39 lis/c=73/73 les/c/f=75/75/0 sis=77 pruub=9.949386597s) [2] r=-1 lpr=77 pi=[73,77)/1 crt=40'4 lcod 0'0 mlcod 0'0 active pruub 133.134887695s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:12:04 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 77 pg[8.d( v 40'4 (0'0,40'4] local-lis/les=73/75 n=0 ec=73/39 lis/c=73/73 les/c/f=75/75/0 sis=77 pruub=9.949323654s) [2] r=-1 lpr=77 pi=[73,77)/1 crt=40'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 133.134887695s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 09:12:04 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 77 pg[11.d( v 70'2 (0'0,70'2] local-lis/les=75/76 n=0 ec=75/45 lis/c=75/75 les/c/f=76/76/0 sis=77 pruub=10.964961052s) [2] r=-1 lpr=77 pi=[75,77)/1 crt=70'2 lcod 0'0 mlcod 0'0 active pruub 134.150726318s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:12:04 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 77 pg[11.d( v 70'2 (0'0,70'2] local-lis/les=75/76 n=0 ec=75/45 lis/c=75/75 les/c/f=76/76/0 sis=77 pruub=10.964921951s) [2] r=-1 lpr=77 pi=[75,77)/1 crt=70'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 134.150726318s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 09:12:04 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 77 pg[9.f( v 70'389 (0'0,70'389] local-lis/les=73/75 n=6 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=77 pruub=9.949278831s) [0] r=-1 lpr=77 pi=[73,77)/1 crt=70'389 lcod 0'0 mlcod 0'0 active pruub 133.135162354s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:12:04 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 77 pg[9.f( v 70'389 (0'0,70'389] local-lis/les=73/75 n=6 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=77 pruub=9.949225426s) [0] r=-1 lpr=77 pi=[73,77)/1 crt=70'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 133.135162354s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 09:12:04 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 77 pg[11.b( v 70'2 (0'0,70'2] local-lis/les=75/76 n=0 ec=75/45 lis/c=75/75 les/c/f=76/76/0 sis=77 pruub=10.964778900s) [2] r=-1 lpr=77 pi=[75,77)/1 crt=70'2 lcod 0'0 mlcod 0'0 active pruub 134.150802612s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:12:04 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 77 pg[11.b( v 70'2 (0'0,70'2] local-lis/les=75/76 n=0 ec=75/45 lis/c=75/75 les/c/f=76/76/0 sis=77 pruub=10.964682579s) [2] r=-1 lpr=77 pi=[75,77)/1 crt=70'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 134.150802612s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 09:12:04 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 77 pg[9.9( v 70'389 (0'0,70'389] local-lis/les=73/75 n=6 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=77 pruub=9.948793411s) [0] r=-1 lpr=77 pi=[73,77)/1 crt=70'389 lcod 0'0 mlcod 0'0 active pruub 133.135162354s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:12:04 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 77 pg[9.9( v 70'389 (0'0,70'389] local-lis/les=73/75 n=6 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=77 pruub=9.948756218s) [0] r=-1 lpr=77 pi=[73,77)/1 crt=70'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 133.135162354s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 09:12:04 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 77 pg[8.10( v 40'4 (0'0,40'4] local-lis/les=73/75 n=0 ec=73/39 lis/c=73/73 les/c/f=75/75/0 sis=77 pruub=9.947757721s) [0] r=-1 lpr=77 pi=[73,77)/1 crt=40'4 lcod 0'0 mlcod 0'0 active pruub 133.134475708s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:12:04 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 77 pg[8.10( v 40'4 (0'0,40'4] local-lis/les=73/75 n=0 ec=73/39 lis/c=73/73 les/c/f=75/75/0 sis=77 pruub=9.947700500s) [0] r=-1 lpr=77 pi=[73,77)/1 crt=40'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 133.134475708s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 09:12:04 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 77 pg[8.f( v 40'4 (0'0,40'4] local-lis/les=73/75 n=0 ec=73/39 lis/c=73/73 les/c/f=75/75/0 sis=77 pruub=9.948555946s) [0] r=-1 lpr=77 pi=[73,77)/1 crt=40'4 lcod 0'0 mlcod 0'0 active pruub 133.135604858s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:12:04 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 77 pg[8.e( v 40'4 (0'0,40'4] local-lis/les=73/75 n=0 ec=73/39 lis/c=73/73 les/c/f=75/75/0 sis=77 pruub=9.947916031s) [0] r=-1 lpr=77 pi=[73,77)/1 crt=40'4 lcod 0'0 mlcod 0'0 active pruub 133.135025024s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:12:04 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 77 pg[11.8( v 70'2 (0'0,70'2] local-lis/les=75/76 n=0 ec=75/45 lis/c=75/75 les/c/f=76/76/0 sis=77 pruub=10.963786125s) [2] r=-1 lpr=77 pi=[75,77)/1 crt=70'2 lcod 0'0 mlcod 0'0 active pruub 134.150909424s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:12:04 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 77 pg[8.e( v 40'4 (0'0,40'4] local-lis/les=73/75 n=0 ec=73/39 lis/c=73/73 les/c/f=75/75/0 sis=77 pruub=9.947872162s) [0] r=-1 lpr=77 pi=[73,77)/1 crt=40'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 133.135025024s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 09:12:04 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 77 pg[8.f( v 40'4 (0'0,40'4] local-lis/les=73/75 n=0 ec=73/39 lis/c=73/73 les/c/f=75/75/0 sis=77 pruub=9.948415756s) [0] r=-1 lpr=77 pi=[73,77)/1 crt=40'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 133.135604858s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 09:12:04 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 77 pg[11.8( v 70'2 (0'0,70'2] local-lis/les=75/76 n=0 ec=75/45 lis/c=75/75 les/c/f=76/76/0 sis=77 pruub=10.963736534s) [2] r=-1 lpr=77 pi=[75,77)/1 crt=70'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 134.150909424s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 09:12:04 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 77 pg[8.b( v 40'4 (0'0,40'4] local-lis/les=73/75 n=0 ec=73/39 lis/c=73/73 les/c/f=75/75/0 sis=77 pruub=9.948323250s) [0] r=-1 lpr=77 pi=[73,77)/1 crt=40'4 lcod 0'0 mlcod 0'0 active pruub 133.135681152s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:12:04 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 77 pg[8.b( v 40'4 (0'0,40'4] local-lis/les=73/75 n=0 ec=73/39 lis/c=73/73 les/c/f=75/75/0 sis=77 pruub=9.948291779s) [0] r=-1 lpr=77 pi=[73,77)/1 crt=40'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 133.135681152s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 09:12:04 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 77 pg[8.9( v 40'4 (0'0,40'4] local-lis/les=73/75 n=0 ec=73/39 lis/c=73/73 les/c/f=75/75/0 sis=77 pruub=9.948054314s) [0] r=-1 lpr=77 pi=[73,77)/1 crt=40'4 lcod 0'0 mlcod 0'0 active pruub 133.135711670s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:12:04 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 77 pg[11.3( v 70'2 (0'0,70'2] local-lis/les=75/76 n=0 ec=75/45 lis/c=75/75 les/c/f=76/76/0 sis=77 pruub=10.963291168s) [2] r=-1 lpr=77 pi=[75,77)/1 crt=70'2 lcod 0'0 mlcod 0'0 active pruub 134.151000977s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:12:04 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 77 pg[8.9( v 40'4 (0'0,40'4] local-lis/les=73/75 n=0 ec=73/39 lis/c=73/73 les/c/f=75/75/0 sis=77 pruub=9.948015213s) [0] r=-1 lpr=77 pi=[73,77)/1 crt=40'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 133.135711670s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 09:12:04 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 77 pg[11.3( v 70'2 (0'0,70'2] local-lis/les=75/76 n=0 ec=75/45 lis/c=75/75 les/c/f=76/76/0 sis=77 pruub=10.963233948s) [2] r=-1 lpr=77 pi=[75,77)/1 crt=70'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 134.151000977s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 09:12:04 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 77 pg[11.4( v 70'2 (0'0,70'2] local-lis/les=75/76 n=0 ec=75/45 lis/c=75/75 les/c/f=76/76/0 sis=77 pruub=10.963236809s) [0] r=-1 lpr=77 pi=[75,77)/1 crt=70'2 lcod 0'0 mlcod 0'0 active pruub 134.151077271s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:12:04 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 77 pg[11.4( v 70'2 (0'0,70'2] local-lis/les=75/76 n=0 ec=75/45 lis/c=75/75 les/c/f=76/76/0 sis=77 pruub=10.963199615s) [0] r=-1 lpr=77 pi=[75,77)/1 crt=70'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 134.151077271s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 09:12:04 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 77 pg[9.1( v 70'389 (0'0,70'389] local-lis/les=73/75 n=6 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=77 pruub=9.948027611s) [0] r=-1 lpr=77 pi=[73,77)/1 crt=70'389 lcod 0'0 mlcod 0'0 active pruub 133.136093140s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:12:04 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 77 pg[8.6( v 40'4 (0'0,40'4] local-lis/les=73/75 n=0 ec=73/39 lis/c=73/73 les/c/f=75/75/0 sis=77 pruub=9.948133469s) [0] r=-1 lpr=77 pi=[73,77)/1 crt=40'4 lcod 0'0 mlcod 0'0 active pruub 133.136215210s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:12:04 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 77 pg[9.7( v 70'389 (0'0,70'389] local-lis/les=73/75 n=6 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=77 pruub=9.948172569s) [0] r=-1 lpr=77 pi=[73,77)/1 crt=70'389 lcod 0'0 mlcod 0'0 active pruub 133.136291504s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:12:04 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 77 pg[8.6( v 40'4 (0'0,40'4] local-lis/les=73/75 n=0 ec=73/39 lis/c=73/73 les/c/f=75/75/0 sis=77 pruub=9.948074341s) [0] r=-1 lpr=77 pi=[73,77)/1 crt=40'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 133.136215210s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 09:12:04 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 77 pg[9.7( v 70'389 (0'0,70'389] local-lis/les=73/75 n=6 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=77 pruub=9.948130608s) [0] r=-1 lpr=77 pi=[73,77)/1 crt=70'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 133.136291504s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 09:12:04 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 77 pg[11.6( v 70'2 (0'0,70'2] local-lis/les=75/76 n=0 ec=75/45 lis/c=75/75 les/c/f=76/76/0 sis=77 pruub=10.962885857s) [0] r=-1 lpr=77 pi=[75,77)/1 crt=70'2 lcod 0'0 mlcod 0'0 active pruub 134.151153564s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:12:04 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 77 pg[11.6( v 70'2 (0'0,70'2] local-lis/les=75/76 n=0 ec=75/45 lis/c=75/75 les/c/f=76/76/0 sis=77 pruub=10.962850571s) [0] r=-1 lpr=77 pi=[75,77)/1 crt=70'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 134.151153564s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 09:12:04 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 77 pg[9.5( v 70'389 (0'0,70'389] local-lis/les=73/75 n=6 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=77 pruub=9.947865486s) [0] r=-1 lpr=77 pi=[73,77)/1 crt=70'389 lcod 0'0 mlcod 0'0 active pruub 133.136337280s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:12:04 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 77 pg[8.4( v 40'4 (0'0,40'4] local-lis/les=73/75 n=1 ec=73/39 lis/c=73/73 les/c/f=75/75/0 sis=77 pruub=9.948077202s) [2] r=-1 lpr=77 pi=[73,77)/1 crt=40'4 lcod 0'0 mlcod 0'0 active pruub 133.136581421s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:12:04 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 77 pg[9.5( v 70'389 (0'0,70'389] local-lis/les=73/75 n=6 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=77 pruub=9.947834015s) [0] r=-1 lpr=77 pi=[73,77)/1 crt=70'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 133.136337280s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 09:12:04 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 77 pg[8.4( v 40'4 (0'0,40'4] local-lis/les=73/75 n=1 ec=73/39 lis/c=73/73 les/c/f=75/75/0 sis=77 pruub=9.948032379s) [2] r=-1 lpr=77 pi=[73,77)/1 crt=40'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 133.136581421s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 09:12:04 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 77 pg[8.1b( v 40'4 (0'0,40'4] local-lis/les=73/75 n=0 ec=73/39 lis/c=73/73 les/c/f=75/75/0 sis=77 pruub=9.947632790s) [2] r=-1 lpr=77 pi=[73,77)/1 crt=40'4 lcod 0'0 mlcod 0'0 active pruub 133.136367798s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:12:04 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 77 pg[11.18( v 70'2 (0'0,70'2] local-lis/les=75/76 n=0 ec=75/45 lis/c=75/75 les/c/f=76/76/0 sis=77 pruub=10.962381363s) [2] r=-1 lpr=77 pi=[75,77)/1 crt=70'2 lcod 0'0 mlcod 0'0 active pruub 134.151123047s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:12:04 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 77 pg[8.1b( v 40'4 (0'0,40'4] local-lis/les=73/75 n=0 ec=73/39 lis/c=73/73 les/c/f=75/75/0 sis=77 pruub=9.947601318s) [2] r=-1 lpr=77 pi=[73,77)/1 crt=40'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 133.136367798s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 09:12:04 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 77 pg[11.18( v 70'2 (0'0,70'2] local-lis/les=75/76 n=0 ec=75/45 lis/c=75/75 les/c/f=76/76/0 sis=77 pruub=10.962336540s) [2] r=-1 lpr=77 pi=[75,77)/1 crt=70'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 134.151123047s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 09:12:04 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 77 pg[9.1( v 70'389 (0'0,70'389] local-lis/les=73/75 n=6 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=77 pruub=9.947252274s) [0] r=-1 lpr=77 pi=[73,77)/1 crt=70'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 133.136093140s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 09:12:04 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 77 pg[11.1a( v 70'2 (0'0,70'2] local-lis/les=75/76 n=0 ec=75/45 lis/c=75/75 les/c/f=76/76/0 sis=77 pruub=10.962396622s) [2] r=-1 lpr=77 pi=[75,77)/1 crt=70'2 lcod 0'0 mlcod 0'0 active pruub 134.151229858s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:12:04 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 77 pg[11.1b( v 70'2 (0'0,70'2] local-lis/les=75/76 n=0 ec=75/45 lis/c=75/75 les/c/f=76/76/0 sis=77 pruub=10.962006569s) [2] r=-1 lpr=77 pi=[75,77)/1 crt=70'2 lcod 0'0 mlcod 0'0 active pruub 134.151168823s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:12:04 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 77 pg[11.1a( v 70'2 (0'0,70'2] local-lis/les=75/76 n=0 ec=75/45 lis/c=75/75 les/c/f=76/76/0 sis=77 pruub=10.962041855s) [2] r=-1 lpr=77 pi=[75,77)/1 crt=70'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 134.151229858s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 09:12:04 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 77 pg[8.18( v 40'4 (0'0,40'4] local-lis/les=73/75 n=0 ec=73/39 lis/c=73/73 les/c/f=75/75/0 sis=77 pruub=9.947217941s) [0] r=-1 lpr=77 pi=[73,77)/1 crt=40'4 lcod 0'0 mlcod 0'0 active pruub 133.136489868s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:12:04 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 77 pg[9.19( v 70'389 (0'0,70'389] local-lis/les=73/75 n=5 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=77 pruub=9.947071075s) [0] r=-1 lpr=77 pi=[73,77)/1 crt=70'389 lcod 0'0 mlcod 0'0 active pruub 133.136535645s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:12:04 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 77 pg[8.18( v 40'4 (0'0,40'4] local-lis/les=73/75 n=0 ec=73/39 lis/c=73/73 les/c/f=75/75/0 sis=77 pruub=9.946246147s) [0] r=-1 lpr=77 pi=[73,77)/1 crt=40'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 133.136489868s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 09:12:04 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 77 pg[9.19( v 70'389 (0'0,70'389] local-lis/les=73/75 n=5 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=77 pruub=9.946340561s) [0] r=-1 lpr=77 pi=[73,77)/1 crt=70'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 133.136535645s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 09:12:04 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 77 pg[11.1b( v 70'2 (0'0,70'2] local-lis/les=75/76 n=0 ec=75/45 lis/c=75/75 les/c/f=76/76/0 sis=77 pruub=10.960991859s) [2] r=-1 lpr=77 pi=[75,77)/1 crt=70'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 134.151168823s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 09:12:04 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 77 pg[8.1f( v 40'4 (0'0,40'4] local-lis/les=73/75 n=0 ec=73/39 lis/c=73/73 les/c/f=75/75/0 sis=77 pruub=9.946080208s) [0] r=-1 lpr=77 pi=[73,77)/1 crt=40'4 lcod 0'0 mlcod 0'0 active pruub 133.137268066s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:12:04 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 77 pg[8.1f( v 40'4 (0'0,40'4] local-lis/les=73/75 n=0 ec=73/39 lis/c=73/73 les/c/f=75/75/0 sis=77 pruub=9.945987701s) [0] r=-1 lpr=77 pi=[73,77)/1 crt=40'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 133.137268066s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 09:12:04 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 77 pg[11.1e( v 70'2 (0'0,70'2] local-lis/les=75/76 n=0 ec=75/45 lis/c=75/75 les/c/f=76/76/0 sis=77 pruub=10.959937096s) [2] r=-1 lpr=77 pi=[75,77)/1 crt=70'2 lcod 0'0 mlcod 0'0 active pruub 134.151321411s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:12:04 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 77 pg[11.1e( v 70'2 (0'0,70'2] local-lis/les=75/76 n=0 ec=75/45 lis/c=75/75 les/c/f=76/76/0 sis=77 pruub=10.959891319s) [2] r=-1 lpr=77 pi=[75,77)/1 crt=70'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 134.151321411s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 09:12:04 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 77 pg[11.1c( v 70'2 (0'0,70'2] local-lis/les=75/76 n=0 ec=75/45 lis/c=75/75 les/c/f=76/76/0 sis=77 pruub=10.959763527s) [2] r=-1 lpr=77 pi=[75,77)/1 crt=70'2 lcod 0'0 mlcod 0'0 active pruub 134.151214600s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:12:04 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 77 pg[11.1c( v 70'2 (0'0,70'2] local-lis/les=75/76 n=0 ec=75/45 lis/c=75/75 les/c/f=76/76/0 sis=77 pruub=10.959650040s) [2] r=-1 lpr=77 pi=[75,77)/1 crt=70'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 134.151214600s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 09:12:04 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 77 pg[11.1f( v 70'2 (0'0,70'2] local-lis/les=75/76 n=0 ec=75/45 lis/c=75/75 les/c/f=76/76/0 sis=77 pruub=10.959566116s) [2] r=-1 lpr=77 pi=[75,77)/1 crt=70'2 lcod 0'0 mlcod 0'0 active pruub 134.151290894s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:12:04 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 77 pg[8.1d( v 40'4 (0'0,40'4] local-lis/les=73/75 n=0 ec=73/39 lis/c=73/73 les/c/f=75/75/0 sis=77 pruub=9.945618629s) [0] r=-1 lpr=77 pi=[73,77)/1 crt=40'4 lcod 0'0 mlcod 0'0 active pruub 133.137359619s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:12:04 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 77 pg[11.1f( v 70'2 (0'0,70'2] local-lis/les=75/76 n=0 ec=75/45 lis/c=75/75 les/c/f=76/76/0 sis=77 pruub=10.959533691s) [2] r=-1 lpr=77 pi=[75,77)/1 crt=70'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 134.151290894s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 09:12:04 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 77 pg[8.1d( v 40'4 (0'0,40'4] local-lis/les=73/75 n=0 ec=73/39 lis/c=73/73 les/c/f=75/75/0 sis=77 pruub=9.945562363s) [0] r=-1 lpr=77 pi=[73,77)/1 crt=40'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 133.137359619s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 09:12:04 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 77 pg[9.1f( v 70'389 (0'0,70'389] local-lis/les=73/75 n=5 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=77 pruub=9.945498466s) [0] r=-1 lpr=77 pi=[73,77)/1 crt=70'389 lcod 0'0 mlcod 0'0 active pruub 133.137145996s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:12:04 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 77 pg[9.1d( v 70'389 (0'0,70'389] local-lis/les=73/75 n=5 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=77 pruub=9.945451736s) [0] r=-1 lpr=77 pi=[73,77)/1 crt=70'389 lcod 0'0 mlcod 0'0 active pruub 133.137466431s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:12:04 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 77 pg[9.1f( v 70'389 (0'0,70'389] local-lis/les=73/75 n=5 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=77 pruub=9.945230484s) [0] r=-1 lpr=77 pi=[73,77)/1 crt=70'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 133.137145996s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 09:12:04 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 77 pg[9.1d( v 70'389 (0'0,70'389] local-lis/les=73/75 n=5 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=77 pruub=9.945377350s) [0] r=-1 lpr=77 pi=[73,77)/1 crt=70'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 133.137466431s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 09:12:04 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 77 pg[8.12( v 40'4 (0'0,40'4] local-lis/les=73/75 n=0 ec=73/39 lis/c=73/73 les/c/f=75/75/0 sis=77 pruub=9.945072174s) [2] r=-1 lpr=77 pi=[73,77)/1 crt=40'4 lcod 0'0 mlcod 0'0 active pruub 133.137496948s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:12:04 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 77 pg[8.1c( v 40'4 (0'0,40'4] local-lis/les=73/75 n=0 ec=73/39 lis/c=73/73 les/c/f=75/75/0 sis=77 pruub=9.945087433s) [2] r=-1 lpr=77 pi=[73,77)/1 crt=40'4 lcod 0'0 mlcod 0'0 active pruub 133.137573242s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:12:04 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 77 pg[8.1c( v 40'4 (0'0,40'4] local-lis/les=73/75 n=0 ec=73/39 lis/c=73/73 les/c/f=75/75/0 sis=77 pruub=9.945037842s) [2] r=-1 lpr=77 pi=[73,77)/1 crt=40'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 133.137573242s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 09:12:04 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 77 pg[8.12( v 40'4 (0'0,40'4] local-lis/les=73/75 n=0 ec=73/39 lis/c=73/73 les/c/f=75/75/0 sis=77 pruub=9.944945335s) [2] r=-1 lpr=77 pi=[73,77)/1 crt=40'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 133.137496948s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 09:12:04 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 77 pg[9.13( v 70'389 (0'0,70'389] local-lis/les=73/75 n=5 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=77 pruub=9.944779396s) [0] r=-1 lpr=77 pi=[73,77)/1 crt=70'389 lcod 0'0 mlcod 0'0 active pruub 133.137619019s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:12:04 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 77 pg[9.13( v 70'389 (0'0,70'389] local-lis/les=73/75 n=5 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=77 pruub=9.944636345s) [0] r=-1 lpr=77 pi=[73,77)/1 crt=70'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 133.137619019s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 09:12:04 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 77 pg[8.11( v 40'4 (0'0,40'4] local-lis/les=73/75 n=0 ec=73/39 lis/c=73/73 les/c/f=75/75/0 sis=77 pruub=9.944744110s) [2] r=-1 lpr=77 pi=[73,77)/1 crt=40'4 lcod 0'0 mlcod 0'0 active pruub 133.137680054s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:12:04 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 77 pg[11.11( v 70'2 (0'0,70'2] local-lis/les=75/76 n=0 ec=75/45 lis/c=75/75 les/c/f=76/76/0 sis=77 pruub=10.958064079s) [2] r=-1 lpr=77 pi=[75,77)/1 crt=70'2 lcod 0'0 mlcod 0'0 active pruub 134.151336670s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:12:04 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 77 pg[11.9( v 70'2 (0'0,70'2] local-lis/les=75/76 n=0 ec=75/45 lis/c=75/75 les/c/f=76/76/0 sis=77 pruub=10.957771301s) [2] r=-1 lpr=77 pi=[75,77)/1 crt=70'2 lcod 0'0 mlcod 0'0 active pruub 134.151351929s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:12:04 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 77 pg[11.9( v 70'2 (0'0,70'2] local-lis/les=75/76 n=0 ec=75/45 lis/c=75/75 les/c/f=76/76/0 sis=77 pruub=10.957732201s) [2] r=-1 lpr=77 pi=[75,77)/1 crt=70'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 134.151351929s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 09:12:04 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 77 pg[11.12( v 70'2 (0'0,70'2] local-lis/les=75/76 n=0 ec=75/45 lis/c=75/75 les/c/f=76/76/0 sis=77 pruub=10.958518028s) [2] r=-1 lpr=77 pi=[75,77)/1 crt=70'2 lcod 0'0 mlcod 0'0 active pruub 134.151321411s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:12:04 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 77 pg[11.12( v 70'2 (0'0,70'2] local-lis/les=75/76 n=0 ec=75/45 lis/c=75/75 les/c/f=76/76/0 sis=77 pruub=10.957483292s) [2] r=-1 lpr=77 pi=[75,77)/1 crt=70'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 134.151321411s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 09:12:04 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 77 pg[9.11( empty local-lis/les=0/0 n=0 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=77) [0] r=0 lpr=77 pi=[73,77)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:12:04 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 77 pg[11.11( v 70'2 (0'0,70'2] local-lis/les=75/76 n=0 ec=75/45 lis/c=75/75 les/c/f=76/76/0 sis=77 pruub=10.957916260s) [2] r=-1 lpr=77 pi=[75,77)/1 crt=70'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 134.151336670s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 09:12:04 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 77 pg[8.11( v 40'4 (0'0,40'4] local-lis/les=73/75 n=0 ec=73/39 lis/c=73/73 les/c/f=75/75/0 sis=77 pruub=9.944570541s) [2] r=-1 lpr=77 pi=[73,77)/1 crt=40'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 133.137680054s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 09:12:04 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 77 pg[9.b( v 70'389 (0'0,70'389] local-lis/les=73/75 n=6 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=77 pruub=9.943033218s) [0] r=-1 lpr=77 pi=[73,77)/1 crt=70'389 lcod 0'0 mlcod 0'0 active pruub 133.137680054s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:12:04 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 77 pg[9.b( v 70'389 (0'0,70'389] local-lis/les=73/75 n=6 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=77 pruub=9.942840576s) [0] r=-1 lpr=77 pi=[73,77)/1 crt=70'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 133.137680054s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 09:12:04 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 77 pg[8.10( empty local-lis/les=0/0 n=0 ec=73/39 lis/c=73/73 les/c/f=75/75/0 sis=77) [0] r=0 lpr=77 pi=[73,77)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:12:04 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 77 pg[8.1a( v 40'4 (0'0,40'4] local-lis/les=73/75 n=0 ec=73/39 lis/c=73/73 les/c/f=75/75/0 sis=77 pruub=9.942263603s) [0] r=-1 lpr=77 pi=[73,77)/1 crt=40'4 lcod 0'0 mlcod 0'0 active pruub 133.137725830s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:12:04 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 77 pg[11.19( v 70'2 (0'0,70'2] local-lis/les=75/76 n=0 ec=75/45 lis/c=75/75 les/c/f=76/76/0 sis=77 pruub=10.955919266s) [0] r=-1 lpr=77 pi=[75,77)/1 crt=70'2 lcod 0'0 mlcod 0'0 active pruub 134.151412964s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:12:04 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 77 pg[9.5( empty local-lis/les=0/0 n=0 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=77) [0] r=0 lpr=77 pi=[73,77)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:12:04 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 77 pg[11.19( v 70'2 (0'0,70'2] local-lis/les=75/76 n=0 ec=75/45 lis/c=75/75 les/c/f=76/76/0 sis=77 pruub=10.955835342s) [0] r=-1 lpr=77 pi=[75,77)/1 crt=70'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 134.151412964s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 09:12:04 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 77 pg[8.1a( v 40'4 (0'0,40'4] local-lis/les=73/75 n=0 ec=73/39 lis/c=73/73 les/c/f=75/75/0 sis=77 pruub=9.942021370s) [0] r=-1 lpr=77 pi=[73,77)/1 crt=40'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 133.137725830s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 09:12:04 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 77 pg[9.1b( v 70'389 (0'0,70'389] local-lis/les=73/75 n=5 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=77 pruub=9.941924095s) [0] r=-1 lpr=77 pi=[73,77)/1 crt=70'389 lcod 0'0 mlcod 0'0 active pruub 133.137832642s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:12:04 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 77 pg[9.1b( v 70'389 (0'0,70'389] local-lis/les=73/75 n=5 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=77 pruub=9.941827774s) [0] r=-1 lpr=77 pi=[73,77)/1 crt=70'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 133.137832642s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 09:12:04 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 77 pg[8.b( empty local-lis/les=0/0 n=0 ec=73/39 lis/c=73/73 les/c/f=75/75/0 sis=77) [0] r=0 lpr=77 pi=[73,77)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:12:04 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 77 pg[11.10( v 70'2 (0'0,70'2] local-lis/les=75/76 n=0 ec=75/45 lis/c=75/75 les/c/f=76/76/0 sis=77 pruub=10.955692291s) [0] r=-1 lpr=77 pi=[75,77)/1 crt=70'2 lcod 0'0 mlcod 0'0 active pruub 134.151367188s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:12:04 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 77 pg[11.4( empty local-lis/les=0/0 n=0 ec=75/45 lis/c=75/75 les/c/f=76/76/0 sis=77) [0] r=0 lpr=77 pi=[75,77)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:12:04 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 77 pg[8.15( empty local-lis/les=0/0 n=0 ec=73/39 lis/c=73/73 les/c/f=75/75/0 sis=77) [2] r=0 lpr=77 pi=[73,77)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:12:04 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 77 pg[11.10( v 70'2 (0'0,70'2] local-lis/les=75/76 n=0 ec=75/45 lis/c=75/75 les/c/f=76/76/0 sis=77 pruub=10.954754829s) [0] r=-1 lpr=77 pi=[75,77)/1 crt=70'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 134.151367188s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 09:12:04 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 77 pg[11.15( empty local-lis/les=0/0 n=0 ec=75/45 lis/c=75/75 les/c/f=76/76/0 sis=77) [2] r=0 lpr=77 pi=[75,77)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:12:04 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 77 pg[11.14( empty local-lis/les=0/0 n=0 ec=75/45 lis/c=75/75 les/c/f=76/76/0 sis=77) [0] r=0 lpr=77 pi=[75,77)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:12:04 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 77 pg[11.2( empty local-lis/les=0/0 n=0 ec=75/45 lis/c=75/75 les/c/f=76/76/0 sis=77) [2] r=0 lpr=77 pi=[75,77)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:12:04 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 77 pg[8.6( empty local-lis/les=0/0 n=0 ec=73/39 lis/c=73/73 les/c/f=75/75/0 sis=77) [0] r=0 lpr=77 pi=[73,77)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:12:04 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 77 pg[11.3( empty local-lis/les=0/0 n=0 ec=75/45 lis/c=75/75 les/c/f=76/76/0 sis=77) [2] r=0 lpr=77 pi=[75,77)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:12:04 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 77 pg[8.2( empty local-lis/les=0/0 n=0 ec=73/39 lis/c=73/73 les/c/f=75/75/0 sis=77) [2] r=0 lpr=77 pi=[73,77)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:12:04 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 77 pg[9.7( empty local-lis/les=0/0 n=0 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=77) [0] r=0 lpr=77 pi=[73,77)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:12:04 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 77 pg[8.9( empty local-lis/les=0/0 n=0 ec=73/39 lis/c=73/73 les/c/f=75/75/0 sis=77) [0] r=0 lpr=77 pi=[73,77)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:12:04 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 77 pg[9.17( empty local-lis/les=0/0 n=0 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=77) [0] r=0 lpr=77 pi=[73,77)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:12:04 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 77 pg[11.6( empty local-lis/les=0/0 n=0 ec=75/45 lis/c=75/75 les/c/f=76/76/0 sis=77) [0] r=0 lpr=77 pi=[75,77)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:12:04 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 77 pg[11.d( empty local-lis/les=0/0 n=0 ec=75/45 lis/c=75/75 les/c/f=76/76/0 sis=77) [2] r=0 lpr=77 pi=[75,77)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:12:04 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 77 pg[11.8( empty local-lis/les=0/0 n=0 ec=75/45 lis/c=75/75 les/c/f=76/76/0 sis=77) [2] r=0 lpr=77 pi=[75,77)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:12:04 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 77 pg[8.d( empty local-lis/les=0/0 n=0 ec=73/39 lis/c=73/73 les/c/f=75/75/0 sis=77) [2] r=0 lpr=77 pi=[73,77)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:12:04 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 77 pg[11.9( empty local-lis/les=0/0 n=0 ec=75/45 lis/c=75/75 les/c/f=76/76/0 sis=77) [2] r=0 lpr=77 pi=[75,77)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:12:04 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 77 pg[8.4( empty local-lis/les=0/0 n=0 ec=73/39 lis/c=73/73 les/c/f=75/75/0 sis=77) [2] r=0 lpr=77 pi=[73,77)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:12:04 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 77 pg[11.1b( empty local-lis/les=0/0 n=0 ec=75/45 lis/c=75/75 les/c/f=76/76/0 sis=77) [2] r=0 lpr=77 pi=[75,77)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:12:04 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 77 pg[11.1c( empty local-lis/les=0/0 n=0 ec=75/45 lis/c=75/75 les/c/f=76/76/0 sis=77) [2] r=0 lpr=77 pi=[75,77)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:12:04 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 77 pg[9.9( empty local-lis/les=0/0 n=0 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=77) [0] r=0 lpr=77 pi=[73,77)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:12:04 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 77 pg[11.1e( empty local-lis/les=0/0 n=0 ec=75/45 lis/c=75/75 les/c/f=76/76/0 sis=77) [2] r=0 lpr=77 pi=[75,77)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:12:04 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 77 pg[8.f( empty local-lis/les=0/0 n=0 ec=73/39 lis/c=73/73 les/c/f=75/75/0 sis=77) [0] r=0 lpr=77 pi=[73,77)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:12:04 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 77 pg[8.12( empty local-lis/les=0/0 n=0 ec=73/39 lis/c=73/73 les/c/f=75/75/0 sis=77) [2] r=0 lpr=77 pi=[73,77)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:12:04 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 77 pg[11.e( empty local-lis/les=0/0 n=0 ec=75/45 lis/c=75/75 les/c/f=76/76/0 sis=77) [0] r=0 lpr=77 pi=[75,77)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:12:04 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 77 pg[11.12( empty local-lis/les=0/0 n=0 ec=75/45 lis/c=75/75 les/c/f=76/76/0 sis=77) [2] r=0 lpr=77 pi=[75,77)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:12:04 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 77 pg[9.f( empty local-lis/les=0/0 n=0 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=77) [0] r=0 lpr=77 pi=[73,77)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:12:04 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 77 pg[11.b( empty local-lis/les=0/0 n=0 ec=75/45 lis/c=75/75 les/c/f=76/76/0 sis=77) [2] r=0 lpr=77 pi=[75,77)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:12:04 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 77 pg[11.18( empty local-lis/les=0/0 n=0 ec=75/45 lis/c=75/75 les/c/f=76/76/0 sis=77) [2] r=0 lpr=77 pi=[75,77)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:12:04 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 77 pg[8.1b( empty local-lis/les=0/0 n=0 ec=73/39 lis/c=73/73 les/c/f=75/75/0 sis=77) [2] r=0 lpr=77 pi=[73,77)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:12:04 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 77 pg[8.e( empty local-lis/les=0/0 n=0 ec=73/39 lis/c=73/73 les/c/f=75/75/0 sis=77) [0] r=0 lpr=77 pi=[73,77)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:12:04 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 77 pg[11.f( empty local-lis/les=0/0 n=0 ec=75/45 lis/c=75/75 les/c/f=76/76/0 sis=77) [0] r=0 lpr=77 pi=[75,77)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:12:04 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 77 pg[8.c( empty local-lis/les=0/0 n=0 ec=73/39 lis/c=73/73 les/c/f=75/75/0 sis=77) [0] r=0 lpr=77 pi=[73,77)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:12:04 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 77 pg[9.d( empty local-lis/les=0/0 n=0 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=77) [0] r=0 lpr=77 pi=[73,77)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:12:04 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 77 pg[11.1a( empty local-lis/les=0/0 n=0 ec=75/45 lis/c=75/75 les/c/f=76/76/0 sis=77) [2] r=0 lpr=77 pi=[75,77)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:12:04 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 77 pg[11.1f( empty local-lis/les=0/0 n=0 ec=75/45 lis/c=75/75 les/c/f=76/76/0 sis=77) [2] r=0 lpr=77 pi=[75,77)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:12:04 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 77 pg[9.1( empty local-lis/les=0/0 n=0 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=77) [0] r=0 lpr=77 pi=[73,77)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:12:04 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 77 pg[8.1c( empty local-lis/les=0/0 n=0 ec=73/39 lis/c=73/73 les/c/f=75/75/0 sis=77) [2] r=0 lpr=77 pi=[73,77)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:12:04 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 77 pg[11.1( empty local-lis/les=0/0 n=0 ec=75/45 lis/c=75/75 les/c/f=76/76/0 sis=77) [0] r=0 lpr=77 pi=[75,77)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:12:04 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 77 pg[11.11( empty local-lis/les=0/0 n=0 ec=75/45 lis/c=75/75 les/c/f=76/76/0 sis=77) [2] r=0 lpr=77 pi=[75,77)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:12:04 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 77 pg[9.3( empty local-lis/les=0/0 n=0 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=77) [0] r=0 lpr=77 pi=[73,77)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:12:04 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 77 pg[8.11( empty local-lis/les=0/0 n=0 ec=73/39 lis/c=73/73 les/c/f=75/75/0 sis=77) [2] r=0 lpr=77 pi=[73,77)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:12:04 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 77 pg[10.1e( v 70'64 (0'0,70'64] local-lis/les=75/76 n=0 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=77 pruub=10.941148758s) [0] r=-1 lpr=77 pi=[75,77)/1 crt=70'64 lcod 0'0 mlcod 0'0 active pruub 129.193389893s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:12:04 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 77 pg[10.1e( v 70'64 (0'0,70'64] local-lis/les=75/76 n=0 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=77 pruub=10.941083908s) [0] r=-1 lpr=77 pi=[75,77)/1 crt=70'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 129.193389893s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 09:12:04 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 77 pg[10.19( v 70'64 (0'0,70'64] local-lis/les=75/76 n=0 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=77 pruub=10.940863609s) [1] r=-1 lpr=77 pi=[75,77)/1 crt=70'64 lcod 0'0 mlcod 0'0 active pruub 129.193237305s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:12:04 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 77 pg[9.1f( empty local-lis/les=0/0 n=0 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=77) [0] r=0 lpr=77 pi=[73,77)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:12:04 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 77 pg[9.19( empty local-lis/les=0/0 n=0 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=77) [0] r=0 lpr=77 pi=[73,77)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:12:04 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 77 pg[10.b( v 70'64 (0'0,70'64] local-lis/les=75/76 n=0 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=77 pruub=10.947177887s) [1] r=-1 lpr=77 pi=[75,77)/1 crt=70'64 lcod 0'0 mlcod 0'0 active pruub 129.199981689s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:12:04 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 77 pg[10.b( v 70'64 (0'0,70'64] local-lis/les=75/76 n=0 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=77 pruub=10.947136879s) [1] r=-1 lpr=77 pi=[75,77)/1 crt=70'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 129.199981689s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 09:12:04 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 77 pg[10.d( v 76'65 (0'0,76'65] local-lis/les=75/76 n=0 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=77 pruub=10.946795464s) [0] r=-1 lpr=77 pi=[75,77)/1 crt=70'64 lcod 70'64 mlcod 70'64 active pruub 129.199844360s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:12:04 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 77 pg[10.d( v 76'65 (0'0,76'65] local-lis/les=75/76 n=0 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=77 pruub=10.946720123s) [0] r=-1 lpr=77 pi=[75,77)/1 crt=70'64 lcod 70'64 mlcod 0'0 unknown NOTIFY pruub 129.199844360s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 09:12:04 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 77 pg[8.18( empty local-lis/les=0/0 n=0 ec=73/39 lis/c=73/73 les/c/f=75/75/0 sis=77) [0] r=0 lpr=77 pi=[73,77)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:12:04 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 77 pg[10.19( v 70'64 (0'0,70'64] local-lis/les=75/76 n=0 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=77 pruub=10.940765381s) [1] r=-1 lpr=77 pi=[75,77)/1 crt=70'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 129.193237305s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 09:12:04 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 77 pg[11.17( empty local-lis/les=0/0 n=0 ec=75/45 lis/c=75/75 les/c/f=76/76/0 sis=77) [0] r=0 lpr=77 pi=[75,77)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:12:04 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 77 pg[10.13( v 70'64 (0'0,70'64] local-lis/les=75/76 n=0 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=77 pruub=10.945632935s) [1] r=-1 lpr=77 pi=[75,77)/1 crt=70'64 lcod 0'0 mlcod 0'0 active pruub 129.199996948s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:12:04 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 77 pg[10.12( v 70'64 (0'0,70'64] local-lis/les=75/76 n=0 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=77 pruub=10.945360184s) [1] r=-1 lpr=77 pi=[75,77)/1 crt=70'64 lcod 0'0 mlcod 0'0 active pruub 129.199844360s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:12:04 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 77 pg[10.13( v 70'64 (0'0,70'64] local-lis/les=75/76 n=0 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=77 pruub=10.945519447s) [1] r=-1 lpr=77 pi=[75,77)/1 crt=70'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 129.199996948s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 09:12:04 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 77 pg[9.15( empty local-lis/les=0/0 n=0 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=77) [0] r=0 lpr=77 pi=[73,77)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:12:04 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 77 pg[10.11( v 70'64 (0'0,70'64] local-lis/les=75/76 n=0 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=77 pruub=10.944979668s) [1] r=-1 lpr=77 pi=[75,77)/1 crt=70'64 lcod 0'0 mlcod 0'0 active pruub 129.199783325s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:12:04 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 77 pg[10.11( v 70'64 (0'0,70'64] local-lis/les=75/76 n=0 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=77 pruub=10.944931984s) [1] r=-1 lpr=77 pi=[75,77)/1 crt=70'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 129.199783325s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 09:12:04 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 77 pg[10.10( v 70'64 (0'0,70'64] local-lis/les=75/76 n=0 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=77 pruub=10.945047379s) [1] r=-1 lpr=77 pi=[75,77)/1 crt=70'64 lcod 0'0 mlcod 0'0 active pruub 129.200057983s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:12:04 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 77 pg[10.10( v 70'64 (0'0,70'64] local-lis/les=75/76 n=0 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=77 pruub=10.944984436s) [1] r=-1 lpr=77 pi=[75,77)/1 crt=70'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 129.200057983s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 09:12:04 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 77 pg[10.12( v 70'64 (0'0,70'64] local-lis/les=75/76 n=0 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=77 pruub=10.945320129s) [1] r=-1 lpr=77 pi=[75,77)/1 crt=70'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 129.199844360s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 09:12:04 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 77 pg[10.1a( v 70'64 (0'0,70'64] local-lis/les=75/76 n=0 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=77 pruub=10.944793701s) [1] r=-1 lpr=77 pi=[75,77)/1 crt=70'64 lcod 0'0 mlcod 0'0 active pruub 129.200164795s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:12:04 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 77 pg[8.14( empty local-lis/les=0/0 n=0 ec=73/39 lis/c=73/73 les/c/f=75/75/0 sis=77) [0] r=0 lpr=77 pi=[73,77)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:12:04 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 77 pg[10.7( v 70'64 (0'0,70'64] local-lis/les=75/76 n=1 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=77 pruub=10.944931984s) [0] r=-1 lpr=77 pi=[75,77)/1 crt=70'64 lcod 0'0 mlcod 0'0 active pruub 129.200515747s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:12:04 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 77 pg[10.7( v 70'64 (0'0,70'64] local-lis/les=75/76 n=1 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=77 pruub=10.944884300s) [0] r=-1 lpr=77 pi=[75,77)/1 crt=70'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 129.200515747s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 09:12:04 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 77 pg[10.6( v 70'64 (0'0,70'64] local-lis/les=75/76 n=1 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=77 pruub=10.944594383s) [1] r=-1 lpr=77 pi=[75,77)/1 crt=70'64 lcod 0'0 mlcod 0'0 active pruub 129.200347900s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:12:04 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 77 pg[10.6( v 70'64 (0'0,70'64] local-lis/les=75/76 n=1 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=77 pruub=10.944548607s) [1] r=-1 lpr=77 pi=[75,77)/1 crt=70'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 129.200347900s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 09:12:04 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 77 pg[8.1f( empty local-lis/les=0/0 n=0 ec=73/39 lis/c=73/73 les/c/f=75/75/0 sis=77) [0] r=0 lpr=77 pi=[73,77)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:12:04 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 77 pg[10.4( v 70'64 (0'0,70'64] local-lis/les=75/76 n=1 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=77 pruub=10.945033073s) [0] r=-1 lpr=77 pi=[75,77)/1 crt=70'64 lcod 0'0 mlcod 0'0 active pruub 129.201034546s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:12:04 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 77 pg[10.8( v 70'64 (0'0,70'64] local-lis/les=75/76 n=1 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=77 pruub=10.944325447s) [0] r=-1 lpr=77 pi=[75,77)/1 crt=70'64 lcod 0'0 mlcod 0'0 active pruub 129.200469971s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:12:04 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 77 pg[10.8( v 70'64 (0'0,70'64] local-lis/les=75/76 n=1 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=77 pruub=10.944270134s) [0] r=-1 lpr=77 pi=[75,77)/1 crt=70'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 129.200469971s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 09:12:04 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 77 pg[10.1a( v 70'64 (0'0,70'64] local-lis/les=75/76 n=0 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=77 pruub=10.944096565s) [1] r=-1 lpr=77 pi=[75,77)/1 crt=70'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 129.200164795s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 09:12:04 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 77 pg[10.f( v 70'64 (0'0,70'64] local-lis/les=75/76 n=0 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=77 pruub=10.944956779s) [1] r=-1 lpr=77 pi=[75,77)/1 crt=70'64 lcod 0'0 mlcod 0'0 active pruub 129.201202393s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:12:04 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 77 pg[10.f( v 70'64 (0'0,70'64] local-lis/les=75/76 n=0 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=77 pruub=10.944915771s) [1] r=-1 lpr=77 pi=[75,77)/1 crt=70'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 129.201202393s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 09:12:04 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 77 pg[8.1d( empty local-lis/les=0/0 n=0 ec=73/39 lis/c=73/73 les/c/f=75/75/0 sis=77) [0] r=0 lpr=77 pi=[73,77)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:12:04 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 77 pg[10.b( empty local-lis/les=0/0 n=0 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=77) [1] r=0 lpr=77 pi=[75,77)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:12:04 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 77 pg[10.4( v 70'64 (0'0,70'64] local-lis/les=75/76 n=1 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=77 pruub=10.944943428s) [0] r=-1 lpr=77 pi=[75,77)/1 crt=70'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 129.201034546s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 09:12:04 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 77 pg[10.e( v 76'65 (0'0,76'65] local-lis/les=75/76 n=0 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=77 pruub=10.943584442s) [0] r=-1 lpr=77 pi=[75,77)/1 crt=70'64 lcod 70'64 mlcod 70'64 active pruub 129.200607300s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:12:04 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 77 pg[10.1( v 70'64 (0'0,70'64] local-lis/les=75/76 n=1 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=77 pruub=10.943605423s) [0] r=-1 lpr=77 pi=[75,77)/1 crt=70'64 lcod 0'0 mlcod 0'0 active pruub 129.200698853s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:12:04 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 77 pg[10.e( v 76'65 (0'0,76'65] local-lis/les=75/76 n=0 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=77 pruub=10.943493843s) [0] r=-1 lpr=77 pi=[75,77)/1 crt=70'64 lcod 70'64 mlcod 0'0 unknown NOTIFY pruub 129.200607300s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 09:12:04 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 77 pg[10.1( v 70'64 (0'0,70'64] local-lis/les=75/76 n=1 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=77 pruub=10.943561554s) [0] r=-1 lpr=77 pi=[75,77)/1 crt=70'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 129.200698853s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 09:12:04 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 77 pg[10.2( v 70'64 (0'0,70'64] local-lis/les=75/76 n=1 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=77 pruub=10.943550110s) [1] r=-1 lpr=77 pi=[75,77)/1 crt=70'64 lcod 0'0 mlcod 0'0 active pruub 129.200729370s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:12:04 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 77 pg[10.2( v 70'64 (0'0,70'64] local-lis/les=75/76 n=1 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=77 pruub=10.943508148s) [1] r=-1 lpr=77 pi=[75,77)/1 crt=70'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 129.200729370s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 09:12:04 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 77 pg[10.14( v 76'65 (0'0,76'65] local-lis/les=75/76 n=0 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=77 pruub=10.943443298s) [1] r=-1 lpr=77 pi=[75,77)/1 crt=70'64 lcod 70'64 mlcod 70'64 active pruub 129.200836182s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:12:04 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 77 pg[10.14( v 76'65 (0'0,76'65] local-lis/les=75/76 n=0 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=77 pruub=10.943375587s) [1] r=-1 lpr=77 pi=[75,77)/1 crt=70'64 lcod 70'64 mlcod 0'0 unknown NOTIFY pruub 129.200836182s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 09:12:04 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 77 pg[10.16( v 70'64 (0'0,70'64] local-lis/les=75/76 n=0 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=77 pruub=10.943263054s) [0] r=-1 lpr=77 pi=[75,77)/1 crt=70'64 lcod 0'0 mlcod 0'0 active pruub 129.200897217s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:12:04 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 77 pg[10.16( v 70'64 (0'0,70'64] local-lis/les=75/76 n=0 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=77 pruub=10.943226814s) [0] r=-1 lpr=77 pi=[75,77)/1 crt=70'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 129.200897217s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 09:12:04 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 77 pg[10.15( v 76'65 (0'0,76'65] local-lis/les=75/76 n=0 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=77 pruub=10.943150520s) [0] r=-1 lpr=77 pi=[75,77)/1 crt=70'64 lcod 70'64 mlcod 70'64 active pruub 129.200912476s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:12:04 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 77 pg[10.15( v 76'65 (0'0,76'65] local-lis/les=75/76 n=0 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=77 pruub=10.942918777s) [0] r=-1 lpr=77 pi=[75,77)/1 crt=70'64 lcod 70'64 mlcod 0'0 unknown NOTIFY pruub 129.200912476s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 09:12:04 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 77 pg[10.9( v 76'65 (0'0,76'65] local-lis/les=75/76 n=0 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=77 pruub=10.943267822s) [0] r=-1 lpr=77 pi=[75,77)/1 crt=70'64 lcod 70'64 mlcod 70'64 active pruub 129.201049805s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:12:04 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 77 pg[10.9( v 76'65 (0'0,76'65] local-lis/les=75/76 n=0 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=77 pruub=10.942856789s) [0] r=-1 lpr=77 pi=[75,77)/1 crt=70'64 lcod 70'64 mlcod 0'0 unknown NOTIFY pruub 129.201049805s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 09:12:04 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 77 pg[9.1d( empty local-lis/les=0/0 n=0 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=77) [0] r=0 lpr=77 pi=[73,77)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:12:04 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 77 pg[9.13( empty local-lis/les=0/0 n=0 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=77) [0] r=0 lpr=77 pi=[73,77)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:12:04 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 77 pg[9.b( empty local-lis/les=0/0 n=0 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=77) [0] r=0 lpr=77 pi=[73,77)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:12:04 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 77 pg[11.19( empty local-lis/les=0/0 n=0 ec=75/45 lis/c=75/75 les/c/f=76/76/0 sis=77) [0] r=0 lpr=77 pi=[75,77)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:12:04 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 77 pg[10.19( empty local-lis/les=0/0 n=0 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=77) [1] r=0 lpr=77 pi=[75,77)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:12:04 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 77 pg[10.13( empty local-lis/les=0/0 n=0 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=77) [1] r=0 lpr=77 pi=[75,77)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:12:04 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 77 pg[10.11( empty local-lis/les=0/0 n=0 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=77) [1] r=0 lpr=77 pi=[75,77)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:12:04 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 77 pg[8.1a( empty local-lis/les=0/0 n=0 ec=73/39 lis/c=73/73 les/c/f=75/75/0 sis=77) [0] r=0 lpr=77 pi=[73,77)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:12:04 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 77 pg[10.17( v 70'64 (0'0,70'64] local-lis/les=75/76 n=0 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=77 pruub=10.943011284s) [0] r=-1 lpr=77 pi=[75,77)/1 crt=70'64 lcod 0'0 mlcod 0'0 active pruub 129.200988770s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:12:04 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 77 pg[9.1b( empty local-lis/les=0/0 n=0 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=77) [0] r=0 lpr=77 pi=[73,77)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:12:04 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 77 pg[10.17( v 70'64 (0'0,70'64] local-lis/les=75/76 n=0 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=77 pruub=10.941424370s) [0] r=-1 lpr=77 pi=[75,77)/1 crt=70'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 129.200988770s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 09:12:04 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 77 pg[10.10( empty local-lis/les=0/0 n=0 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=77) [1] r=0 lpr=77 pi=[75,77)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:12:04 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 77 pg[11.10( empty local-lis/les=0/0 n=0 ec=75/45 lis/c=75/75 les/c/f=76/76/0 sis=77) [0] r=0 lpr=77 pi=[75,77)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:12:04 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 77 pg[10.12( empty local-lis/les=0/0 n=0 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=77) [1] r=0 lpr=77 pi=[75,77)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:12:04 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 77 pg[10.1e( empty local-lis/les=0/0 n=0 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=77) [0] r=0 lpr=77 pi=[75,77)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:12:04 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 77 pg[10.6( empty local-lis/les=0/0 n=0 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=77) [1] r=0 lpr=77 pi=[75,77)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:12:04 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 77 pg[10.d( empty local-lis/les=0/0 n=0 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=77) [0] r=0 lpr=77 pi=[75,77)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:12:04 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 77 pg[10.1a( empty local-lis/les=0/0 n=0 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=77) [1] r=0 lpr=77 pi=[75,77)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:12:04 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 77 pg[10.f( empty local-lis/les=0/0 n=0 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=77) [1] r=0 lpr=77 pi=[75,77)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:12:04 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 77 pg[10.7( empty local-lis/les=0/0 n=0 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=77) [0] r=0 lpr=77 pi=[75,77)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:12:04 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 77 pg[10.8( empty local-lis/les=0/0 n=0 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=77) [0] r=0 lpr=77 pi=[75,77)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:12:04 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 77 pg[10.2( empty local-lis/les=0/0 n=0 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=77) [1] r=0 lpr=77 pi=[75,77)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:12:04 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 77 pg[10.4( empty local-lis/les=0/0 n=0 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=77) [0] r=0 lpr=77 pi=[75,77)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:12:04 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 77 pg[10.1( empty local-lis/les=0/0 n=0 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=77) [0] r=0 lpr=77 pi=[75,77)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:12:04 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 77 pg[10.16( empty local-lis/les=0/0 n=0 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=77) [0] r=0 lpr=77 pi=[75,77)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:12:04 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 77 pg[10.15( empty local-lis/les=0/0 n=0 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=77) [0] r=0 lpr=77 pi=[75,77)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:12:04 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 77 pg[10.e( empty local-lis/les=0/0 n=0 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=77) [0] r=0 lpr=77 pi=[75,77)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:12:04 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 77 pg[10.14( empty local-lis/les=0/0 n=0 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=77) [1] r=0 lpr=77 pi=[75,77)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:12:04 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 77 pg[10.9( empty local-lis/les=0/0 n=0 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=77) [0] r=0 lpr=77 pi=[75,77)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:12:04 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 77 pg[10.17( empty local-lis/les=0/0 n=0 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=77) [0] r=0 lpr=77 pi=[75,77)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:12:04 np0005464214 ceph-osd[88455]: log_channel(cluster) log [DBG] : 5.3 scrub starts
Oct  1 09:12:04 np0005464214 ceph-osd[88455]: log_channel(cluster) log [DBG] : 5.3 scrub ok
Oct  1 09:12:05 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e77 do_prune osdmap full prune enabled
Oct  1 09:12:05 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]': finished
Oct  1 09:12:05 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]': finished
Oct  1 09:12:05 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]': finished
Oct  1 09:12:05 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Oct  1 09:12:05 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e78 e78: 3 total, 3 up, 3 in
Oct  1 09:12:05 np0005464214 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e78: 3 total, 3 up, 3 in
Oct  1 09:12:05 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 78 pg[9.15( v 70'389 (0'0,70'389] local-lis/les=73/75 n=5 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=78) [0]/[1] r=0 lpr=78 pi=[73,78)/1 crt=70'389 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:12:05 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 78 pg[9.15( v 70'389 (0'0,70'389] local-lis/les=73/75 n=5 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=78) [0]/[1] r=0 lpr=78 pi=[73,78)/1 crt=70'389 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  1 09:12:05 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 78 pg[9.17( v 70'389 (0'0,70'389] local-lis/les=73/75 n=5 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=78) [0]/[1] r=0 lpr=78 pi=[73,78)/1 crt=70'389 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:12:05 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 78 pg[9.17( v 70'389 (0'0,70'389] local-lis/les=73/75 n=5 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=78) [0]/[1] r=0 lpr=78 pi=[73,78)/1 crt=70'389 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  1 09:12:05 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 78 pg[9.11( v 70'389 (0'0,70'389] local-lis/les=73/75 n=6 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=78) [0]/[1] r=0 lpr=78 pi=[73,78)/1 crt=70'389 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:12:05 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 78 pg[9.11( v 70'389 (0'0,70'389] local-lis/les=73/75 n=6 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=78) [0]/[1] r=0 lpr=78 pi=[73,78)/1 crt=70'389 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  1 09:12:05 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 78 pg[9.3( v 70'389 (0'0,70'389] local-lis/les=73/75 n=6 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=78) [0]/[1] r=0 lpr=78 pi=[73,78)/1 crt=70'389 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:12:05 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 78 pg[9.3( v 70'389 (0'0,70'389] local-lis/les=73/75 n=6 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=78) [0]/[1] r=0 lpr=78 pi=[73,78)/1 crt=70'389 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  1 09:12:05 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 78 pg[9.d( v 70'389 (0'0,70'389] local-lis/les=73/75 n=6 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=78) [0]/[1] r=0 lpr=78 pi=[73,78)/1 crt=70'389 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:12:05 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 78 pg[9.d( v 70'389 (0'0,70'389] local-lis/les=73/75 n=6 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=78) [0]/[1] r=0 lpr=78 pi=[73,78)/1 crt=70'389 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  1 09:12:05 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 78 pg[9.f( v 70'389 (0'0,70'389] local-lis/les=73/75 n=6 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=78) [0]/[1] r=0 lpr=78 pi=[73,78)/1 crt=70'389 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:12:05 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 78 pg[9.f( v 70'389 (0'0,70'389] local-lis/les=73/75 n=6 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=78) [0]/[1] r=0 lpr=78 pi=[73,78)/1 crt=70'389 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  1 09:12:05 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 78 pg[9.9( v 70'389 (0'0,70'389] local-lis/les=73/75 n=6 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=78) [0]/[1] r=0 lpr=78 pi=[73,78)/1 crt=70'389 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:12:05 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 78 pg[9.9( v 70'389 (0'0,70'389] local-lis/les=73/75 n=6 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=78) [0]/[1] r=0 lpr=78 pi=[73,78)/1 crt=70'389 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  1 09:12:05 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 78 pg[9.1( v 70'389 (0'0,70'389] local-lis/les=73/75 n=6 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=78) [0]/[1] r=0 lpr=78 pi=[73,78)/1 crt=70'389 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:12:05 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 78 pg[9.1( v 70'389 (0'0,70'389] local-lis/les=73/75 n=6 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=78) [0]/[1] r=0 lpr=78 pi=[73,78)/1 crt=70'389 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  1 09:12:05 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 78 pg[9.7( v 70'389 (0'0,70'389] local-lis/les=73/75 n=6 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=78) [0]/[1] r=0 lpr=78 pi=[73,78)/1 crt=70'389 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:12:05 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 78 pg[9.7( v 70'389 (0'0,70'389] local-lis/les=73/75 n=6 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=78) [0]/[1] r=0 lpr=78 pi=[73,78)/1 crt=70'389 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  1 09:12:05 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 78 pg[9.5( v 70'389 (0'0,70'389] local-lis/les=73/75 n=6 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=78) [0]/[1] r=0 lpr=78 pi=[73,78)/1 crt=70'389 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:12:05 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 78 pg[9.5( v 70'389 (0'0,70'389] local-lis/les=73/75 n=6 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=78) [0]/[1] r=0 lpr=78 pi=[73,78)/1 crt=70'389 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  1 09:12:05 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 78 pg[9.19( v 70'389 (0'0,70'389] local-lis/les=73/75 n=5 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=78) [0]/[1] r=0 lpr=78 pi=[73,78)/1 crt=70'389 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:12:05 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 78 pg[11.1a( v 70'2 (0'0,70'2] local-lis/les=77/78 n=0 ec=75/45 lis/c=75/75 les/c/f=76/76/0 sis=77) [2] r=0 lpr=77 pi=[75,77)/1 crt=70'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:12:05 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 78 pg[9.1f( v 70'389 (0'0,70'389] local-lis/les=73/75 n=5 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=78) [0]/[1] r=0 lpr=78 pi=[73,78)/1 crt=70'389 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:12:05 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 78 pg[9.1f( v 70'389 (0'0,70'389] local-lis/les=73/75 n=5 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=78) [0]/[1] r=0 lpr=78 pi=[73,78)/1 crt=70'389 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  1 09:12:05 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 78 pg[9.1d( v 70'389 (0'0,70'389] local-lis/les=73/75 n=5 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=78) [0]/[1] r=0 lpr=78 pi=[73,78)/1 crt=70'389 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:12:05 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 78 pg[9.1d( v 70'389 (0'0,70'389] local-lis/les=73/75 n=5 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=78) [0]/[1] r=0 lpr=78 pi=[73,78)/1 crt=70'389 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  1 09:12:05 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 78 pg[9.19( v 70'389 (0'0,70'389] local-lis/les=73/75 n=5 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=78) [0]/[1] r=0 lpr=78 pi=[73,78)/1 crt=70'389 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  1 09:12:05 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 78 pg[9.13( v 70'389 (0'0,70'389] local-lis/les=73/75 n=5 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=78) [0]/[1] r=0 lpr=78 pi=[73,78)/1 crt=70'389 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:12:05 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 78 pg[9.13( v 70'389 (0'0,70'389] local-lis/les=73/75 n=5 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=78) [0]/[1] r=0 lpr=78 pi=[73,78)/1 crt=70'389 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  1 09:12:05 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 78 pg[9.1b( v 70'389 (0'0,70'389] local-lis/les=73/75 n=5 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=78) [0]/[1] r=0 lpr=78 pi=[73,78)/1 crt=70'389 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:12:05 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 78 pg[9.b( v 70'389 (0'0,70'389] local-lis/les=73/75 n=6 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=78) [0]/[1] r=0 lpr=78 pi=[73,78)/1 crt=70'389 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:12:05 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 78 pg[9.1b( v 70'389 (0'0,70'389] local-lis/les=73/75 n=5 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=78) [0]/[1] r=0 lpr=78 pi=[73,78)/1 crt=70'389 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  1 09:12:05 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 78 pg[9.b( v 70'389 (0'0,70'389] local-lis/les=73/75 n=6 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=78) [0]/[1] r=0 lpr=78 pi=[73,78)/1 crt=70'389 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  1 09:12:05 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 78 pg[9.11( empty local-lis/les=0/0 n=0 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=78) [0]/[1] r=-1 lpr=78 pi=[73,78)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:12:05 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 78 pg[9.11( empty local-lis/les=0/0 n=0 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=78) [0]/[1] r=-1 lpr=78 pi=[73,78)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  1 09:12:05 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 78 pg[9.9( empty local-lis/les=0/0 n=0 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=78) [0]/[1] r=-1 lpr=78 pi=[73,78)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:12:05 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 78 pg[9.17( empty local-lis/les=0/0 n=0 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=78) [0]/[1] r=-1 lpr=78 pi=[73,78)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:12:05 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 78 pg[9.9( empty local-lis/les=0/0 n=0 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=78) [0]/[1] r=-1 lpr=78 pi=[73,78)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  1 09:12:05 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 78 pg[9.17( empty local-lis/les=0/0 n=0 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=78) [0]/[1] r=-1 lpr=78 pi=[73,78)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  1 09:12:05 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 78 pg[9.f( empty local-lis/les=0/0 n=0 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=78) [0]/[1] r=-1 lpr=78 pi=[73,78)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:12:05 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 78 pg[9.f( empty local-lis/les=0/0 n=0 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=78) [0]/[1] r=-1 lpr=78 pi=[73,78)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  1 09:12:05 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 78 pg[9.1( empty local-lis/les=0/0 n=0 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=78) [0]/[1] r=-1 lpr=78 pi=[73,78)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:12:05 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 78 pg[9.3( empty local-lis/les=0/0 n=0 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=78) [0]/[1] r=-1 lpr=78 pi=[73,78)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:12:05 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 78 pg[9.3( empty local-lis/les=0/0 n=0 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=78) [0]/[1] r=-1 lpr=78 pi=[73,78)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  1 09:12:05 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 78 pg[9.d( empty local-lis/les=0/0 n=0 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=78) [0]/[1] r=-1 lpr=78 pi=[73,78)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:12:05 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 78 pg[9.15( empty local-lis/les=0/0 n=0 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=78) [0]/[1] r=-1 lpr=78 pi=[73,78)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:12:05 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 78 pg[9.1( empty local-lis/les=0/0 n=0 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=78) [0]/[1] r=-1 lpr=78 pi=[73,78)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  1 09:12:05 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 78 pg[9.15( empty local-lis/les=0/0 n=0 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=78) [0]/[1] r=-1 lpr=78 pi=[73,78)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  1 09:12:05 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 78 pg[10.19( v 70'64 (0'0,70'64] local-lis/les=77/78 n=0 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=77) [1] r=0 lpr=77 pi=[75,77)/1 crt=70'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:12:05 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 78 pg[9.1b( empty local-lis/les=0/0 n=0 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=78) [0]/[1] r=-1 lpr=78 pi=[73,78)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:12:05 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 78 pg[9.1b( empty local-lis/les=0/0 n=0 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=78) [0]/[1] r=-1 lpr=78 pi=[73,78)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  1 09:12:05 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 78 pg[9.d( empty local-lis/les=0/0 n=0 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=78) [0]/[1] r=-1 lpr=78 pi=[73,78)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  1 09:12:05 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 78 pg[9.1f( empty local-lis/les=0/0 n=0 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=78) [0]/[1] r=-1 lpr=78 pi=[73,78)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:12:05 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 78 pg[9.1d( empty local-lis/les=0/0 n=0 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=78) [0]/[1] r=-1 lpr=78 pi=[73,78)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:12:05 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 78 pg[9.19( empty local-lis/les=0/0 n=0 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=78) [0]/[1] r=-1 lpr=78 pi=[73,78)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:12:05 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 78 pg[9.1f( empty local-lis/les=0/0 n=0 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=78) [0]/[1] r=-1 lpr=78 pi=[73,78)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  1 09:12:05 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 78 pg[9.1d( empty local-lis/les=0/0 n=0 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=78) [0]/[1] r=-1 lpr=78 pi=[73,78)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  1 09:12:05 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 78 pg[9.19( empty local-lis/les=0/0 n=0 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=78) [0]/[1] r=-1 lpr=78 pi=[73,78)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  1 09:12:05 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 78 pg[9.7( empty local-lis/les=0/0 n=0 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=78) [0]/[1] r=-1 lpr=78 pi=[73,78)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:12:05 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 78 pg[9.7( empty local-lis/les=0/0 n=0 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=78) [0]/[1] r=-1 lpr=78 pi=[73,78)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  1 09:12:05 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 78 pg[9.5( empty local-lis/les=0/0 n=0 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=78) [0]/[1] r=-1 lpr=78 pi=[73,78)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:12:05 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 78 pg[9.5( empty local-lis/les=0/0 n=0 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=78) [0]/[1] r=-1 lpr=78 pi=[73,78)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  1 09:12:05 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 78 pg[8.1b( v 40'4 (0'0,40'4] local-lis/les=77/78 n=0 ec=73/39 lis/c=73/73 les/c/f=75/75/0 sis=77) [2] r=0 lpr=77 pi=[73,77)/1 crt=40'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:12:05 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 78 pg[8.1c( v 40'4 (0'0,40'4] local-lis/les=77/78 n=0 ec=73/39 lis/c=73/73 les/c/f=75/75/0 sis=77) [2] r=0 lpr=77 pi=[73,77)/1 crt=40'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:12:05 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 78 pg[11.18( v 70'2 (0'0,70'2] local-lis/les=77/78 n=0 ec=75/45 lis/c=75/75 les/c/f=76/76/0 sis=77) [2] r=0 lpr=77 pi=[75,77)/1 crt=70'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:12:05 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 78 pg[11.b( v 70'2 (0'0,70'2] local-lis/les=77/78 n=0 ec=75/45 lis/c=75/75 les/c/f=76/76/0 sis=77) [2] r=0 lpr=77 pi=[75,77)/1 crt=70'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:12:05 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 78 pg[8.11( v 40'4 (0'0,40'4] local-lis/les=77/78 n=0 ec=73/39 lis/c=73/73 les/c/f=75/75/0 sis=77) [2] r=0 lpr=77 pi=[73,77)/1 crt=40'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:12:05 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 78 pg[11.12( v 70'2 (0'0,70'2] local-lis/les=77/78 n=0 ec=75/45 lis/c=75/75 les/c/f=76/76/0 sis=77) [2] r=0 lpr=77 pi=[75,77)/1 crt=70'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:12:05 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 78 pg[11.1f( v 70'2 (0'0,70'2] local-lis/les=77/78 n=0 ec=75/45 lis/c=75/75 les/c/f=76/76/0 sis=77) [2] r=0 lpr=77 pi=[75,77)/1 crt=70'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:12:05 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 78 pg[11.11( v 70'2 (0'0,70'2] local-lis/les=77/78 n=0 ec=75/45 lis/c=75/75 les/c/f=76/76/0 sis=77) [2] r=0 lpr=77 pi=[75,77)/1 crt=70'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:12:05 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 78 pg[8.12( v 40'4 (0'0,40'4] local-lis/les=77/78 n=0 ec=73/39 lis/c=73/73 les/c/f=75/75/0 sis=77) [2] r=0 lpr=77 pi=[73,77)/1 crt=40'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:12:05 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 78 pg[11.1c( v 70'2 (0'0,70'2] local-lis/les=77/78 n=0 ec=75/45 lis/c=75/75 les/c/f=76/76/0 sis=77) [2] r=0 lpr=77 pi=[75,77)/1 crt=70'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:12:05 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 78 pg[11.1e( v 70'2 (0'0,70'2] local-lis/les=77/78 n=0 ec=75/45 lis/c=75/75 les/c/f=76/76/0 sis=77) [2] r=0 lpr=77 pi=[75,77)/1 crt=70'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:12:05 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 78 pg[8.4( v 40'4 (0'0,40'4] local-lis/les=77/78 n=1 ec=73/39 lis/c=73/73 les/c/f=75/75/0 sis=77) [2] r=0 lpr=77 pi=[73,77)/1 crt=40'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:12:05 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 78 pg[11.1b( v 70'2 (0'0,70'2] local-lis/les=77/78 n=0 ec=75/45 lis/c=75/75 les/c/f=76/76/0 sis=77) [2] r=0 lpr=77 pi=[75,77)/1 crt=70'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:12:05 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 78 pg[11.9( v 70'2 lc 0'0 (0'0,70'2] local-lis/les=77/78 n=0 ec=75/45 lis/c=75/75 les/c/f=76/76/0 sis=77) [2] r=0 lpr=77 pi=[75,77)/1 crt=70'2 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:12:05 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 78 pg[8.d( v 40'4 lc 0'0 (0'0,40'4] local-lis/les=77/78 n=0 ec=73/39 lis/c=73/73 les/c/f=75/75/0 sis=77) [2] r=0 lpr=77 pi=[73,77)/1 crt=40'4 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:12:05 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 78 pg[11.8( v 70'2 (0'0,70'2] local-lis/les=77/78 n=0 ec=75/45 lis/c=75/75 les/c/f=76/76/0 sis=77) [2] r=0 lpr=77 pi=[75,77)/1 crt=70'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:12:05 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 78 pg[11.d( v 70'2 lc 0'0 (0'0,70'2] local-lis/les=77/78 n=0 ec=75/45 lis/c=75/75 les/c/f=76/76/0 sis=77) [2] r=0 lpr=77 pi=[75,77)/1 crt=70'2 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:12:05 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 78 pg[11.15( v 70'2 (0'0,70'2] local-lis/les=77/78 n=0 ec=75/45 lis/c=75/75 les/c/f=76/76/0 sis=77) [2] r=0 lpr=77 pi=[75,77)/1 crt=70'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:12:05 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 78 pg[8.2( v 40'4 (0'0,40'4] local-lis/les=77/78 n=1 ec=73/39 lis/c=73/73 les/c/f=75/75/0 sis=77) [2] r=0 lpr=77 pi=[73,77)/1 crt=40'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:12:05 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 78 pg[11.2( v 70'2 (0'0,70'2] local-lis/les=77/78 n=1 ec=75/45 lis/c=75/75 les/c/f=76/76/0 sis=77) [2] r=0 lpr=77 pi=[75,77)/1 crt=70'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:12:05 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 78 pg[11.3( v 70'2 (0'0,70'2] local-lis/les=77/78 n=0 ec=75/45 lis/c=75/75 les/c/f=76/76/0 sis=77) [2] r=0 lpr=77 pi=[75,77)/1 crt=70'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:12:05 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 78 pg[8.15( v 40'4 (0'0,40'4] local-lis/les=77/78 n=0 ec=73/39 lis/c=73/73 les/c/f=75/75/0 sis=77) [2] r=0 lpr=77 pi=[73,77)/1 crt=40'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:12:05 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 78 pg[9.b( empty local-lis/les=0/0 n=0 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=78) [0]/[1] r=-1 lpr=78 pi=[73,78)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:12:05 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 78 pg[9.b( empty local-lis/les=0/0 n=0 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=78) [0]/[1] r=-1 lpr=78 pi=[73,78)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  1 09:12:05 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 78 pg[9.13( empty local-lis/les=0/0 n=0 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=78) [0]/[1] r=-1 lpr=78 pi=[73,78)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:12:05 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 78 pg[9.13( empty local-lis/les=0/0 n=0 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=78) [0]/[1] r=-1 lpr=78 pi=[73,78)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  1 09:12:05 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 78 pg[8.b( v 40'4 (0'0,40'4] local-lis/les=77/78 n=0 ec=73/39 lis/c=73/73 les/c/f=75/75/0 sis=77) [0] r=0 lpr=77 pi=[73,77)/1 crt=40'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:12:05 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 78 pg[10.13( v 70'64 (0'0,70'64] local-lis/les=77/78 n=0 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=77) [1] r=0 lpr=77 pi=[75,77)/1 crt=70'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:12:05 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 78 pg[10.14( v 76'65 lc 70'54 (0'0,76'65] local-lis/les=77/78 n=0 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=77) [1] r=0 lpr=77 pi=[75,77)/1 crt=76'65 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:12:05 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 78 pg[10.11( v 70'64 (0'0,70'64] local-lis/les=77/78 n=0 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=77) [1] r=0 lpr=77 pi=[75,77)/1 crt=70'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:12:05 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 78 pg[10.f( v 70'64 (0'0,70'64] local-lis/les=77/78 n=0 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=77) [1] r=0 lpr=77 pi=[75,77)/1 crt=70'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:12:05 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 78 pg[10.1a( v 70'64 (0'0,70'64] local-lis/les=77/78 n=0 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=77) [1] r=0 lpr=77 pi=[75,77)/1 crt=70'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:12:05 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 78 pg[10.12( v 70'64 (0'0,70'64] local-lis/les=77/78 n=0 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=77) [1] r=0 lpr=77 pi=[75,77)/1 crt=70'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:12:05 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 78 pg[10.2( v 70'64 (0'0,70'64] local-lis/les=77/78 n=1 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=77) [1] r=0 lpr=77 pi=[75,77)/1 crt=70'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:12:05 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 78 pg[10.b( v 70'64 (0'0,70'64] local-lis/les=77/78 n=0 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=77) [1] r=0 lpr=77 pi=[75,77)/1 crt=70'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:12:05 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 78 pg[10.6( v 70'64 (0'0,70'64] local-lis/les=77/78 n=1 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=77) [1] r=0 lpr=77 pi=[75,77)/1 crt=70'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:12:05 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 78 pg[10.10( v 70'64 (0'0,70'64] local-lis/les=77/78 n=0 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=77) [1] r=0 lpr=77 pi=[75,77)/1 crt=70'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:12:05 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 78 pg[10.8( v 70'64 (0'0,70'64] local-lis/les=77/78 n=1 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=77) [0] r=0 lpr=77 pi=[75,77)/1 crt=70'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:12:05 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 78 pg[8.10( v 40'4 (0'0,40'4] local-lis/les=77/78 n=0 ec=73/39 lis/c=73/73 les/c/f=75/75/0 sis=77) [0] r=0 lpr=77 pi=[73,77)/1 crt=40'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:12:05 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 78 pg[11.4( v 70'2 (0'0,70'2] local-lis/les=77/78 n=0 ec=75/45 lis/c=75/75 les/c/f=76/76/0 sis=77) [0] r=0 lpr=77 pi=[75,77)/1 crt=70'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:12:05 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 78 pg[11.14( v 70'2 (0'0,70'2] local-lis/les=77/78 n=0 ec=75/45 lis/c=75/75 les/c/f=76/76/0 sis=77) [0] r=0 lpr=77 pi=[75,77)/1 crt=70'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:12:05 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 78 pg[10.4( v 70'64 (0'0,70'64] local-lis/les=77/78 n=1 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=77) [0] r=0 lpr=77 pi=[75,77)/1 crt=70'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:12:05 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 78 pg[10.17( v 70'64 (0'0,70'64] local-lis/les=77/78 n=0 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=77) [0] r=0 lpr=77 pi=[75,77)/1 crt=70'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:12:05 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 78 pg[10.9( v 76'65 lc 70'56 (0'0,76'65] local-lis/les=77/78 n=0 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=77) [0] r=0 lpr=77 pi=[75,77)/1 crt=76'65 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:12:05 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 78 pg[10.15( v 76'65 lc 70'46 (0'0,76'65] local-lis/les=77/78 n=0 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=77) [0] r=0 lpr=77 pi=[75,77)/1 crt=76'65 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:12:05 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 78 pg[10.d( v 76'65 lc 70'50 (0'0,76'65] local-lis/les=77/78 n=0 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=77) [0] r=0 lpr=77 pi=[75,77)/1 crt=76'65 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:12:05 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 78 pg[10.7( v 70'64 (0'0,70'64] local-lis/les=77/78 n=1 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=77) [0] r=0 lpr=77 pi=[75,77)/1 crt=70'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:12:05 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 78 pg[8.f( v 40'4 lc 0'0 (0'0,40'4] local-lis/les=77/78 n=0 ec=73/39 lis/c=73/73 les/c/f=75/75/0 sis=77) [0] r=0 lpr=77 pi=[73,77)/1 crt=40'4 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:12:05 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 78 pg[10.e( v 76'65 lc 70'48 (0'0,76'65] local-lis/les=77/78 n=0 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=77) [0] r=0 lpr=77 pi=[75,77)/1 crt=76'65 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:12:05 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 78 pg[11.f( v 70'2 (0'0,70'2] local-lis/les=77/78 n=0 ec=75/45 lis/c=75/75 les/c/f=76/76/0 sis=77) [0] r=0 lpr=77 pi=[75,77)/1 crt=70'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:12:05 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 78 pg[8.c( v 40'4 (0'0,40'4] local-lis/les=77/78 n=0 ec=73/39 lis/c=73/73 les/c/f=75/75/0 sis=77) [0] r=0 lpr=77 pi=[73,77)/1 crt=40'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:12:05 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 78 pg[11.e( v 70'2 (0'0,70'2] local-lis/les=77/78 n=0 ec=75/45 lis/c=75/75 les/c/f=76/76/0 sis=77) [0] r=0 lpr=77 pi=[75,77)/1 crt=70'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:12:05 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 78 pg[10.1e( v 70'64 (0'0,70'64] local-lis/les=77/78 n=0 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=77) [0] r=0 lpr=77 pi=[75,77)/1 crt=70'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:12:05 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 78 pg[11.1( v 70'2 (0'0,70'2] local-lis/les=77/78 n=1 ec=75/45 lis/c=75/75 les/c/f=76/76/0 sis=77) [0] r=0 lpr=77 pi=[75,77)/1 crt=70'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:12:05 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 78 pg[11.17( v 70'2 (0'0,70'2] local-lis/les=77/78 n=0 ec=75/45 lis/c=75/75 les/c/f=76/76/0 sis=77) [0] r=0 lpr=77 pi=[75,77)/1 crt=70'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:12:05 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 78 pg[8.e( v 40'4 (0'0,40'4] local-lis/les=77/78 n=0 ec=73/39 lis/c=73/73 les/c/f=75/75/0 sis=77) [0] r=0 lpr=77 pi=[73,77)/1 crt=40'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:12:05 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 78 pg[8.14( v 40'4 (0'0,40'4] local-lis/les=77/78 n=0 ec=73/39 lis/c=73/73 les/c/f=75/75/0 sis=77) [0] r=0 lpr=77 pi=[73,77)/1 crt=40'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:12:05 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 78 pg[10.16( v 70'64 (0'0,70'64] local-lis/les=77/78 n=0 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=77) [0] r=0 lpr=77 pi=[75,77)/1 crt=70'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:12:05 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 78 pg[10.1( v 70'64 (0'0,70'64] local-lis/les=77/78 n=1 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=77) [0] r=0 lpr=77 pi=[75,77)/1 crt=70'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:12:05 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 78 pg[8.1d( v 40'4 (0'0,40'4] local-lis/les=77/78 n=0 ec=73/39 lis/c=73/73 les/c/f=75/75/0 sis=77) [0] r=0 lpr=77 pi=[73,77)/1 crt=40'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:12:05 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 78 pg[11.19( v 70'2 (0'0,70'2] local-lis/les=77/78 n=0 ec=75/45 lis/c=75/75 les/c/f=76/76/0 sis=77) [0] r=0 lpr=77 pi=[75,77)/1 crt=70'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:12:05 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 78 pg[8.1a( v 40'4 (0'0,40'4] local-lis/les=77/78 n=0 ec=73/39 lis/c=73/73 les/c/f=75/75/0 sis=77) [0] r=0 lpr=77 pi=[73,77)/1 crt=40'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:12:05 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 78 pg[8.18( v 40'4 (0'0,40'4] local-lis/les=77/78 n=0 ec=73/39 lis/c=73/73 les/c/f=75/75/0 sis=77) [0] r=0 lpr=77 pi=[73,77)/1 crt=40'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:12:05 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 78 pg[8.1f( v 40'4 (0'0,40'4] local-lis/les=77/78 n=0 ec=73/39 lis/c=73/73 les/c/f=75/75/0 sis=77) [0] r=0 lpr=77 pi=[73,77)/1 crt=40'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:12:05 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 78 pg[8.6( v 40'4 (0'0,40'4] local-lis/les=77/78 n=0 ec=73/39 lis/c=73/73 les/c/f=75/75/0 sis=77) [0] r=0 lpr=77 pi=[73,77)/1 crt=40'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:12:05 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 78 pg[11.6( v 70'2 (0'0,70'2] local-lis/les=77/78 n=0 ec=75/45 lis/c=75/75 les/c/f=76/76/0 sis=77) [0] r=0 lpr=77 pi=[75,77)/1 crt=70'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:12:05 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 78 pg[8.9( v 40'4 (0'0,40'4] local-lis/les=77/78 n=0 ec=73/39 lis/c=73/73 les/c/f=75/75/0 sis=77) [0] r=0 lpr=77 pi=[73,77)/1 crt=40'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:12:05 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 78 pg[11.10( v 70'2 (0'0,70'2] local-lis/les=77/78 n=0 ec=75/45 lis/c=75/75 les/c/f=76/76/0 sis=77) [0] r=0 lpr=77 pi=[75,77)/1 crt=70'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:12:05 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e78 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:12:05 np0005464214 ceph-osd[89484]: log_channel(cluster) log [DBG] : 5.11 deep-scrub starts
Oct  1 09:12:05 np0005464214 ceph-osd[89484]: log_channel(cluster) log [DBG] : 5.11 deep-scrub ok
Oct  1 09:12:05 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v168: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:12:05 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"} v 0) v1
Oct  1 09:12:05 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]: dispatch
Oct  1 09:12:06 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e78 do_prune osdmap full prune enabled
Oct  1 09:12:06 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]: dispatch
Oct  1 09:12:06 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]': finished
Oct  1 09:12:06 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e79 e79: 3 total, 3 up, 3 in
Oct  1 09:12:06 np0005464214 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e79: 3 total, 3 up, 3 in
Oct  1 09:12:06 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 79 pg[9.1b( v 70'389 (0'0,70'389] local-lis/les=78/79 n=5 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=78) [0]/[1] async=[0] r=0 lpr=78 pi=[73,78)/1 crt=70'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:12:06 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 79 pg[9.1f( v 70'389 (0'0,70'389] local-lis/les=78/79 n=5 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=78) [0]/[1] async=[0] r=0 lpr=78 pi=[73,78)/1 crt=70'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:12:06 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 79 pg[9.1d( v 70'389 (0'0,70'389] local-lis/les=78/79 n=5 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=78) [0]/[1] async=[0] r=0 lpr=78 pi=[73,78)/1 crt=70'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:12:06 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 79 pg[9.19( v 70'389 (0'0,70'389] local-lis/les=78/79 n=5 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=78) [0]/[1] async=[0] r=0 lpr=78 pi=[73,78)/1 crt=70'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:12:06 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 79 pg[9.f( v 70'389 (0'0,70'389] local-lis/les=78/79 n=6 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=78) [0]/[1] async=[0] r=0 lpr=78 pi=[73,78)/1 crt=70'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:12:06 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 79 pg[9.3( v 70'389 (0'0,70'389] local-lis/les=78/79 n=6 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=78) [0]/[1] async=[0] r=0 lpr=78 pi=[73,78)/1 crt=70'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:12:06 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 79 pg[9.1( v 70'389 (0'0,70'389] local-lis/les=78/79 n=6 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=78) [0]/[1] async=[0] r=0 lpr=78 pi=[73,78)/1 crt=70'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:12:06 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 79 pg[9.9( v 70'389 (0'0,70'389] local-lis/les=78/79 n=6 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=78) [0]/[1] async=[0] r=0 lpr=78 pi=[73,78)/1 crt=70'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:12:06 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 79 pg[9.15( v 70'389 (0'0,70'389] local-lis/les=78/79 n=5 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=78) [0]/[1] async=[0] r=0 lpr=78 pi=[73,78)/1 crt=70'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:12:06 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 79 pg[9.11( v 70'389 (0'0,70'389] local-lis/les=78/79 n=6 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=78) [0]/[1] async=[0] r=0 lpr=78 pi=[73,78)/1 crt=70'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:12:06 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 79 pg[9.b( v 70'389 (0'0,70'389] local-lis/les=78/79 n=6 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=78) [0]/[1] async=[0] r=0 lpr=78 pi=[73,78)/1 crt=70'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:12:06 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 79 pg[9.5( v 70'389 (0'0,70'389] local-lis/les=78/79 n=6 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=78) [0]/[1] async=[0] r=0 lpr=78 pi=[73,78)/1 crt=70'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:12:06 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 79 pg[9.7( v 70'389 (0'0,70'389] local-lis/les=78/79 n=6 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=78) [0]/[1] async=[0] r=0 lpr=78 pi=[73,78)/1 crt=70'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:12:06 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 79 pg[9.d( v 70'389 (0'0,70'389] local-lis/les=78/79 n=6 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=78) [0]/[1] async=[0] r=0 lpr=78 pi=[73,78)/1 crt=70'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:12:06 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 79 pg[9.17( v 70'389 (0'0,70'389] local-lis/les=78/79 n=5 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=78) [0]/[1] async=[0] r=0 lpr=78 pi=[73,78)/1 crt=70'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:12:06 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 79 pg[9.13( v 70'389 (0'0,70'389] local-lis/les=78/79 n=5 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=78) [0]/[1] async=[0] r=0 lpr=78 pi=[73,78)/1 crt=70'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:12:06 np0005464214 ceph-osd[90500]: log_channel(cluster) log [DBG] : 5.1b scrub starts
Oct  1 09:12:06 np0005464214 ceph-osd[90500]: log_channel(cluster) log [DBG] : 5.1b scrub ok
Oct  1 09:12:07 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e79 do_prune osdmap full prune enabled
Oct  1 09:12:07 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]': finished
Oct  1 09:12:07 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e80 e80: 3 total, 3 up, 3 in
Oct  1 09:12:07 np0005464214 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e80: 3 total, 3 up, 3 in
Oct  1 09:12:07 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 80 pg[9.15( v 70'389 (0'0,70'389] local-lis/les=78/79 n=5 ec=73/41 lis/c=78/73 les/c/f=79/75/0 sis=80 pruub=14.944107056s) [0] async=[0] r=-1 lpr=80 pi=[73,80)/1 crt=70'389 lcod 0'0 mlcod 0'0 active pruub 141.227615356s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:12:07 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 80 pg[9.17( v 70'389 (0'0,70'389] local-lis/les=78/79 n=5 ec=73/41 lis/c=78/73 les/c/f=79/75/0 sis=80 pruub=14.944192886s) [0] async=[0] r=-1 lpr=80 pi=[73,80)/1 crt=70'389 lcod 0'0 mlcod 0'0 active pruub 141.227813721s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:12:07 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 80 pg[9.15( v 70'389 (0'0,70'389] local-lis/les=78/79 n=5 ec=73/41 lis/c=78/73 les/c/f=79/75/0 sis=80 pruub=14.943981171s) [0] r=-1 lpr=80 pi=[73,80)/1 crt=70'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 141.227615356s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 09:12:07 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 80 pg[9.17( v 70'389 (0'0,70'389] local-lis/les=78/79 n=5 ec=73/41 lis/c=78/73 les/c/f=79/75/0 sis=80 pruub=14.944096565s) [0] r=-1 lpr=80 pi=[73,80)/1 crt=70'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 141.227813721s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 09:12:07 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 80 pg[9.11( v 70'389 (0'0,70'389] local-lis/les=78/79 n=6 ec=73/41 lis/c=78/73 les/c/f=79/75/0 sis=80 pruub=14.943475723s) [0] async=[0] r=-1 lpr=80 pi=[73,80)/1 crt=70'389 lcod 0'0 mlcod 0'0 active pruub 141.227645874s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:12:07 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 80 pg[9.11( v 70'389 (0'0,70'389] local-lis/les=78/79 n=6 ec=73/41 lis/c=78/73 les/c/f=79/75/0 sis=80 pruub=14.943406105s) [0] r=-1 lpr=80 pi=[73,80)/1 crt=70'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 141.227645874s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 09:12:07 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 80 pg[9.3( v 70'389 (0'0,70'389] local-lis/les=78/79 n=6 ec=73/41 lis/c=78/73 les/c/f=79/75/0 sis=80 pruub=14.943039894s) [0] async=[0] r=-1 lpr=80 pi=[73,80)/1 crt=70'389 lcod 0'0 mlcod 0'0 active pruub 141.227432251s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:12:07 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 80 pg[9.3( v 70'389 (0'0,70'389] local-lis/les=78/79 n=6 ec=73/41 lis/c=78/73 les/c/f=79/75/0 sis=80 pruub=14.942906380s) [0] r=-1 lpr=80 pi=[73,80)/1 crt=70'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 141.227432251s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 09:12:07 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 80 pg[9.d( v 70'389 (0'0,70'389] local-lis/les=78/79 n=6 ec=73/41 lis/c=78/73 les/c/f=79/75/0 sis=80 pruub=14.942847252s) [0] async=[0] r=-1 lpr=80 pi=[73,80)/1 crt=70'389 lcod 0'0 mlcod 0'0 active pruub 141.227722168s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:12:07 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 80 pg[9.f( v 70'389 (0'0,70'389] local-lis/les=78/79 n=6 ec=73/41 lis/c=78/73 les/c/f=79/75/0 sis=80 pruub=14.941985130s) [0] async=[0] r=-1 lpr=80 pi=[73,80)/1 crt=70'389 lcod 0'0 mlcod 0'0 active pruub 141.227172852s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:12:07 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 80 pg[9.f( v 70'389 (0'0,70'389] local-lis/les=78/79 n=6 ec=73/41 lis/c=78/73 les/c/f=79/75/0 sis=80 pruub=14.941921234s) [0] r=-1 lpr=80 pi=[73,80)/1 crt=70'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 141.227172852s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 09:12:07 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 80 pg[9.9( v 70'389 (0'0,70'389] local-lis/les=78/79 n=6 ec=73/41 lis/c=78/73 les/c/f=79/75/0 sis=80 pruub=14.942126274s) [0] async=[0] r=-1 lpr=80 pi=[73,80)/1 crt=70'389 lcod 0'0 mlcod 0'0 active pruub 141.227569580s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:12:07 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 80 pg[9.9( v 70'389 (0'0,70'389] local-lis/les=78/79 n=6 ec=73/41 lis/c=78/73 les/c/f=79/75/0 sis=80 pruub=14.941932678s) [0] r=-1 lpr=80 pi=[73,80)/1 crt=70'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 141.227569580s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 09:12:07 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 80 pg[9.1( v 70'389 (0'0,70'389] local-lis/les=78/79 n=6 ec=73/41 lis/c=78/73 les/c/f=79/75/0 sis=80 pruub=14.941720963s) [0] async=[0] r=-1 lpr=80 pi=[73,80)/1 crt=70'389 lcod 0'0 mlcod 0'0 active pruub 141.227447510s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:12:07 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 80 pg[9.1( v 70'389 (0'0,70'389] local-lis/les=78/79 n=6 ec=73/41 lis/c=78/73 les/c/f=79/75/0 sis=80 pruub=14.941661835s) [0] r=-1 lpr=80 pi=[73,80)/1 crt=70'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 141.227447510s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 09:12:07 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 80 pg[9.d( v 70'389 (0'0,70'389] local-lis/les=78/79 n=6 ec=73/41 lis/c=78/73 les/c/f=79/75/0 sis=80 pruub=14.942129135s) [0] r=-1 lpr=80 pi=[73,80)/1 crt=70'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 141.227722168s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 09:12:07 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 80 pg[9.7( v 70'389 (0'0,70'389] local-lis/les=78/79 n=6 ec=73/41 lis/c=78/73 les/c/f=79/75/0 sis=80 pruub=14.941451073s) [0] async=[0] r=-1 lpr=80 pi=[73,80)/1 crt=70'389 lcod 0'0 mlcod 0'0 active pruub 141.227752686s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:12:07 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 80 pg[9.5( v 70'389 (0'0,70'389] local-lis/les=78/79 n=6 ec=73/41 lis/c=78/73 les/c/f=79/75/0 sis=80 pruub=14.941294670s) [0] async=[0] r=-1 lpr=80 pi=[73,80)/1 crt=70'389 lcod 0'0 mlcod 0'0 active pruub 141.227722168s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:12:07 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 80 pg[9.5( v 70'389 (0'0,70'389] local-lis/les=78/79 n=6 ec=73/41 lis/c=78/73 les/c/f=79/75/0 sis=80 pruub=14.941205978s) [0] r=-1 lpr=80 pi=[73,80)/1 crt=70'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 141.227722168s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 09:12:07 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 80 pg[9.19( v 70'389 (0'0,70'389] local-lis/les=78/79 n=5 ec=73/41 lis/c=78/73 les/c/f=79/75/0 sis=80 pruub=14.940401077s) [0] async=[0] r=-1 lpr=80 pi=[73,80)/1 crt=70'389 lcod 0'0 mlcod 0'0 active pruub 141.227157593s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:12:07 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 80 pg[9.7( v 70'389 (0'0,70'389] local-lis/les=78/79 n=6 ec=73/41 lis/c=78/73 les/c/f=79/75/0 sis=80 pruub=14.941008568s) [0] r=-1 lpr=80 pi=[73,80)/1 crt=70'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 141.227752686s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 09:12:07 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 80 pg[9.19( v 70'389 (0'0,70'389] local-lis/les=78/79 n=5 ec=73/41 lis/c=78/73 les/c/f=79/75/0 sis=80 pruub=14.940311432s) [0] r=-1 lpr=80 pi=[73,80)/1 crt=70'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 141.227157593s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 09:12:07 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 80 pg[9.1d( v 70'389 (0'0,70'389] local-lis/les=78/79 n=5 ec=73/41 lis/c=78/73 les/c/f=79/75/0 sis=80 pruub=14.940115929s) [0] async=[0] r=-1 lpr=80 pi=[73,80)/1 crt=70'389 lcod 0'0 mlcod 0'0 active pruub 141.227111816s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:12:07 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 80 pg[9.1d( v 70'389 (0'0,70'389] local-lis/les=78/79 n=5 ec=73/41 lis/c=78/73 les/c/f=79/75/0 sis=80 pruub=14.940026283s) [0] r=-1 lpr=80 pi=[73,80)/1 crt=70'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 141.227111816s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 09:12:07 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 80 pg[9.13( v 70'389 (0'0,70'389] local-lis/les=78/79 n=5 ec=73/41 lis/c=78/73 les/c/f=79/75/0 sis=80 pruub=14.940593719s) [0] async=[0] r=-1 lpr=80 pi=[73,80)/1 crt=70'389 lcod 0'0 mlcod 0'0 active pruub 141.227874756s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:12:07 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 80 pg[9.1f( v 70'389 (0'0,70'389] local-lis/les=78/79 n=5 ec=73/41 lis/c=78/73 les/c/f=79/75/0 sis=80 pruub=14.940309525s) [0] async=[0] r=-1 lpr=80 pi=[73,80)/1 crt=70'389 lcod 0'0 mlcod 0'0 active pruub 141.227096558s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:12:07 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 80 pg[9.13( v 70'389 (0'0,70'389] local-lis/les=78/79 n=5 ec=73/41 lis/c=78/73 les/c/f=79/75/0 sis=80 pruub=14.940491676s) [0] r=-1 lpr=80 pi=[73,80)/1 crt=70'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 141.227874756s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 09:12:07 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 80 pg[9.b( v 70'389 (0'0,70'389] local-lis/les=78/79 n=6 ec=73/41 lis/c=78/73 les/c/f=79/75/0 sis=80 pruub=14.940132141s) [0] async=[0] r=-1 lpr=80 pi=[73,80)/1 crt=70'389 lcod 0'0 mlcod 0'0 active pruub 141.227661133s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:12:07 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 80 pg[9.b( v 70'389 (0'0,70'389] local-lis/les=78/79 n=6 ec=73/41 lis/c=78/73 les/c/f=79/75/0 sis=80 pruub=14.940086365s) [0] r=-1 lpr=80 pi=[73,80)/1 crt=70'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 141.227661133s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 09:12:07 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 80 pg[9.1f( v 70'389 (0'0,70'389] local-lis/les=78/79 n=5 ec=73/41 lis/c=78/73 les/c/f=79/75/0 sis=80 pruub=14.939704895s) [0] r=-1 lpr=80 pi=[73,80)/1 crt=70'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 141.227096558s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 09:12:07 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 80 pg[9.1b( v 70'389 (0'0,70'389] local-lis/les=78/79 n=5 ec=73/41 lis/c=78/73 les/c/f=79/75/0 sis=80 pruub=14.933441162s) [0] async=[0] r=-1 lpr=80 pi=[73,80)/1 crt=70'389 lcod 0'0 mlcod 0'0 active pruub 141.221206665s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:12:07 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 80 pg[9.1b( v 70'389 (0'0,70'389] local-lis/les=78/79 n=5 ec=73/41 lis/c=78/73 les/c/f=79/75/0 sis=80 pruub=14.933380127s) [0] r=-1 lpr=80 pi=[73,80)/1 crt=70'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 141.221206665s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 09:12:07 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 80 pg[9.15( v 70'389 (0'0,70'389] local-lis/les=0/0 n=5 ec=73/41 lis/c=78/73 les/c/f=79/75/0 sis=80) [0] r=0 lpr=80 pi=[73,80)/1 luod=0'0 crt=70'389 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:12:07 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 80 pg[9.15( v 70'389 (0'0,70'389] local-lis/les=0/0 n=5 ec=73/41 lis/c=78/73 les/c/f=79/75/0 sis=80) [0] r=0 lpr=80 pi=[73,80)/1 crt=70'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:12:07 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 80 pg[9.17( v 70'389 (0'0,70'389] local-lis/les=0/0 n=5 ec=73/41 lis/c=78/73 les/c/f=79/75/0 sis=80) [0] r=0 lpr=80 pi=[73,80)/1 luod=0'0 crt=70'389 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:12:07 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 80 pg[9.f( v 70'389 (0'0,70'389] local-lis/les=0/0 n=6 ec=73/41 lis/c=78/73 les/c/f=79/75/0 sis=80) [0] r=0 lpr=80 pi=[73,80)/1 luod=0'0 crt=70'389 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:12:07 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 80 pg[9.f( v 70'389 (0'0,70'389] local-lis/les=0/0 n=6 ec=73/41 lis/c=78/73 les/c/f=79/75/0 sis=80) [0] r=0 lpr=80 pi=[73,80)/1 crt=70'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:12:07 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 80 pg[9.9( v 70'389 (0'0,70'389] local-lis/les=0/0 n=6 ec=73/41 lis/c=78/73 les/c/f=79/75/0 sis=80) [0] r=0 lpr=80 pi=[73,80)/1 luod=0'0 crt=70'389 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:12:07 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 80 pg[9.9( v 70'389 (0'0,70'389] local-lis/les=0/0 n=6 ec=73/41 lis/c=78/73 les/c/f=79/75/0 sis=80) [0] r=0 lpr=80 pi=[73,80)/1 crt=70'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:12:07 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 80 pg[9.17( v 70'389 (0'0,70'389] local-lis/les=0/0 n=5 ec=73/41 lis/c=78/73 les/c/f=79/75/0 sis=80) [0] r=0 lpr=80 pi=[73,80)/1 crt=70'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:12:07 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 80 pg[9.11( v 70'389 (0'0,70'389] local-lis/les=0/0 n=6 ec=73/41 lis/c=78/73 les/c/f=79/75/0 sis=80) [0] r=0 lpr=80 pi=[73,80)/1 luod=0'0 crt=70'389 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:12:07 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 80 pg[9.d( v 70'389 (0'0,70'389] local-lis/les=0/0 n=6 ec=73/41 lis/c=78/73 les/c/f=79/75/0 sis=80) [0] r=0 lpr=80 pi=[73,80)/1 luod=0'0 crt=70'389 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:12:07 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 80 pg[9.11( v 70'389 (0'0,70'389] local-lis/les=0/0 n=6 ec=73/41 lis/c=78/73 les/c/f=79/75/0 sis=80) [0] r=0 lpr=80 pi=[73,80)/1 crt=70'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:12:07 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 80 pg[9.d( v 70'389 (0'0,70'389] local-lis/les=0/0 n=6 ec=73/41 lis/c=78/73 les/c/f=79/75/0 sis=80) [0] r=0 lpr=80 pi=[73,80)/1 crt=70'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:12:07 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 80 pg[9.1( v 70'389 (0'0,70'389] local-lis/les=0/0 n=6 ec=73/41 lis/c=78/73 les/c/f=79/75/0 sis=80) [0] r=0 lpr=80 pi=[73,80)/1 luod=0'0 crt=70'389 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:12:07 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 80 pg[9.1( v 70'389 (0'0,70'389] local-lis/les=0/0 n=6 ec=73/41 lis/c=78/73 les/c/f=79/75/0 sis=80) [0] r=0 lpr=80 pi=[73,80)/1 crt=70'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:12:07 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 80 pg[9.1b( v 70'389 (0'0,70'389] local-lis/les=0/0 n=5 ec=73/41 lis/c=78/73 les/c/f=79/75/0 sis=80) [0] r=0 lpr=80 pi=[73,80)/1 luod=0'0 crt=70'389 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:12:07 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 80 pg[9.1b( v 70'389 (0'0,70'389] local-lis/les=0/0 n=5 ec=73/41 lis/c=78/73 les/c/f=79/75/0 sis=80) [0] r=0 lpr=80 pi=[73,80)/1 crt=70'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:12:07 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 80 pg[9.3( v 70'389 (0'0,70'389] local-lis/les=0/0 n=6 ec=73/41 lis/c=78/73 les/c/f=79/75/0 sis=80) [0] r=0 lpr=80 pi=[73,80)/1 luod=0'0 crt=70'389 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:12:07 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 80 pg[9.3( v 70'389 (0'0,70'389] local-lis/les=0/0 n=6 ec=73/41 lis/c=78/73 les/c/f=79/75/0 sis=80) [0] r=0 lpr=80 pi=[73,80)/1 crt=70'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:12:07 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 80 pg[9.19( v 70'389 (0'0,70'389] local-lis/les=0/0 n=5 ec=73/41 lis/c=78/73 les/c/f=79/75/0 sis=80) [0] r=0 lpr=80 pi=[73,80)/1 luod=0'0 crt=70'389 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:12:07 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 80 pg[9.19( v 70'389 (0'0,70'389] local-lis/les=0/0 n=5 ec=73/41 lis/c=78/73 les/c/f=79/75/0 sis=80) [0] r=0 lpr=80 pi=[73,80)/1 crt=70'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:12:07 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 80 pg[9.1f( v 70'389 (0'0,70'389] local-lis/les=0/0 n=5 ec=73/41 lis/c=78/73 les/c/f=79/75/0 sis=80) [0] r=0 lpr=80 pi=[73,80)/1 luod=0'0 crt=70'389 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:12:07 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 80 pg[9.1f( v 70'389 (0'0,70'389] local-lis/les=0/0 n=5 ec=73/41 lis/c=78/73 les/c/f=79/75/0 sis=80) [0] r=0 lpr=80 pi=[73,80)/1 crt=70'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:12:07 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 80 pg[9.1d( v 70'389 (0'0,70'389] local-lis/les=0/0 n=5 ec=73/41 lis/c=78/73 les/c/f=79/75/0 sis=80) [0] r=0 lpr=80 pi=[73,80)/1 luod=0'0 crt=70'389 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:12:07 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 80 pg[9.1d( v 70'389 (0'0,70'389] local-lis/les=0/0 n=5 ec=73/41 lis/c=78/73 les/c/f=79/75/0 sis=80) [0] r=0 lpr=80 pi=[73,80)/1 crt=70'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:12:07 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 80 pg[9.b( v 70'389 (0'0,70'389] local-lis/les=0/0 n=6 ec=73/41 lis/c=78/73 les/c/f=79/75/0 sis=80) [0] r=0 lpr=80 pi=[73,80)/1 luod=0'0 crt=70'389 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:12:07 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 80 pg[9.b( v 70'389 (0'0,70'389] local-lis/les=0/0 n=6 ec=73/41 lis/c=78/73 les/c/f=79/75/0 sis=80) [0] r=0 lpr=80 pi=[73,80)/1 crt=70'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:12:07 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 80 pg[9.5( v 70'389 (0'0,70'389] local-lis/les=0/0 n=6 ec=73/41 lis/c=78/73 les/c/f=79/75/0 sis=80) [0] r=0 lpr=80 pi=[73,80)/1 luod=0'0 crt=70'389 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:12:07 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 80 pg[9.5( v 70'389 (0'0,70'389] local-lis/les=0/0 n=6 ec=73/41 lis/c=78/73 les/c/f=79/75/0 sis=80) [0] r=0 lpr=80 pi=[73,80)/1 crt=70'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:12:07 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 80 pg[9.13( v 70'389 (0'0,70'389] local-lis/les=0/0 n=5 ec=73/41 lis/c=78/73 les/c/f=79/75/0 sis=80) [0] r=0 lpr=80 pi=[73,80)/1 luod=0'0 crt=70'389 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:12:07 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 80 pg[9.13( v 70'389 (0'0,70'389] local-lis/les=0/0 n=5 ec=73/41 lis/c=78/73 les/c/f=79/75/0 sis=80) [0] r=0 lpr=80 pi=[73,80)/1 crt=70'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:12:07 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 80 pg[9.7( v 70'389 (0'0,70'389] local-lis/les=0/0 n=6 ec=73/41 lis/c=78/73 les/c/f=79/75/0 sis=80) [0] r=0 lpr=80 pi=[73,80)/1 luod=0'0 crt=70'389 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:12:07 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 80 pg[9.7( v 70'389 (0'0,70'389] local-lis/les=0/0 n=6 ec=73/41 lis/c=78/73 les/c/f=79/75/0 sis=80) [0] r=0 lpr=80 pi=[73,80)/1 crt=70'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:12:07 np0005464214 systemd-logind[818]: New session 34 of user zuul.
Oct  1 09:12:07 np0005464214 systemd[1]: Started Session 34 of User zuul.
Oct  1 09:12:07 np0005464214 ceph-osd[88455]: log_channel(cluster) log [DBG] : 5.2 scrub starts
Oct  1 09:12:07 np0005464214 ceph-osd[88455]: log_channel(cluster) log [DBG] : 5.2 scrub ok
Oct  1 09:12:07 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v171: 305 pgs: 16 peering, 289 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail; 1001 B/s, 24 objects/s recovering
Oct  1 09:12:08 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e80 do_prune osdmap full prune enabled
Oct  1 09:12:08 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e81 e81: 3 total, 3 up, 3 in
Oct  1 09:12:08 np0005464214 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e81: 3 total, 3 up, 3 in
Oct  1 09:12:08 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 81 pg[9.15( v 70'389 (0'0,70'389] local-lis/les=80/81 n=5 ec=73/41 lis/c=78/73 les/c/f=79/75/0 sis=80) [0] r=0 lpr=80 pi=[73,80)/1 crt=70'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:12:08 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 81 pg[9.1b( v 70'389 (0'0,70'389] local-lis/les=80/81 n=5 ec=73/41 lis/c=78/73 les/c/f=79/75/0 sis=80) [0] r=0 lpr=80 pi=[73,80)/1 crt=70'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:12:08 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 81 pg[9.19( v 70'389 (0'0,70'389] local-lis/les=80/81 n=5 ec=73/41 lis/c=78/73 les/c/f=79/75/0 sis=80) [0] r=0 lpr=80 pi=[73,80)/1 crt=70'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:12:08 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 81 pg[9.1f( v 70'389 (0'0,70'389] local-lis/les=80/81 n=5 ec=73/41 lis/c=78/73 les/c/f=79/75/0 sis=80) [0] r=0 lpr=80 pi=[73,80)/1 crt=70'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:12:08 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 81 pg[9.1d( v 70'389 (0'0,70'389] local-lis/les=80/81 n=5 ec=73/41 lis/c=78/73 les/c/f=79/75/0 sis=80) [0] r=0 lpr=80 pi=[73,80)/1 crt=70'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:12:08 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 81 pg[9.3( v 70'389 (0'0,70'389] local-lis/les=80/81 n=6 ec=73/41 lis/c=78/73 les/c/f=79/75/0 sis=80) [0] r=0 lpr=80 pi=[73,80)/1 crt=70'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:12:08 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 81 pg[9.1( v 70'389 (0'0,70'389] local-lis/les=80/81 n=6 ec=73/41 lis/c=78/73 les/c/f=79/75/0 sis=80) [0] r=0 lpr=80 pi=[73,80)/1 crt=70'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:12:08 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 81 pg[9.d( v 70'389 (0'0,70'389] local-lis/les=80/81 n=6 ec=73/41 lis/c=78/73 les/c/f=79/75/0 sis=80) [0] r=0 lpr=80 pi=[73,80)/1 crt=70'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:12:08 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 81 pg[9.f( v 70'389 (0'0,70'389] local-lis/les=80/81 n=6 ec=73/41 lis/c=78/73 les/c/f=79/75/0 sis=80) [0] r=0 lpr=80 pi=[73,80)/1 crt=70'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:12:08 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 81 pg[9.9( v 70'389 (0'0,70'389] local-lis/les=80/81 n=6 ec=73/41 lis/c=78/73 les/c/f=79/75/0 sis=80) [0] r=0 lpr=80 pi=[73,80)/1 crt=70'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:12:08 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 81 pg[9.17( v 70'389 (0'0,70'389] local-lis/les=80/81 n=5 ec=73/41 lis/c=78/73 les/c/f=79/75/0 sis=80) [0] r=0 lpr=80 pi=[73,80)/1 crt=70'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:12:08 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 81 pg[9.7( v 70'389 (0'0,70'389] local-lis/les=80/81 n=6 ec=73/41 lis/c=78/73 les/c/f=79/75/0 sis=80) [0] r=0 lpr=80 pi=[73,80)/1 crt=70'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:12:08 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 81 pg[9.b( v 70'389 (0'0,70'389] local-lis/les=80/81 n=6 ec=73/41 lis/c=78/73 les/c/f=79/75/0 sis=80) [0] r=0 lpr=80 pi=[73,80)/1 crt=70'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:12:08 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 81 pg[9.5( v 70'389 (0'0,70'389] local-lis/les=80/81 n=6 ec=73/41 lis/c=78/73 les/c/f=79/75/0 sis=80) [0] r=0 lpr=80 pi=[73,80)/1 crt=70'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:12:08 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 81 pg[9.11( v 70'389 (0'0,70'389] local-lis/les=80/81 n=6 ec=73/41 lis/c=78/73 les/c/f=79/75/0 sis=80) [0] r=0 lpr=80 pi=[73,80)/1 crt=70'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:12:08 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 81 pg[9.13( v 70'389 (0'0,70'389] local-lis/les=80/81 n=5 ec=73/41 lis/c=78/73 les/c/f=79/75/0 sis=80) [0] r=0 lpr=80 pi=[73,80)/1 crt=70'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:12:08 np0005464214 ceph-osd[89484]: log_channel(cluster) log [DBG] : 2.15 deep-scrub starts
Oct  1 09:12:08 np0005464214 ceph-osd[89484]: log_channel(cluster) log [DBG] : 2.15 deep-scrub ok
Oct  1 09:12:08 np0005464214 ceph-osd[88455]: log_channel(cluster) log [DBG] : 2.1d scrub starts
Oct  1 09:12:08 np0005464214 ceph-osd[88455]: log_channel(cluster) log [DBG] : 2.1d scrub ok
Oct  1 09:12:08 np0005464214 python3.9[104561]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  1 09:12:09 np0005464214 ceph-osd[89484]: log_channel(cluster) log [DBG] : 5.12 scrub starts
Oct  1 09:12:09 np0005464214 ceph-osd[89484]: log_channel(cluster) log [DBG] : 5.12 scrub ok
Oct  1 09:12:09 np0005464214 ceph-osd[88455]: log_channel(cluster) log [DBG] : 2.1c scrub starts
Oct  1 09:12:09 np0005464214 ceph-osd[88455]: log_channel(cluster) log [DBG] : 2.1c scrub ok
Oct  1 09:12:09 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v173: 305 pgs: 16 peering, 289 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail; 848 B/s, 20 objects/s recovering
Oct  1 09:12:10 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e81 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:12:10 np0005464214 python3.9[104779]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail#012pushd /var/tmp#012curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz#012pushd repo-setup-main#012python3 -m venv ./venv#012PBR_VERSION=0.0.0 ./venv/bin/pip install ./#012./venv/bin/repo-setup current-podified -b antelope#012popd#012rm -rf repo-setup-main#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  1 09:12:11 np0005464214 ceph-osd[89484]: log_channel(cluster) log [DBG] : 5.16 scrub starts
Oct  1 09:12:11 np0005464214 ceph-osd[89484]: log_channel(cluster) log [DBG] : 5.16 scrub ok
Oct  1 09:12:11 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v174: 305 pgs: 16 peering, 289 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail; 667 B/s, 16 objects/s recovering
Oct  1 09:12:12 np0005464214 ceph-osd[88455]: log_channel(cluster) log [DBG] : 2.b scrub starts
Oct  1 09:12:12 np0005464214 ceph-osd[88455]: log_channel(cluster) log [DBG] : 2.b scrub ok
Oct  1 09:12:13 np0005464214 ceph-osd[88455]: log_channel(cluster) log [DBG] : 2.8 scrub starts
Oct  1 09:12:13 np0005464214 ceph-osd[88455]: log_channel(cluster) log [DBG] : 2.8 scrub ok
Oct  1 09:12:13 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v175: 305 pgs: 305 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail; 519 B/s, 12 objects/s recovering
Oct  1 09:12:13 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"} v 0) v1
Oct  1 09:12:13 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]: dispatch
Oct  1 09:12:14 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e81 do_prune osdmap full prune enabled
Oct  1 09:12:14 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]: dispatch
Oct  1 09:12:14 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]': finished
Oct  1 09:12:14 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e82 e82: 3 total, 3 up, 3 in
Oct  1 09:12:14 np0005464214 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e82: 3 total, 3 up, 3 in
Oct  1 09:12:14 np0005464214 ceph-osd[89484]: log_channel(cluster) log [DBG] : 5.9 scrub starts
Oct  1 09:12:14 np0005464214 ceph-osd[89484]: log_channel(cluster) log [DBG] : 5.9 scrub ok
Oct  1 09:12:14 np0005464214 ceph-osd[88455]: log_channel(cluster) log [DBG] : 2.1f scrub starts
Oct  1 09:12:14 np0005464214 ceph-osd[88455]: log_channel(cluster) log [DBG] : 2.1f scrub ok
Oct  1 09:12:15 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]': finished
Oct  1 09:12:15 np0005464214 ceph-osd[89484]: log_channel(cluster) log [DBG] : 5.13 deep-scrub starts
Oct  1 09:12:15 np0005464214 ceph-osd[89484]: log_channel(cluster) log [DBG] : 5.13 deep-scrub ok
Oct  1 09:12:15 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e82 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:12:15 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v177: 305 pgs: 305 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:12:15 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"} v 0) v1
Oct  1 09:12:15 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]: dispatch
Oct  1 09:12:16 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e82 do_prune osdmap full prune enabled
Oct  1 09:12:16 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]: dispatch
Oct  1 09:12:16 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]': finished
Oct  1 09:12:16 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e83 e83: 3 total, 3 up, 3 in
Oct  1 09:12:16 np0005464214 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e83: 3 total, 3 up, 3 in
Oct  1 09:12:16 np0005464214 ceph-osd[88455]: log_channel(cluster) log [DBG] : 5.4 scrub starts
Oct  1 09:12:16 np0005464214 ceph-osd[88455]: log_channel(cluster) log [DBG] : 5.4 scrub ok
Oct  1 09:12:17 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  1 09:12:17 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  1 09:12:17 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  1 09:12:17 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  1 09:12:17 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  1 09:12:17 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:12:17 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev f4946727-9f61-4e83-8ac5-fe8cc7c5ff30 does not exist
Oct  1 09:12:17 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev ac4532e0-3dbb-4be0-852f-02cd916e9dbd does not exist
Oct  1 09:12:17 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev e7a6cdfd-3696-466a-9d6d-2d77f9433e2f does not exist
Oct  1 09:12:17 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  1 09:12:17 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  1 09:12:17 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  1 09:12:17 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  1 09:12:17 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  1 09:12:17 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  1 09:12:17 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]': finished
Oct  1 09:12:17 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  1 09:12:17 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:12:17 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  1 09:12:17 np0005464214 ceph-osd[89484]: log_channel(cluster) log [DBG] : 5.1 scrub starts
Oct  1 09:12:17 np0005464214 ceph-osd[89484]: log_channel(cluster) log [DBG] : 5.1 scrub ok
Oct  1 09:12:17 np0005464214 podman[105107]: 2025-10-01 13:12:17.734802778 +0000 UTC m=+0.054842126 container create 91864f5a045eaba4211fbc754df4627142b626584e9c9acd17ded200096baf0d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_chatterjee, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  1 09:12:17 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v179: 305 pgs: 305 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:12:17 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"} v 0) v1
Oct  1 09:12:17 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]: dispatch
Oct  1 09:12:17 np0005464214 systemd[1]: Started libpod-conmon-91864f5a045eaba4211fbc754df4627142b626584e9c9acd17ded200096baf0d.scope.
Oct  1 09:12:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 09:12:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 09:12:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 09:12:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 09:12:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 09:12:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 09:12:17 np0005464214 podman[105107]: 2025-10-01 13:12:17.704994304 +0000 UTC m=+0.025033692 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 09:12:17 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:12:17 np0005464214 podman[105107]: 2025-10-01 13:12:17.820260551 +0000 UTC m=+0.140299979 container init 91864f5a045eaba4211fbc754df4627142b626584e9c9acd17ded200096baf0d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_chatterjee, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 09:12:17 np0005464214 podman[105107]: 2025-10-01 13:12:17.82585762 +0000 UTC m=+0.145896968 container start 91864f5a045eaba4211fbc754df4627142b626584e9c9acd17ded200096baf0d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_chatterjee, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 09:12:17 np0005464214 podman[105107]: 2025-10-01 13:12:17.828789034 +0000 UTC m=+0.148828472 container attach 91864f5a045eaba4211fbc754df4627142b626584e9c9acd17ded200096baf0d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_chatterjee, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Oct  1 09:12:17 np0005464214 competent_chatterjee[105123]: 167 167
Oct  1 09:12:17 np0005464214 systemd[1]: libpod-91864f5a045eaba4211fbc754df4627142b626584e9c9acd17ded200096baf0d.scope: Deactivated successfully.
Oct  1 09:12:17 np0005464214 conmon[105123]: conmon 91864f5a045eaba4211f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-91864f5a045eaba4211fbc754df4627142b626584e9c9acd17ded200096baf0d.scope/container/memory.events
Oct  1 09:12:17 np0005464214 podman[105107]: 2025-10-01 13:12:17.83301066 +0000 UTC m=+0.153050008 container died 91864f5a045eaba4211fbc754df4627142b626584e9c9acd17ded200096baf0d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_chatterjee, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct  1 09:12:17 np0005464214 systemd[1]: var-lib-containers-storage-overlay-d652e686bf8b6f8faac64841a5821f552b1c37b2a66b53293ebcd4d9f0405db2-merged.mount: Deactivated successfully.
Oct  1 09:12:17 np0005464214 systemd-logind[818]: Session 34 logged out. Waiting for processes to exit.
Oct  1 09:12:17 np0005464214 systemd[1]: session-34.scope: Deactivated successfully.
Oct  1 09:12:17 np0005464214 systemd[1]: session-34.scope: Consumed 8.571s CPU time.
Oct  1 09:12:17 np0005464214 systemd-logind[818]: Removed session 34.
Oct  1 09:12:17 np0005464214 podman[105107]: 2025-10-01 13:12:17.869556309 +0000 UTC m=+0.189595667 container remove 91864f5a045eaba4211fbc754df4627142b626584e9c9acd17ded200096baf0d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_chatterjee, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 09:12:17 np0005464214 systemd[1]: libpod-conmon-91864f5a045eaba4211fbc754df4627142b626584e9c9acd17ded200096baf0d.scope: Deactivated successfully.
Oct  1 09:12:17 np0005464214 ceph-osd[90500]: log_channel(cluster) log [DBG] : 5.1c deep-scrub starts
Oct  1 09:12:17 np0005464214 ceph-osd[90500]: log_channel(cluster) log [DBG] : 5.1c deep-scrub ok
Oct  1 09:12:18 np0005464214 podman[105145]: 2025-10-01 13:12:18.103456402 +0000 UTC m=+0.086942683 container create 7be42ee4aba5b4da7efbc133cc5e18fadc639f67231c8676500138a7d5369dd8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_bouman, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 09:12:18 np0005464214 podman[105145]: 2025-10-01 13:12:18.043696109 +0000 UTC m=+0.027182420 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 09:12:18 np0005464214 systemd[1]: Started libpod-conmon-7be42ee4aba5b4da7efbc133cc5e18fadc639f67231c8676500138a7d5369dd8.scope.
Oct  1 09:12:18 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:12:18 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a9f1961d0d9be0580f519a6837bded43e5b309d04ea19f05857807d6f701e253/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 09:12:18 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a9f1961d0d9be0580f519a6837bded43e5b309d04ea19f05857807d6f701e253/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 09:12:18 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a9f1961d0d9be0580f519a6837bded43e5b309d04ea19f05857807d6f701e253/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 09:12:18 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a9f1961d0d9be0580f519a6837bded43e5b309d04ea19f05857807d6f701e253/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 09:12:18 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a9f1961d0d9be0580f519a6837bded43e5b309d04ea19f05857807d6f701e253/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  1 09:12:18 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e83 do_prune osdmap full prune enabled
Oct  1 09:12:18 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]: dispatch
Oct  1 09:12:18 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]': finished
Oct  1 09:12:18 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e84 e84: 3 total, 3 up, 3 in
Oct  1 09:12:18 np0005464214 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e84: 3 total, 3 up, 3 in
Oct  1 09:12:18 np0005464214 podman[105145]: 2025-10-01 13:12:18.230427664 +0000 UTC m=+0.213913985 container init 7be42ee4aba5b4da7efbc133cc5e18fadc639f67231c8676500138a7d5369dd8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_bouman, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 09:12:18 np0005464214 podman[105145]: 2025-10-01 13:12:18.245249697 +0000 UTC m=+0.228735928 container start 7be42ee4aba5b4da7efbc133cc5e18fadc639f67231c8676500138a7d5369dd8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_bouman, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default)
Oct  1 09:12:18 np0005464214 podman[105145]: 2025-10-01 13:12:18.248370567 +0000 UTC m=+0.231857238 container attach 7be42ee4aba5b4da7efbc133cc5e18fadc639f67231c8676500138a7d5369dd8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_bouman, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 09:12:18 np0005464214 ceph-osd[90500]: log_channel(cluster) log [DBG] : 5.1f scrub starts
Oct  1 09:12:18 np0005464214 ceph-osd[90500]: log_channel(cluster) log [DBG] : 5.1f scrub ok
Oct  1 09:12:19 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]': finished
Oct  1 09:12:19 np0005464214 optimistic_bouman[105163]: --> passed data devices: 0 physical, 3 LVM
Oct  1 09:12:19 np0005464214 optimistic_bouman[105163]: --> relative data size: 1.0
Oct  1 09:12:19 np0005464214 optimistic_bouman[105163]: --> All data devices are unavailable
Oct  1 09:12:19 np0005464214 systemd[1]: libpod-7be42ee4aba5b4da7efbc133cc5e18fadc639f67231c8676500138a7d5369dd8.scope: Deactivated successfully.
Oct  1 09:12:19 np0005464214 podman[105145]: 2025-10-01 13:12:19.354311588 +0000 UTC m=+1.337797859 container died 7be42ee4aba5b4da7efbc133cc5e18fadc639f67231c8676500138a7d5369dd8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_bouman, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 09:12:19 np0005464214 systemd[1]: libpod-7be42ee4aba5b4da7efbc133cc5e18fadc639f67231c8676500138a7d5369dd8.scope: Consumed 1.057s CPU time.
Oct  1 09:12:19 np0005464214 systemd[1]: var-lib-containers-storage-overlay-a9f1961d0d9be0580f519a6837bded43e5b309d04ea19f05857807d6f701e253-merged.mount: Deactivated successfully.
Oct  1 09:12:19 np0005464214 podman[105145]: 2025-10-01 13:12:19.422848331 +0000 UTC m=+1.406334582 container remove 7be42ee4aba5b4da7efbc133cc5e18fadc639f67231c8676500138a7d5369dd8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_bouman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct  1 09:12:19 np0005464214 systemd[1]: libpod-conmon-7be42ee4aba5b4da7efbc133cc5e18fadc639f67231c8676500138a7d5369dd8.scope: Deactivated successfully.
Oct  1 09:12:19 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v181: 305 pgs: 305 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:12:19 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"} v 0) v1
Oct  1 09:12:19 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]: dispatch
Oct  1 09:12:20 np0005464214 podman[105345]: 2025-10-01 13:12:20.199878639 +0000 UTC m=+0.050432924 container create c4acdcb7a3279f3d3f59a464da879edb9c54450241e0b95720e3fe2b97145837 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_edison, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Oct  1 09:12:20 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e84 do_prune osdmap full prune enabled
Oct  1 09:12:20 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]: dispatch
Oct  1 09:12:20 np0005464214 systemd[1]: Started libpod-conmon-c4acdcb7a3279f3d3f59a464da879edb9c54450241e0b95720e3fe2b97145837.scope.
Oct  1 09:12:20 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]': finished
Oct  1 09:12:20 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e85 e85: 3 total, 3 up, 3 in
Oct  1 09:12:20 np0005464214 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e85: 3 total, 3 up, 3 in
Oct  1 09:12:20 np0005464214 podman[105345]: 2025-10-01 13:12:20.175435137 +0000 UTC m=+0.025989452 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 09:12:20 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:12:20 np0005464214 podman[105345]: 2025-10-01 13:12:20.288147153 +0000 UTC m=+0.138701438 container init c4acdcb7a3279f3d3f59a464da879edb9c54450241e0b95720e3fe2b97145837 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_edison, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 09:12:20 np0005464214 podman[105345]: 2025-10-01 13:12:20.295943763 +0000 UTC m=+0.146498078 container start c4acdcb7a3279f3d3f59a464da879edb9c54450241e0b95720e3fe2b97145837 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_edison, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct  1 09:12:20 np0005464214 stoic_edison[105361]: 167 167
Oct  1 09:12:20 np0005464214 podman[105345]: 2025-10-01 13:12:20.300410155 +0000 UTC m=+0.150964480 container attach c4acdcb7a3279f3d3f59a464da879edb9c54450241e0b95720e3fe2b97145837 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_edison, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 09:12:20 np0005464214 systemd[1]: libpod-c4acdcb7a3279f3d3f59a464da879edb9c54450241e0b95720e3fe2b97145837.scope: Deactivated successfully.
Oct  1 09:12:20 np0005464214 podman[105345]: 2025-10-01 13:12:20.303913277 +0000 UTC m=+0.154467592 container died c4acdcb7a3279f3d3f59a464da879edb9c54450241e0b95720e3fe2b97145837 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_edison, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507)
Oct  1 09:12:20 np0005464214 systemd[1]: var-lib-containers-storage-overlay-b7018ec60c02f03ba14f054ad60af82a2a0bf23c904c25d6e81914b8fa5f2402-merged.mount: Deactivated successfully.
Oct  1 09:12:20 np0005464214 podman[105345]: 2025-10-01 13:12:20.349536227 +0000 UTC m=+0.200090512 container remove c4acdcb7a3279f3d3f59a464da879edb9c54450241e0b95720e3fe2b97145837 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_edison, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 09:12:20 np0005464214 systemd[1]: libpod-conmon-c4acdcb7a3279f3d3f59a464da879edb9c54450241e0b95720e3fe2b97145837.scope: Deactivated successfully.
Oct  1 09:12:20 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e85 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:12:20 np0005464214 podman[105386]: 2025-10-01 13:12:20.586505108 +0000 UTC m=+0.062396187 container create fa87858deca7c6b4e64735a2317c7b192779a9851c2376adc93a881a8961f5e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_mcnulty, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 09:12:20 np0005464214 systemd[1]: Started libpod-conmon-fa87858deca7c6b4e64735a2317c7b192779a9851c2376adc93a881a8961f5e3.scope.
Oct  1 09:12:20 np0005464214 podman[105386]: 2025-10-01 13:12:20.563998468 +0000 UTC m=+0.039889547 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 09:12:20 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:12:20 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/67669ce5016f187b0132f2eaaebbf06888ca3023d8857d46494e68856d381ce1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 09:12:20 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/67669ce5016f187b0132f2eaaebbf06888ca3023d8857d46494e68856d381ce1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 09:12:20 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/67669ce5016f187b0132f2eaaebbf06888ca3023d8857d46494e68856d381ce1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 09:12:20 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/67669ce5016f187b0132f2eaaebbf06888ca3023d8857d46494e68856d381ce1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 09:12:20 np0005464214 podman[105386]: 2025-10-01 13:12:20.705235997 +0000 UTC m=+0.181127126 container init fa87858deca7c6b4e64735a2317c7b192779a9851c2376adc93a881a8961f5e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_mcnulty, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Oct  1 09:12:20 np0005464214 podman[105386]: 2025-10-01 13:12:20.716313291 +0000 UTC m=+0.192204360 container start fa87858deca7c6b4e64735a2317c7b192779a9851c2376adc93a881a8961f5e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_mcnulty, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default)
Oct  1 09:12:20 np0005464214 podman[105386]: 2025-10-01 13:12:20.720903888 +0000 UTC m=+0.196794967 container attach fa87858deca7c6b4e64735a2317c7b192779a9851c2376adc93a881a8961f5e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_mcnulty, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  1 09:12:21 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]': finished
Oct  1 09:12:21 np0005464214 ceph-osd[89484]: log_channel(cluster) log [DBG] : 2.d scrub starts
Oct  1 09:12:21 np0005464214 ceph-osd[89484]: log_channel(cluster) log [DBG] : 2.d scrub ok
Oct  1 09:12:21 np0005464214 clever_mcnulty[105402]: {
Oct  1 09:12:21 np0005464214 clever_mcnulty[105402]:    "0": [
Oct  1 09:12:21 np0005464214 clever_mcnulty[105402]:        {
Oct  1 09:12:21 np0005464214 clever_mcnulty[105402]:            "devices": [
Oct  1 09:12:21 np0005464214 clever_mcnulty[105402]:                "/dev/loop3"
Oct  1 09:12:21 np0005464214 clever_mcnulty[105402]:            ],
Oct  1 09:12:21 np0005464214 clever_mcnulty[105402]:            "lv_name": "ceph_lv0",
Oct  1 09:12:21 np0005464214 clever_mcnulty[105402]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  1 09:12:21 np0005464214 clever_mcnulty[105402]:            "lv_size": "21470642176",
Oct  1 09:12:21 np0005464214 clever_mcnulty[105402]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 09:12:21 np0005464214 clever_mcnulty[105402]:            "lv_uuid": "wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS",
Oct  1 09:12:21 np0005464214 clever_mcnulty[105402]:            "name": "ceph_lv0",
Oct  1 09:12:21 np0005464214 clever_mcnulty[105402]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  1 09:12:21 np0005464214 clever_mcnulty[105402]:            "tags": {
Oct  1 09:12:21 np0005464214 clever_mcnulty[105402]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  1 09:12:21 np0005464214 clever_mcnulty[105402]:                "ceph.block_uuid": "wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS",
Oct  1 09:12:21 np0005464214 clever_mcnulty[105402]:                "ceph.cephx_lockbox_secret": "",
Oct  1 09:12:21 np0005464214 clever_mcnulty[105402]:                "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 09:12:21 np0005464214 clever_mcnulty[105402]:                "ceph.cluster_name": "ceph",
Oct  1 09:12:21 np0005464214 clever_mcnulty[105402]:                "ceph.crush_device_class": "",
Oct  1 09:12:21 np0005464214 clever_mcnulty[105402]:                "ceph.encrypted": "0",
Oct  1 09:12:21 np0005464214 clever_mcnulty[105402]:                "ceph.osd_fsid": "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982",
Oct  1 09:12:21 np0005464214 clever_mcnulty[105402]:                "ceph.osd_id": "0",
Oct  1 09:12:21 np0005464214 clever_mcnulty[105402]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 09:12:21 np0005464214 clever_mcnulty[105402]:                "ceph.type": "block",
Oct  1 09:12:21 np0005464214 clever_mcnulty[105402]:                "ceph.vdo": "0"
Oct  1 09:12:21 np0005464214 clever_mcnulty[105402]:            },
Oct  1 09:12:21 np0005464214 clever_mcnulty[105402]:            "type": "block",
Oct  1 09:12:21 np0005464214 clever_mcnulty[105402]:            "vg_name": "ceph_vg0"
Oct  1 09:12:21 np0005464214 clever_mcnulty[105402]:        }
Oct  1 09:12:21 np0005464214 clever_mcnulty[105402]:    ],
Oct  1 09:12:21 np0005464214 clever_mcnulty[105402]:    "1": [
Oct  1 09:12:21 np0005464214 clever_mcnulty[105402]:        {
Oct  1 09:12:21 np0005464214 clever_mcnulty[105402]:            "devices": [
Oct  1 09:12:21 np0005464214 clever_mcnulty[105402]:                "/dev/loop4"
Oct  1 09:12:21 np0005464214 clever_mcnulty[105402]:            ],
Oct  1 09:12:21 np0005464214 clever_mcnulty[105402]:            "lv_name": "ceph_lv1",
Oct  1 09:12:21 np0005464214 clever_mcnulty[105402]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  1 09:12:21 np0005464214 clever_mcnulty[105402]:            "lv_size": "21470642176",
Oct  1 09:12:21 np0005464214 clever_mcnulty[105402]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f5852bc7-e830-489a-b8a9-42dfbbe71426,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 09:12:21 np0005464214 clever_mcnulty[105402]:            "lv_uuid": "BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY",
Oct  1 09:12:21 np0005464214 clever_mcnulty[105402]:            "name": "ceph_lv1",
Oct  1 09:12:21 np0005464214 clever_mcnulty[105402]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  1 09:12:21 np0005464214 clever_mcnulty[105402]:            "tags": {
Oct  1 09:12:21 np0005464214 clever_mcnulty[105402]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  1 09:12:21 np0005464214 clever_mcnulty[105402]:                "ceph.block_uuid": "BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY",
Oct  1 09:12:21 np0005464214 clever_mcnulty[105402]:                "ceph.cephx_lockbox_secret": "",
Oct  1 09:12:21 np0005464214 clever_mcnulty[105402]:                "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 09:12:21 np0005464214 clever_mcnulty[105402]:                "ceph.cluster_name": "ceph",
Oct  1 09:12:21 np0005464214 clever_mcnulty[105402]:                "ceph.crush_device_class": "",
Oct  1 09:12:21 np0005464214 clever_mcnulty[105402]:                "ceph.encrypted": "0",
Oct  1 09:12:21 np0005464214 clever_mcnulty[105402]:                "ceph.osd_fsid": "f5852bc7-e830-489a-b8a9-42dfbbe71426",
Oct  1 09:12:21 np0005464214 clever_mcnulty[105402]:                "ceph.osd_id": "1",
Oct  1 09:12:21 np0005464214 clever_mcnulty[105402]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 09:12:21 np0005464214 clever_mcnulty[105402]:                "ceph.type": "block",
Oct  1 09:12:21 np0005464214 clever_mcnulty[105402]:                "ceph.vdo": "0"
Oct  1 09:12:21 np0005464214 clever_mcnulty[105402]:            },
Oct  1 09:12:21 np0005464214 clever_mcnulty[105402]:            "type": "block",
Oct  1 09:12:21 np0005464214 clever_mcnulty[105402]:            "vg_name": "ceph_vg1"
Oct  1 09:12:21 np0005464214 clever_mcnulty[105402]:        }
Oct  1 09:12:21 np0005464214 clever_mcnulty[105402]:    ],
Oct  1 09:12:21 np0005464214 clever_mcnulty[105402]:    "2": [
Oct  1 09:12:21 np0005464214 clever_mcnulty[105402]:        {
Oct  1 09:12:21 np0005464214 clever_mcnulty[105402]:            "devices": [
Oct  1 09:12:21 np0005464214 clever_mcnulty[105402]:                "/dev/loop5"
Oct  1 09:12:21 np0005464214 clever_mcnulty[105402]:            ],
Oct  1 09:12:21 np0005464214 clever_mcnulty[105402]:            "lv_name": "ceph_lv2",
Oct  1 09:12:21 np0005464214 clever_mcnulty[105402]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  1 09:12:21 np0005464214 clever_mcnulty[105402]:            "lv_size": "21470642176",
Oct  1 09:12:21 np0005464214 clever_mcnulty[105402]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c4c937e2-a8a8-47c3-af37-fdedb6fff1f9,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 09:12:21 np0005464214 clever_mcnulty[105402]:            "lv_uuid": "mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK",
Oct  1 09:12:21 np0005464214 clever_mcnulty[105402]:            "name": "ceph_lv2",
Oct  1 09:12:21 np0005464214 clever_mcnulty[105402]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  1 09:12:21 np0005464214 clever_mcnulty[105402]:            "tags": {
Oct  1 09:12:21 np0005464214 clever_mcnulty[105402]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  1 09:12:21 np0005464214 clever_mcnulty[105402]:                "ceph.block_uuid": "mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK",
Oct  1 09:12:21 np0005464214 clever_mcnulty[105402]:                "ceph.cephx_lockbox_secret": "",
Oct  1 09:12:21 np0005464214 clever_mcnulty[105402]:                "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 09:12:21 np0005464214 clever_mcnulty[105402]:                "ceph.cluster_name": "ceph",
Oct  1 09:12:21 np0005464214 clever_mcnulty[105402]:                "ceph.crush_device_class": "",
Oct  1 09:12:21 np0005464214 clever_mcnulty[105402]:                "ceph.encrypted": "0",
Oct  1 09:12:21 np0005464214 clever_mcnulty[105402]:                "ceph.osd_fsid": "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9",
Oct  1 09:12:21 np0005464214 clever_mcnulty[105402]:                "ceph.osd_id": "2",
Oct  1 09:12:21 np0005464214 clever_mcnulty[105402]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 09:12:21 np0005464214 clever_mcnulty[105402]:                "ceph.type": "block",
Oct  1 09:12:21 np0005464214 clever_mcnulty[105402]:                "ceph.vdo": "0"
Oct  1 09:12:21 np0005464214 clever_mcnulty[105402]:            },
Oct  1 09:12:21 np0005464214 clever_mcnulty[105402]:            "type": "block",
Oct  1 09:12:21 np0005464214 clever_mcnulty[105402]:            "vg_name": "ceph_vg2"
Oct  1 09:12:21 np0005464214 clever_mcnulty[105402]:        }
Oct  1 09:12:21 np0005464214 clever_mcnulty[105402]:    ]
Oct  1 09:12:21 np0005464214 clever_mcnulty[105402]: }
Oct  1 09:12:21 np0005464214 systemd[1]: libpod-fa87858deca7c6b4e64735a2317c7b192779a9851c2376adc93a881a8961f5e3.scope: Deactivated successfully.
Oct  1 09:12:21 np0005464214 podman[105411]: 2025-10-01 13:12:21.566957774 +0000 UTC m=+0.034561667 container died fa87858deca7c6b4e64735a2317c7b192779a9851c2376adc93a881a8961f5e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_mcnulty, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 09:12:21 np0005464214 systemd[1]: var-lib-containers-storage-overlay-67669ce5016f187b0132f2eaaebbf06888ca3023d8857d46494e68856d381ce1-merged.mount: Deactivated successfully.
Oct  1 09:12:21 np0005464214 podman[105411]: 2025-10-01 13:12:21.63872151 +0000 UTC m=+0.106325343 container remove fa87858deca7c6b4e64735a2317c7b192779a9851c2376adc93a881a8961f5e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_mcnulty, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 09:12:21 np0005464214 systemd[1]: libpod-conmon-fa87858deca7c6b4e64735a2317c7b192779a9851c2376adc93a881a8961f5e3.scope: Deactivated successfully.
Oct  1 09:12:21 np0005464214 ceph-osd[88455]: log_channel(cluster) log [DBG] : 7.18 deep-scrub starts
Oct  1 09:12:21 np0005464214 ceph-osd[88455]: log_channel(cluster) log [DBG] : 7.18 deep-scrub ok
Oct  1 09:12:21 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v183: 305 pgs: 305 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:12:21 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"} v 0) v1
Oct  1 09:12:21 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]: dispatch
Oct  1 09:12:22 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e85 do_prune osdmap full prune enabled
Oct  1 09:12:22 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]: dispatch
Oct  1 09:12:22 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]': finished
Oct  1 09:12:22 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e86 e86: 3 total, 3 up, 3 in
Oct  1 09:12:22 np0005464214 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e86: 3 total, 3 up, 3 in
Oct  1 09:12:22 np0005464214 podman[105565]: 2025-10-01 13:12:22.430719937 +0000 UTC m=+0.053614715 container create 2bcceb7196ad5242e564bd9671e42f2cffd6e093afffaacac7f3ea8709009e4d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_jones, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct  1 09:12:22 np0005464214 systemd[1]: Started libpod-conmon-2bcceb7196ad5242e564bd9671e42f2cffd6e093afffaacac7f3ea8709009e4d.scope.
Oct  1 09:12:22 np0005464214 podman[105565]: 2025-10-01 13:12:22.405069247 +0000 UTC m=+0.027964085 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 09:12:22 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:12:22 np0005464214 podman[105565]: 2025-10-01 13:12:22.521083578 +0000 UTC m=+0.143978426 container init 2bcceb7196ad5242e564bd9671e42f2cffd6e093afffaacac7f3ea8709009e4d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_jones, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 09:12:22 np0005464214 podman[105565]: 2025-10-01 13:12:22.531420098 +0000 UTC m=+0.154314886 container start 2bcceb7196ad5242e564bd9671e42f2cffd6e093afffaacac7f3ea8709009e4d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_jones, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 09:12:22 np0005464214 bold_jones[105581]: 167 167
Oct  1 09:12:22 np0005464214 podman[105565]: 2025-10-01 13:12:22.535350395 +0000 UTC m=+0.158245183 container attach 2bcceb7196ad5242e564bd9671e42f2cffd6e093afffaacac7f3ea8709009e4d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_jones, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 09:12:22 np0005464214 systemd[1]: libpod-2bcceb7196ad5242e564bd9671e42f2cffd6e093afffaacac7f3ea8709009e4d.scope: Deactivated successfully.
Oct  1 09:12:22 np0005464214 podman[105565]: 2025-10-01 13:12:22.537370089 +0000 UTC m=+0.160264877 container died 2bcceb7196ad5242e564bd9671e42f2cffd6e093afffaacac7f3ea8709009e4d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_jones, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 09:12:22 np0005464214 systemd[1]: var-lib-containers-storage-overlay-423b2b325339e4ce00d90d6da4dc50db81d0448b0237167d79c67f52a85bd623-merged.mount: Deactivated successfully.
Oct  1 09:12:22 np0005464214 podman[105565]: 2025-10-01 13:12:22.58615354 +0000 UTC m=+0.209048338 container remove 2bcceb7196ad5242e564bd9671e42f2cffd6e093afffaacac7f3ea8709009e4d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_jones, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 09:12:22 np0005464214 systemd[1]: libpod-conmon-2bcceb7196ad5242e564bd9671e42f2cffd6e093afffaacac7f3ea8709009e4d.scope: Deactivated successfully.
Oct  1 09:12:22 np0005464214 ceph-osd[88455]: log_channel(cluster) log [DBG] : 7.9 scrub starts
Oct  1 09:12:22 np0005464214 ceph-osd[88455]: log_channel(cluster) log [DBG] : 7.9 scrub ok
Oct  1 09:12:22 np0005464214 podman[105608]: 2025-10-01 13:12:22.799881818 +0000 UTC m=+0.050983952 container create bdfa1648bfd8a70005bbb4691a61b30f127a81e736441db49538e136b8c4128d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_poitras, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Oct  1 09:12:22 np0005464214 systemd[1]: Started libpod-conmon-bdfa1648bfd8a70005bbb4691a61b30f127a81e736441db49538e136b8c4128d.scope.
Oct  1 09:12:22 np0005464214 podman[105608]: 2025-10-01 13:12:22.778465452 +0000 UTC m=+0.029567596 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 09:12:22 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:12:22 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/17b90e3228cb2c96059544f6a8e2b9be1f3887d5a3d773d5853efe89bf2beb58/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 09:12:22 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/17b90e3228cb2c96059544f6a8e2b9be1f3887d5a3d773d5853efe89bf2beb58/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 09:12:22 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/17b90e3228cb2c96059544f6a8e2b9be1f3887d5a3d773d5853efe89bf2beb58/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 09:12:22 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/17b90e3228cb2c96059544f6a8e2b9be1f3887d5a3d773d5853efe89bf2beb58/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 09:12:22 np0005464214 podman[105608]: 2025-10-01 13:12:22.916774207 +0000 UTC m=+0.167876331 container init bdfa1648bfd8a70005bbb4691a61b30f127a81e736441db49538e136b8c4128d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_poitras, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Oct  1 09:12:22 np0005464214 podman[105608]: 2025-10-01 13:12:22.924840905 +0000 UTC m=+0.175943039 container start bdfa1648bfd8a70005bbb4691a61b30f127a81e736441db49538e136b8c4128d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_poitras, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 09:12:22 np0005464214 podman[105608]: 2025-10-01 13:12:22.928841073 +0000 UTC m=+0.179943197 container attach bdfa1648bfd8a70005bbb4691a61b30f127a81e736441db49538e136b8c4128d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_poitras, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 09:12:22 np0005464214 ceph-osd[90500]: log_channel(cluster) log [DBG] : 7.1a deep-scrub starts
Oct  1 09:12:22 np0005464214 ceph-osd[90500]: log_channel(cluster) log [DBG] : 7.1a deep-scrub ok
Oct  1 09:12:23 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]': finished
Oct  1 09:12:23 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 86 pg[9.1f( v 70'389 (0'0,70'389] local-lis/les=80/81 n=5 ec=73/41 lis/c=80/80 les/c/f=81/81/0 sis=86 pruub=8.672709465s) [2] r=-1 lpr=86 pi=[80,86)/1 crt=70'389 mlcod 0'0 active pruub 156.260330200s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:12:23 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 86 pg[9.1f( v 70'389 (0'0,70'389] local-lis/les=80/81 n=5 ec=73/41 lis/c=80/80 les/c/f=81/81/0 sis=86 pruub=8.672541618s) [2] r=-1 lpr=86 pi=[80,86)/1 crt=70'389 mlcod 0'0 unknown NOTIFY pruub 156.260330200s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 09:12:23 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 86 pg[9.1f( empty local-lis/les=0/0 n=0 ec=73/41 lis/c=80/80 les/c/f=81/81/0 sis=86) [2] r=0 lpr=86 pi=[80,86)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:12:23 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 86 pg[9.16( empty local-lis/les=0/0 n=0 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=85) [2] r=0 lpr=86 pi=[73,85)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:12:23 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 86 pg[9.f( v 70'389 (0'0,70'389] local-lis/les=80/81 n=6 ec=73/41 lis/c=80/80 les/c/f=81/81/0 sis=86 pruub=8.670915604s) [2] r=-1 lpr=86 pi=[80,86)/1 crt=70'389 mlcod 0'0 active pruub 156.260543823s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:12:23 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 86 pg[9.f( v 70'389 (0'0,70'389] local-lis/les=80/81 n=6 ec=73/41 lis/c=80/80 les/c/f=81/81/0 sis=86 pruub=8.670770645s) [2] r=-1 lpr=86 pi=[80,86)/1 crt=70'389 mlcod 0'0 unknown NOTIFY pruub 156.260543823s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 09:12:23 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 86 pg[9.f( empty local-lis/les=0/0 n=0 ec=73/41 lis/c=80/80 les/c/f=81/81/0 sis=86) [2] r=0 lpr=86 pi=[80,86)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:12:23 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 86 pg[9.17( v 70'389 (0'0,70'389] local-lis/les=80/81 n=5 ec=73/41 lis/c=80/80 les/c/f=81/81/0 sis=86 pruub=8.670593262s) [2] r=-1 lpr=86 pi=[80,86)/1 crt=70'389 mlcod 0'0 active pruub 156.260620117s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:12:23 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 86 pg[9.17( v 70'389 (0'0,70'389] local-lis/les=80/81 n=5 ec=73/41 lis/c=80/80 les/c/f=81/81/0 sis=86 pruub=8.670250893s) [2] r=-1 lpr=86 pi=[80,86)/1 crt=70'389 mlcod 0'0 unknown NOTIFY pruub 156.260620117s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 09:12:23 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 86 pg[9.7( v 70'389 (0'0,70'389] local-lis/les=80/81 n=6 ec=73/41 lis/c=80/80 les/c/f=81/81/0 sis=86 pruub=8.670630455s) [2] r=-1 lpr=86 pi=[80,86)/1 crt=70'389 mlcod 0'0 active pruub 156.260833740s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:12:23 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 86 pg[9.7( v 70'389 (0'0,70'389] local-lis/les=80/81 n=6 ec=73/41 lis/c=80/80 les/c/f=81/81/0 sis=86 pruub=8.670126915s) [2] r=-1 lpr=86 pi=[80,86)/1 crt=70'389 mlcod 0'0 unknown NOTIFY pruub 156.260833740s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 09:12:23 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 86 pg[9.17( empty local-lis/les=0/0 n=0 ec=73/41 lis/c=80/80 les/c/f=81/81/0 sis=86) [2] r=0 lpr=86 pi=[80,86)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:12:23 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 85 pg[9.16( v 70'389 (0'0,70'389] local-lis/les=73/75 n=5 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=85 pruub=14.538872719s) [2] r=-1 lpr=85 pi=[73,85)/1 crt=70'389 lcod 0'0 mlcod 0'0 active pruub 157.134384155s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:12:23 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 86 pg[9.7( empty local-lis/les=0/0 n=0 ec=73/41 lis/c=80/80 les/c/f=81/81/0 sis=86) [2] r=0 lpr=86 pi=[80,86)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:12:23 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 86 pg[9.16( v 70'389 (0'0,70'389] local-lis/les=73/75 n=5 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=85 pruub=14.538788795s) [2] r=-1 lpr=85 pi=[73,85)/1 crt=70'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 157.134384155s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 09:12:23 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 85 pg[9.e( v 70'389 (0'0,70'389] local-lis/les=73/75 n=6 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=85 pruub=14.539843559s) [2] r=-1 lpr=85 pi=[73,85)/1 crt=70'389 lcod 0'0 mlcod 0'0 active pruub 157.135955811s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:12:23 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 86 pg[9.e( empty local-lis/les=0/0 n=0 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=85) [2] r=0 lpr=86 pi=[73,85)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:12:23 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 86 pg[9.e( v 70'389 (0'0,70'389] local-lis/les=73/75 n=6 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=85 pruub=14.539805412s) [2] r=-1 lpr=85 pi=[73,85)/1 crt=70'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 157.135955811s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 09:12:23 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 85 pg[9.6( v 70'389 (0'0,70'389] local-lis/les=73/75 n=6 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=85 pruub=14.540073395s) [2] r=-1 lpr=85 pi=[73,85)/1 crt=70'389 lcod 0'0 mlcod 0'0 active pruub 157.136550903s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:12:23 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 86 pg[9.6( v 70'389 (0'0,70'389] local-lis/les=73/75 n=6 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=85 pruub=14.539706230s) [2] r=-1 lpr=85 pi=[73,85)/1 crt=70'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 157.136550903s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 09:12:23 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 85 pg[9.1e( v 70'389 (0'0,70'389] local-lis/les=73/75 n=5 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=85 pruub=14.539821625s) [2] r=-1 lpr=85 pi=[73,85)/1 crt=70'389 lcod 0'0 mlcod 0'0 active pruub 157.137420654s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:12:23 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 86 pg[9.1e( v 70'389 (0'0,70'389] local-lis/les=73/75 n=5 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=85 pruub=14.539777756s) [2] r=-1 lpr=85 pi=[73,85)/1 crt=70'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 157.137420654s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 09:12:23 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 86 pg[9.6( empty local-lis/les=0/0 n=0 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=85) [2] r=0 lpr=86 pi=[73,85)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:12:23 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 86 pg[9.1e( empty local-lis/les=0/0 n=0 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=85) [2] r=0 lpr=86 pi=[73,85)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:12:23 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v185: 305 pgs: 305 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:12:23 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"} v 0) v1
Oct  1 09:12:23 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]: dispatch
Oct  1 09:12:23 np0005464214 confident_poitras[105625]: {
Oct  1 09:12:23 np0005464214 confident_poitras[105625]:    "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982": {
Oct  1 09:12:23 np0005464214 confident_poitras[105625]:        "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 09:12:23 np0005464214 confident_poitras[105625]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  1 09:12:23 np0005464214 confident_poitras[105625]:        "osd_id": 0,
Oct  1 09:12:23 np0005464214 confident_poitras[105625]:        "osd_uuid": "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982",
Oct  1 09:12:23 np0005464214 confident_poitras[105625]:        "type": "bluestore"
Oct  1 09:12:23 np0005464214 confident_poitras[105625]:    },
Oct  1 09:12:23 np0005464214 confident_poitras[105625]:    "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9": {
Oct  1 09:12:23 np0005464214 confident_poitras[105625]:        "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 09:12:23 np0005464214 confident_poitras[105625]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  1 09:12:23 np0005464214 confident_poitras[105625]:        "osd_id": 2,
Oct  1 09:12:23 np0005464214 confident_poitras[105625]:        "osd_uuid": "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9",
Oct  1 09:12:23 np0005464214 confident_poitras[105625]:        "type": "bluestore"
Oct  1 09:12:23 np0005464214 confident_poitras[105625]:    },
Oct  1 09:12:23 np0005464214 confident_poitras[105625]:    "f5852bc7-e830-489a-b8a9-42dfbbe71426": {
Oct  1 09:12:23 np0005464214 confident_poitras[105625]:        "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 09:12:23 np0005464214 confident_poitras[105625]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  1 09:12:23 np0005464214 confident_poitras[105625]:        "osd_id": 1,
Oct  1 09:12:23 np0005464214 confident_poitras[105625]:        "osd_uuid": "f5852bc7-e830-489a-b8a9-42dfbbe71426",
Oct  1 09:12:23 np0005464214 confident_poitras[105625]:        "type": "bluestore"
Oct  1 09:12:23 np0005464214 confident_poitras[105625]:    }
Oct  1 09:12:23 np0005464214 confident_poitras[105625]: }
Oct  1 09:12:24 np0005464214 systemd[1]: libpod-bdfa1648bfd8a70005bbb4691a61b30f127a81e736441db49538e136b8c4128d.scope: Deactivated successfully.
Oct  1 09:12:24 np0005464214 podman[105608]: 2025-10-01 13:12:24.002580634 +0000 UTC m=+1.253682768 container died bdfa1648bfd8a70005bbb4691a61b30f127a81e736441db49538e136b8c4128d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_poitras, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 09:12:24 np0005464214 systemd[1]: libpod-bdfa1648bfd8a70005bbb4691a61b30f127a81e736441db49538e136b8c4128d.scope: Consumed 1.082s CPU time.
Oct  1 09:12:24 np0005464214 systemd[1]: var-lib-containers-storage-overlay-17b90e3228cb2c96059544f6a8e2b9be1f3887d5a3d773d5853efe89bf2beb58-merged.mount: Deactivated successfully.
Oct  1 09:12:24 np0005464214 podman[105608]: 2025-10-01 13:12:24.107674095 +0000 UTC m=+1.358776229 container remove bdfa1648bfd8a70005bbb4691a61b30f127a81e736441db49538e136b8c4128d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_poitras, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Oct  1 09:12:24 np0005464214 systemd[1]: libpod-conmon-bdfa1648bfd8a70005bbb4691a61b30f127a81e736441db49538e136b8c4128d.scope: Deactivated successfully.
Oct  1 09:12:24 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  1 09:12:24 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:12:24 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  1 09:12:24 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:12:24 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev dc0e9743-c86e-4bf8-a157-b1a8b9179b29 does not exist
Oct  1 09:12:24 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev e92f1567-f0aa-4d2d-8561-efe1b5c800f1 does not exist
Oct  1 09:12:24 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e86 do_prune osdmap full prune enabled
Oct  1 09:12:24 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]: dispatch
Oct  1 09:12:24 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:12:24 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:12:24 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]': finished
Oct  1 09:12:24 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e87 e87: 3 total, 3 up, 3 in
Oct  1 09:12:24 np0005464214 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e87: 3 total, 3 up, 3 in
Oct  1 09:12:24 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 87 pg[9.1f( v 70'389 (0'0,70'389] local-lis/les=80/81 n=5 ec=73/41 lis/c=80/80 les/c/f=81/81/0 sis=87) [2]/[0] r=0 lpr=87 pi=[80,87)/1 crt=70'389 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:12:24 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 87 pg[9.1f( v 70'389 (0'0,70'389] local-lis/les=80/81 n=5 ec=73/41 lis/c=80/80 les/c/f=81/81/0 sis=87) [2]/[0] r=0 lpr=87 pi=[80,87)/1 crt=70'389 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  1 09:12:24 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 87 pg[9.f( v 70'389 (0'0,70'389] local-lis/les=80/81 n=6 ec=73/41 lis/c=80/80 les/c/f=81/81/0 sis=87) [2]/[0] r=0 lpr=87 pi=[80,87)/1 crt=70'389 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:12:24 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 87 pg[9.17( v 70'389 (0'0,70'389] local-lis/les=80/81 n=5 ec=73/41 lis/c=80/80 les/c/f=81/81/0 sis=87) [2]/[0] r=0 lpr=87 pi=[80,87)/1 crt=70'389 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:12:24 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 87 pg[9.17( v 70'389 (0'0,70'389] local-lis/les=80/81 n=5 ec=73/41 lis/c=80/80 les/c/f=81/81/0 sis=87) [2]/[0] r=0 lpr=87 pi=[80,87)/1 crt=70'389 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  1 09:12:24 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 87 pg[9.1f( empty local-lis/les=0/0 n=0 ec=73/41 lis/c=80/80 les/c/f=81/81/0 sis=87) [2]/[0] r=-1 lpr=87 pi=[80,87)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:12:24 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 87 pg[9.7( v 70'389 (0'0,70'389] local-lis/les=80/81 n=6 ec=73/41 lis/c=80/80 les/c/f=81/81/0 sis=87) [2]/[0] r=0 lpr=87 pi=[80,87)/1 crt=70'389 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:12:24 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 87 pg[9.f( v 70'389 (0'0,70'389] local-lis/les=80/81 n=6 ec=73/41 lis/c=80/80 les/c/f=81/81/0 sis=87) [2]/[0] r=0 lpr=87 pi=[80,87)/1 crt=70'389 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  1 09:12:24 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 87 pg[9.7( v 70'389 (0'0,70'389] local-lis/les=80/81 n=6 ec=73/41 lis/c=80/80 les/c/f=81/81/0 sis=87) [2]/[0] r=0 lpr=87 pi=[80,87)/1 crt=70'389 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  1 09:12:24 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 87 pg[9.1f( empty local-lis/les=0/0 n=0 ec=73/41 lis/c=80/80 les/c/f=81/81/0 sis=87) [2]/[0] r=-1 lpr=87 pi=[80,87)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  1 09:12:24 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 87 pg[9.e( empty local-lis/les=0/0 n=0 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=87) [2]/[1] r=-1 lpr=87 pi=[73,87)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:12:24 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 87 pg[9.e( empty local-lis/les=0/0 n=0 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=87) [2]/[1] r=-1 lpr=87 pi=[73,87)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  1 09:12:24 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 87 pg[9.1e( empty local-lis/les=0/0 n=0 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=87) [2]/[1] r=-1 lpr=87 pi=[73,87)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:12:24 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 87 pg[9.1e( empty local-lis/les=0/0 n=0 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=87) [2]/[1] r=-1 lpr=87 pi=[73,87)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  1 09:12:24 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 87 pg[9.6( empty local-lis/les=0/0 n=0 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=87) [2]/[1] r=-1 lpr=87 pi=[73,87)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:12:24 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 87 pg[9.6( empty local-lis/les=0/0 n=0 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=87) [2]/[1] r=-1 lpr=87 pi=[73,87)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  1 09:12:24 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 87 pg[9.7( empty local-lis/les=0/0 n=0 ec=73/41 lis/c=80/80 les/c/f=81/81/0 sis=87) [2]/[0] r=-1 lpr=87 pi=[80,87)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:12:24 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 87 pg[9.f( empty local-lis/les=0/0 n=0 ec=73/41 lis/c=80/80 les/c/f=81/81/0 sis=87) [2]/[0] r=-1 lpr=87 pi=[80,87)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:12:24 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 87 pg[9.7( empty local-lis/les=0/0 n=0 ec=73/41 lis/c=80/80 les/c/f=81/81/0 sis=87) [2]/[0] r=-1 lpr=87 pi=[80,87)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  1 09:12:24 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 87 pg[9.f( empty local-lis/les=0/0 n=0 ec=73/41 lis/c=80/80 les/c/f=81/81/0 sis=87) [2]/[0] r=-1 lpr=87 pi=[80,87)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  1 09:12:24 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 87 pg[9.17( empty local-lis/les=0/0 n=0 ec=73/41 lis/c=80/80 les/c/f=81/81/0 sis=87) [2]/[0] r=-1 lpr=87 pi=[80,87)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:12:24 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 87 pg[9.17( empty local-lis/les=0/0 n=0 ec=73/41 lis/c=80/80 les/c/f=81/81/0 sis=87) [2]/[0] r=-1 lpr=87 pi=[80,87)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  1 09:12:24 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 87 pg[9.16( empty local-lis/les=0/0 n=0 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=87) [2]/[1] r=-1 lpr=87 pi=[73,87)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:12:24 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 87 pg[9.16( empty local-lis/les=0/0 n=0 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=87) [2]/[1] r=-1 lpr=87 pi=[73,87)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  1 09:12:24 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 87 pg[9.6( v 70'389 (0'0,70'389] local-lis/les=73/75 n=6 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=87) [2]/[1] r=0 lpr=87 pi=[73,87)/1 crt=70'389 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:12:24 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 87 pg[9.e( v 70'389 (0'0,70'389] local-lis/les=73/75 n=6 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=87) [2]/[1] r=0 lpr=87 pi=[73,87)/1 crt=70'389 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:12:24 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 87 pg[9.1e( v 70'389 (0'0,70'389] local-lis/les=73/75 n=5 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=87) [2]/[1] r=0 lpr=87 pi=[73,87)/1 crt=70'389 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:12:24 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 87 pg[9.6( v 70'389 (0'0,70'389] local-lis/les=73/75 n=6 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=87) [2]/[1] r=0 lpr=87 pi=[73,87)/1 crt=70'389 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  1 09:12:24 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 87 pg[9.e( v 70'389 (0'0,70'389] local-lis/les=73/75 n=6 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=87) [2]/[1] r=0 lpr=87 pi=[73,87)/1 crt=70'389 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  1 09:12:24 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 87 pg[9.1e( v 70'389 (0'0,70'389] local-lis/les=73/75 n=5 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=87) [2]/[1] r=0 lpr=87 pi=[73,87)/1 crt=70'389 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  1 09:12:24 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 87 pg[9.16( v 70'389 (0'0,70'389] local-lis/les=73/75 n=5 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=87) [2]/[1] r=0 lpr=87 pi=[73,87)/1 crt=70'389 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:12:24 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 87 pg[9.16( v 70'389 (0'0,70'389] local-lis/les=73/75 n=5 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=87) [2]/[1] r=0 lpr=87 pi=[73,87)/1 crt=70'389 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  1 09:12:24 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 87 pg[9.8( v 70'389 (0'0,70'389] local-lis/les=73/75 n=6 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=87 pruub=13.667768478s) [2] r=-1 lpr=87 pi=[73,87)/1 crt=70'389 lcod 0'0 mlcod 0'0 active pruub 157.136596680s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:12:24 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 87 pg[9.8( v 70'389 (0'0,70'389] local-lis/les=73/75 n=6 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=87 pruub=13.667716026s) [2] r=-1 lpr=87 pi=[73,87)/1 crt=70'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 157.136596680s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 09:12:24 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 87 pg[9.18( v 70'389 (0'0,70'389] local-lis/les=73/75 n=5 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=87 pruub=13.667521477s) [2] r=-1 lpr=87 pi=[73,87)/1 crt=70'389 lcod 0'0 mlcod 0'0 active pruub 157.136795044s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:12:24 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 87 pg[9.18( v 70'389 (0'0,70'389] local-lis/les=73/75 n=5 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=87 pruub=13.667475700s) [2] r=-1 lpr=87 pi=[73,87)/1 crt=70'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 157.136795044s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 09:12:24 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 87 pg[9.8( empty local-lis/les=0/0 n=0 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=87) [2] r=0 lpr=87 pi=[73,87)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:12:24 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 87 pg[9.18( empty local-lis/les=0/0 n=0 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=87) [2] r=0 lpr=87 pi=[73,87)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:12:25 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e87 do_prune osdmap full prune enabled
Oct  1 09:12:25 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e88 e88: 3 total, 3 up, 3 in
Oct  1 09:12:25 np0005464214 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e88: 3 total, 3 up, 3 in
Oct  1 09:12:25 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 88 pg[9.8( empty local-lis/les=0/0 n=0 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=88) [2]/[1] r=-1 lpr=88 pi=[73,88)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:12:25 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 88 pg[9.8( empty local-lis/les=0/0 n=0 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=88) [2]/[1] r=-1 lpr=88 pi=[73,88)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  1 09:12:25 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]': finished
Oct  1 09:12:25 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 88 pg[9.18( empty local-lis/les=0/0 n=0 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=88) [2]/[1] r=-1 lpr=88 pi=[73,88)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:12:25 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 88 pg[9.18( empty local-lis/les=0/0 n=0 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=88) [2]/[1] r=-1 lpr=88 pi=[73,88)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  1 09:12:25 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 88 pg[9.8( v 70'389 (0'0,70'389] local-lis/les=73/75 n=6 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=88) [2]/[1] r=0 lpr=88 pi=[73,88)/1 crt=70'389 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:12:25 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 88 pg[9.8( v 70'389 (0'0,70'389] local-lis/les=73/75 n=6 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=88) [2]/[1] r=0 lpr=88 pi=[73,88)/1 crt=70'389 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  1 09:12:25 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 88 pg[9.18( v 70'389 (0'0,70'389] local-lis/les=73/75 n=5 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=88) [2]/[1] r=0 lpr=88 pi=[73,88)/1 crt=70'389 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:12:25 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 88 pg[9.18( v 70'389 (0'0,70'389] local-lis/les=73/75 n=5 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=88) [2]/[1] r=0 lpr=88 pi=[73,88)/1 crt=70'389 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  1 09:12:25 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 88 pg[9.16( v 70'389 (0'0,70'389] local-lis/les=87/88 n=5 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=87) [2]/[1] async=[2] r=0 lpr=87 pi=[73,87)/1 crt=70'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:12:25 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 88 pg[9.6( v 70'389 (0'0,70'389] local-lis/les=87/88 n=6 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=87) [2]/[1] async=[2] r=0 lpr=87 pi=[73,87)/1 crt=70'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:12:25 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 88 pg[9.1e( v 70'389 (0'0,70'389] local-lis/les=87/88 n=5 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=87) [2]/[1] async=[2] r=0 lpr=87 pi=[73,87)/1 crt=70'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:12:25 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 88 pg[9.e( v 70'389 (0'0,70'389] local-lis/les=87/88 n=6 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=87) [2]/[1] async=[2] r=0 lpr=87 pi=[73,87)/1 crt=70'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:12:25 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 88 pg[9.1f( v 70'389 (0'0,70'389] local-lis/les=87/88 n=5 ec=73/41 lis/c=80/80 les/c/f=81/81/0 sis=87) [2]/[0] async=[2] r=0 lpr=87 pi=[80,87)/1 crt=70'389 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:12:25 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 88 pg[9.f( v 70'389 (0'0,70'389] local-lis/les=87/88 n=6 ec=73/41 lis/c=80/80 les/c/f=81/81/0 sis=87) [2]/[0] async=[2] r=0 lpr=87 pi=[80,87)/1 crt=70'389 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:12:25 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 88 pg[9.7( v 70'389 (0'0,70'389] local-lis/les=87/88 n=6 ec=73/41 lis/c=80/80 les/c/f=81/81/0 sis=87) [2]/[0] async=[2] r=0 lpr=87 pi=[80,87)/1 crt=70'389 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:12:25 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 88 pg[9.17( v 70'389 (0'0,70'389] local-lis/les=87/88 n=5 ec=73/41 lis/c=80/80 les/c/f=81/81/0 sis=87) [2]/[0] async=[2] r=0 lpr=87 pi=[80,87)/1 crt=70'389 mlcod 0'0 active+remapped mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:12:25 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e88 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:12:25 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e88 do_prune osdmap full prune enabled
Oct  1 09:12:25 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e89 e89: 3 total, 3 up, 3 in
Oct  1 09:12:25 np0005464214 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e89: 3 total, 3 up, 3 in
Oct  1 09:12:25 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 89 pg[9.16( v 70'389 (0'0,70'389] local-lis/les=87/88 n=5 ec=73/41 lis/c=87/73 les/c/f=88/75/0 sis=89 pruub=15.858527184s) [2] async=[2] r=-1 lpr=89 pi=[73,89)/1 crt=70'389 lcod 0'0 mlcod 0'0 active pruub 160.473541260s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:12:25 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 89 pg[9.16( v 70'389 (0'0,70'389] local-lis/les=87/88 n=5 ec=73/41 lis/c=87/73 les/c/f=88/75/0 sis=89 pruub=15.858265877s) [2] r=-1 lpr=89 pi=[73,89)/1 crt=70'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 160.473541260s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 09:12:25 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 89 pg[9.1e( v 70'389 (0'0,70'389] local-lis/les=87/88 n=5 ec=73/41 lis/c=87/73 les/c/f=88/75/0 sis=89 pruub=15.866624832s) [2] async=[2] r=-1 lpr=89 pi=[73,89)/1 crt=70'389 lcod 0'0 mlcod 0'0 active pruub 160.482833862s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:12:25 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 89 pg[9.1e( v 70'389 (0'0,70'389] local-lis/les=87/88 n=5 ec=73/41 lis/c=87/73 les/c/f=88/75/0 sis=89 pruub=15.866536140s) [2] r=-1 lpr=89 pi=[73,89)/1 crt=70'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 160.482833862s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 09:12:25 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 89 pg[9.1f( v 70'389 (0'0,70'389] local-lis/les=87/88 n=5 ec=73/41 lis/c=87/80 les/c/f=88/81/0 sis=89 pruub=15.858545303s) [2] async=[2] r=-1 lpr=89 pi=[80,89)/1 crt=70'389 mlcod 70'389 active pruub 165.476104736s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:12:25 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 89 pg[9.1f( v 70'389 (0'0,70'389] local-lis/les=87/88 n=5 ec=73/41 lis/c=87/80 les/c/f=88/81/0 sis=89 pruub=15.858445168s) [2] r=-1 lpr=89 pi=[80,89)/1 crt=70'389 mlcod 0'0 unknown NOTIFY pruub 165.476104736s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 09:12:25 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 89 pg[9.16( v 70'389 (0'0,70'389] local-lis/les=0/0 n=5 ec=73/41 lis/c=87/73 les/c/f=88/75/0 sis=89) [2] r=0 lpr=89 pi=[73,89)/1 luod=0'0 crt=70'389 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:12:25 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 89 pg[9.16( v 70'389 (0'0,70'389] local-lis/les=0/0 n=5 ec=73/41 lis/c=87/73 les/c/f=88/75/0 sis=89) [2] r=0 lpr=89 pi=[73,89)/1 crt=70'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:12:25 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 89 pg[9.1e( v 70'389 (0'0,70'389] local-lis/les=0/0 n=5 ec=73/41 lis/c=87/73 les/c/f=88/75/0 sis=89) [2] r=0 lpr=89 pi=[73,89)/1 luod=0'0 crt=70'389 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:12:25 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 89 pg[9.1e( v 70'389 (0'0,70'389] local-lis/les=0/0 n=5 ec=73/41 lis/c=87/73 les/c/f=88/75/0 sis=89) [2] r=0 lpr=89 pi=[73,89)/1 crt=70'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:12:25 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 89 pg[9.1f( v 70'389 (0'0,70'389] local-lis/les=0/0 n=5 ec=73/41 lis/c=87/80 les/c/f=88/81/0 sis=89) [2] r=0 lpr=89 pi=[80,89)/1 luod=0'0 crt=70'389 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:12:25 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 89 pg[9.1f( v 70'389 (0'0,70'389] local-lis/les=0/0 n=5 ec=73/41 lis/c=87/80 les/c/f=88/81/0 sis=89) [2] r=0 lpr=89 pi=[80,89)/1 crt=70'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:12:25 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 89 pg[9.8( v 70'389 (0'0,70'389] local-lis/les=88/89 n=6 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=88) [2]/[1] async=[2] r=0 lpr=88 pi=[73,88)/1 crt=70'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:12:25 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 89 pg[9.18( v 70'389 (0'0,70'389] local-lis/les=88/89 n=5 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=88) [2]/[1] async=[2] r=0 lpr=88 pi=[73,88)/1 crt=70'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:12:25 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v189: 305 pgs: 305 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:12:25 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"} v 0) v1
Oct  1 09:12:25 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]: dispatch
Oct  1 09:12:26 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]: dispatch
Oct  1 09:12:26 np0005464214 ceph-osd[89484]: log_channel(cluster) log [DBG] : 4.14 scrub starts
Oct  1 09:12:26 np0005464214 ceph-osd[89484]: log_channel(cluster) log [DBG] : 4.14 scrub ok
Oct  1 09:12:26 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e89 do_prune osdmap full prune enabled
Oct  1 09:12:26 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]': finished
Oct  1 09:12:26 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e90 e90: 3 total, 3 up, 3 in
Oct  1 09:12:26 np0005464214 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e90: 3 total, 3 up, 3 in
Oct  1 09:12:26 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 90 pg[9.f( v 70'389 (0'0,70'389] local-lis/les=87/88 n=6 ec=73/41 lis/c=87/80 les/c/f=88/81/0 sis=90 pruub=14.844839096s) [2] async=[2] r=-1 lpr=90 pi=[80,90)/1 crt=70'389 mlcod 70'389 active pruub 165.476242065s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:12:26 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 90 pg[9.f( v 70'389 (0'0,70'389] local-lis/les=87/88 n=6 ec=73/41 lis/c=87/80 les/c/f=88/81/0 sis=90 pruub=14.844753265s) [2] r=-1 lpr=90 pi=[80,90)/1 crt=70'389 mlcod 0'0 unknown NOTIFY pruub 165.476242065s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 09:12:26 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 90 pg[9.8( v 70'389 (0'0,70'389] local-lis/les=0/0 n=6 ec=73/41 lis/c=88/73 les/c/f=89/75/0 sis=90) [2] r=0 lpr=90 pi=[73,90)/1 luod=0'0 crt=70'389 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:12:26 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 90 pg[9.18( v 70'389 (0'0,70'389] local-lis/les=0/0 n=5 ec=73/41 lis/c=88/73 les/c/f=89/75/0 sis=90) [2] r=0 lpr=90 pi=[73,90)/1 luod=0'0 crt=70'389 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:12:26 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 90 pg[9.8( v 70'389 (0'0,70'389] local-lis/les=0/0 n=6 ec=73/41 lis/c=88/73 les/c/f=89/75/0 sis=90) [2] r=0 lpr=90 pi=[73,90)/1 crt=70'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:12:26 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 90 pg[9.18( v 70'389 (0'0,70'389] local-lis/les=0/0 n=5 ec=73/41 lis/c=88/73 les/c/f=89/75/0 sis=90) [2] r=0 lpr=90 pi=[73,90)/1 crt=70'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:12:26 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 90 pg[9.e( v 70'389 (0'0,70'389] local-lis/les=0/0 n=6 ec=73/41 lis/c=87/73 les/c/f=88/75/0 sis=90) [2] r=0 lpr=90 pi=[73,90)/1 luod=0'0 crt=70'389 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:12:26 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 90 pg[9.e( v 70'389 (0'0,70'389] local-lis/les=0/0 n=6 ec=73/41 lis/c=87/73 les/c/f=88/75/0 sis=90) [2] r=0 lpr=90 pi=[73,90)/1 crt=70'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:12:26 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 90 pg[9.7( v 70'389 (0'0,70'389] local-lis/les=87/88 n=6 ec=73/41 lis/c=87/80 les/c/f=88/81/0 sis=90 pruub=14.842909813s) [2] async=[2] r=-1 lpr=90 pi=[80,90)/1 crt=70'389 mlcod 70'389 active pruub 165.476150513s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:12:26 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 90 pg[9.e( v 70'389 (0'0,70'389] local-lis/les=87/88 n=6 ec=73/41 lis/c=87/73 les/c/f=88/75/0 sis=90 pruub=14.842742920s) [2] async=[2] r=-1 lpr=90 pi=[73,90)/1 crt=70'389 lcod 0'0 mlcod 0'0 active pruub 160.483016968s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:12:26 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 90 pg[9.7( v 70'389 (0'0,70'389] local-lis/les=87/88 n=6 ec=73/41 lis/c=87/80 les/c/f=88/81/0 sis=90 pruub=14.842617989s) [2] r=-1 lpr=90 pi=[80,90)/1 crt=70'389 mlcod 0'0 unknown NOTIFY pruub 165.476150513s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 09:12:26 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 90 pg[9.7( v 70'389 (0'0,70'389] local-lis/les=0/0 n=6 ec=73/41 lis/c=87/80 les/c/f=88/81/0 sis=90) [2] r=0 lpr=90 pi=[80,90)/1 luod=0'0 crt=70'389 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:12:26 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 90 pg[9.7( v 70'389 (0'0,70'389] local-lis/les=0/0 n=6 ec=73/41 lis/c=87/80 les/c/f=88/81/0 sis=90) [2] r=0 lpr=90 pi=[80,90)/1 crt=70'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:12:26 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 90 pg[9.6( v 70'389 (0'0,70'389] local-lis/les=0/0 n=6 ec=73/41 lis/c=87/73 les/c/f=88/75/0 sis=90) [2] r=0 lpr=90 pi=[73,90)/1 luod=0'0 crt=70'389 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:12:26 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 90 pg[9.6( v 70'389 (0'0,70'389] local-lis/les=0/0 n=6 ec=73/41 lis/c=87/73 les/c/f=88/75/0 sis=90) [2] r=0 lpr=90 pi=[73,90)/1 crt=70'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:12:26 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 90 pg[9.f( v 70'389 (0'0,70'389] local-lis/les=0/0 n=6 ec=73/41 lis/c=87/80 les/c/f=88/81/0 sis=90) [2] r=0 lpr=90 pi=[80,90)/1 luod=0'0 crt=70'389 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:12:26 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 90 pg[9.17( v 70'389 (0'0,70'389] local-lis/les=0/0 n=5 ec=73/41 lis/c=87/80 les/c/f=88/81/0 sis=90) [2] r=0 lpr=90 pi=[80,90)/1 luod=0'0 crt=70'389 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:12:26 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 90 pg[9.f( v 70'389 (0'0,70'389] local-lis/les=0/0 n=6 ec=73/41 lis/c=87/80 les/c/f=88/81/0 sis=90) [2] r=0 lpr=90 pi=[80,90)/1 crt=70'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:12:26 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 90 pg[9.17( v 70'389 (0'0,70'389] local-lis/les=0/0 n=5 ec=73/41 lis/c=87/80 les/c/f=88/81/0 sis=90) [2] r=0 lpr=90 pi=[80,90)/1 crt=70'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:12:26 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 90 pg[9.8( v 70'389 (0'0,70'389] local-lis/les=88/89 n=6 ec=73/41 lis/c=88/73 les/c/f=89/75/0 sis=90 pruub=14.991535187s) [2] async=[2] r=-1 lpr=90 pi=[73,90)/1 crt=70'389 lcod 0'0 mlcod 0'0 active pruub 160.632019043s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:12:26 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 90 pg[9.8( v 70'389 (0'0,70'389] local-lis/les=88/89 n=6 ec=73/41 lis/c=88/73 les/c/f=89/75/0 sis=90 pruub=14.991478920s) [2] r=-1 lpr=90 pi=[73,90)/1 crt=70'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 160.632019043s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 09:12:26 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 90 pg[9.6( v 70'389 (0'0,70'389] local-lis/les=87/88 n=6 ec=73/41 lis/c=87/73 les/c/f=88/75/0 sis=90 pruub=14.842116356s) [2] async=[2] r=-1 lpr=90 pi=[73,90)/1 crt=70'389 lcod 0'0 mlcod 0'0 active pruub 160.482788086s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:12:26 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 90 pg[9.6( v 70'389 (0'0,70'389] local-lis/les=87/88 n=6 ec=73/41 lis/c=87/73 les/c/f=88/75/0 sis=90 pruub=14.842031479s) [2] r=-1 lpr=90 pi=[73,90)/1 crt=70'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 160.482788086s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 09:12:26 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 90 pg[9.18( v 70'389 (0'0,70'389] local-lis/les=88/89 n=5 ec=73/41 lis/c=88/73 les/c/f=89/75/0 sis=90 pruub=14.991071701s) [2] async=[2] r=-1 lpr=90 pi=[73,90)/1 crt=70'389 lcod 0'0 mlcod 0'0 active pruub 160.632049561s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:12:26 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 90 pg[9.18( v 70'389 (0'0,70'389] local-lis/les=88/89 n=5 ec=73/41 lis/c=88/73 les/c/f=89/75/0 sis=90 pruub=14.990999222s) [2] r=-1 lpr=90 pi=[73,90)/1 crt=70'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 160.632049561s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 09:12:26 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 90 pg[9.e( v 70'389 (0'0,70'389] local-lis/les=87/88 n=6 ec=73/41 lis/c=87/73 les/c/f=88/75/0 sis=90 pruub=14.842623711s) [2] r=-1 lpr=90 pi=[73,90)/1 crt=70'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 160.483016968s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 09:12:26 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 90 pg[9.17( v 70'389 (0'0,70'389] local-lis/les=87/88 n=5 ec=73/41 lis/c=87/80 les/c/f=88/81/0 sis=90 pruub=14.841970444s) [2] async=[2] r=-1 lpr=90 pi=[80,90)/1 crt=70'389 mlcod 70'389 active pruub 165.476837158s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:12:26 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 90 pg[9.17( v 70'389 (0'0,70'389] local-lis/les=87/88 n=5 ec=73/41 lis/c=87/80 les/c/f=88/81/0 sis=90 pruub=14.841842651s) [2] r=-1 lpr=90 pi=[80,90)/1 crt=70'389 mlcod 0'0 unknown NOTIFY pruub 165.476837158s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 09:12:26 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 90 pg[9.1e( v 70'389 (0'0,70'389] local-lis/les=89/90 n=5 ec=73/41 lis/c=87/73 les/c/f=88/75/0 sis=89) [2] r=0 lpr=89 pi=[73,89)/1 crt=70'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:12:26 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 90 pg[9.1f( v 70'389 (0'0,70'389] local-lis/les=89/90 n=5 ec=73/41 lis/c=87/80 les/c/f=88/81/0 sis=89) [2] r=0 lpr=89 pi=[80,89)/1 crt=70'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:12:26 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 90 pg[9.16( v 70'389 (0'0,70'389] local-lis/les=89/90 n=5 ec=73/41 lis/c=87/73 les/c/f=88/75/0 sis=89) [2] r=0 lpr=89 pi=[73,89)/1 crt=70'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:12:27 np0005464214 ceph-osd[89484]: log_channel(cluster) log [DBG] : 4.10 scrub starts
Oct  1 09:12:27 np0005464214 ceph-osd[89484]: log_channel(cluster) log [DBG] : 4.10 scrub ok
Oct  1 09:12:27 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e90 do_prune osdmap full prune enabled
Oct  1 09:12:27 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e91 e91: 3 total, 3 up, 3 in
Oct  1 09:12:27 np0005464214 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e91: 3 total, 3 up, 3 in
Oct  1 09:12:27 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]': finished
Oct  1 09:12:27 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 91 pg[9.f( v 70'389 (0'0,70'389] local-lis/les=90/91 n=6 ec=73/41 lis/c=87/80 les/c/f=88/81/0 sis=90) [2] r=0 lpr=90 pi=[80,90)/1 crt=70'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:12:27 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 91 pg[9.7( v 70'389 (0'0,70'389] local-lis/les=90/91 n=6 ec=73/41 lis/c=87/80 les/c/f=88/81/0 sis=90) [2] r=0 lpr=90 pi=[80,90)/1 crt=70'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:12:27 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 91 pg[9.8( v 70'389 (0'0,70'389] local-lis/les=90/91 n=6 ec=73/41 lis/c=88/73 les/c/f=89/75/0 sis=90) [2] r=0 lpr=90 pi=[73,90)/1 crt=70'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:12:27 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 91 pg[9.e( v 70'389 (0'0,70'389] local-lis/les=90/91 n=6 ec=73/41 lis/c=87/73 les/c/f=88/75/0 sis=90) [2] r=0 lpr=90 pi=[73,90)/1 crt=70'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:12:27 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 91 pg[9.18( v 70'389 (0'0,70'389] local-lis/les=90/91 n=5 ec=73/41 lis/c=88/73 les/c/f=89/75/0 sis=90) [2] r=0 lpr=90 pi=[73,90)/1 crt=70'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:12:27 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 91 pg[9.17( v 70'389 (0'0,70'389] local-lis/les=90/91 n=5 ec=73/41 lis/c=87/80 les/c/f=88/81/0 sis=90) [2] r=0 lpr=90 pi=[80,90)/1 crt=70'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:12:27 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 91 pg[9.6( v 70'389 (0'0,70'389] local-lis/les=90/91 n=6 ec=73/41 lis/c=87/73 les/c/f=88/75/0 sis=90) [2] r=0 lpr=90 pi=[73,90)/1 crt=70'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:12:27 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v192: 305 pgs: 7 peering, 298 active+clean; 456 KiB data, 122 MiB used, 60 GiB / 60 GiB avail; 285 B/s, 14 objects/s recovering
Oct  1 09:12:28 np0005464214 ceph-osd[90500]: log_channel(cluster) log [DBG] : 3.1e scrub starts
Oct  1 09:12:28 np0005464214 ceph-osd[90500]: log_channel(cluster) log [DBG] : 3.1e scrub ok
Oct  1 09:12:29 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v193: 305 pgs: 7 peering, 298 active+clean; 456 KiB data, 122 MiB used, 60 GiB / 60 GiB avail; 221 B/s, 11 objects/s recovering
Oct  1 09:12:30 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e91 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:12:31 np0005464214 ceph-osd[88455]: log_channel(cluster) log [DBG] : 7.6 deep-scrub starts
Oct  1 09:12:31 np0005464214 ceph-osd[88455]: log_channel(cluster) log [DBG] : 7.6 deep-scrub ok
Oct  1 09:12:31 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v194: 305 pgs: 7 peering, 298 active+clean; 456 KiB data, 122 MiB used, 60 GiB / 60 GiB avail; 156 B/s, 8 objects/s recovering
Oct  1 09:12:32 np0005464214 ceph-osd[88455]: log_channel(cluster) log [DBG] : 7.3 scrub starts
Oct  1 09:12:32 np0005464214 ceph-osd[88455]: log_channel(cluster) log [DBG] : 7.3 scrub ok
Oct  1 09:12:32 np0005464214 ceph-osd[90500]: log_channel(cluster) log [DBG] : 3.1d deep-scrub starts
Oct  1 09:12:32 np0005464214 ceph-osd[90500]: log_channel(cluster) log [DBG] : 3.1d deep-scrub ok
Oct  1 09:12:32 np0005464214 systemd-logind[818]: New session 35 of user zuul.
Oct  1 09:12:33 np0005464214 systemd[1]: Started Session 35 of User zuul.
Oct  1 09:12:33 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v195: 305 pgs: 305 active+clean; 456 KiB data, 122 MiB used, 60 GiB / 60 GiB avail; 123 B/s, 6 objects/s recovering
Oct  1 09:12:33 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"} v 0) v1
Oct  1 09:12:33 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]: dispatch
Oct  1 09:12:33 np0005464214 python3.9[105877]: ansible-ansible.legacy.ping Invoked with data=pong
Oct  1 09:12:34 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e91 do_prune osdmap full prune enabled
Oct  1 09:12:34 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]: dispatch
Oct  1 09:12:34 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]': finished
Oct  1 09:12:34 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e92 e92: 3 total, 3 up, 3 in
Oct  1 09:12:34 np0005464214 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e92: 3 total, 3 up, 3 in
Oct  1 09:12:34 np0005464214 python3.9[106053]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  1 09:12:35 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e92 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:12:35 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]': finished
Oct  1 09:12:35 np0005464214 ceph-osd[88455]: log_channel(cluster) log [DBG] : 7.f scrub starts
Oct  1 09:12:35 np0005464214 ceph-osd[88455]: log_channel(cluster) log [DBG] : 7.f scrub ok
Oct  1 09:12:35 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v197: 305 pgs: 305 active+clean; 456 KiB data, 122 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:12:35 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"} v 0) v1
Oct  1 09:12:35 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]: dispatch
Oct  1 09:12:36 np0005464214 python3.9[106209]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  1 09:12:36 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e92 do_prune osdmap full prune enabled
Oct  1 09:12:36 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]': finished
Oct  1 09:12:36 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e93 e93: 3 total, 3 up, 3 in
Oct  1 09:12:36 np0005464214 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e93: 3 total, 3 up, 3 in
Oct  1 09:12:36 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]: dispatch
Oct  1 09:12:37 np0005464214 python3.9[106362]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  1 09:12:37 np0005464214 systemd[1]: packagekit.service: Deactivated successfully.
Oct  1 09:12:37 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]': finished
Oct  1 09:12:37 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v199: 305 pgs: 305 active+clean; 456 KiB data, 122 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:12:37 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"} v 0) v1
Oct  1 09:12:37 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]: dispatch
Oct  1 09:12:37 np0005464214 python3.9[106516]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  1 09:12:38 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e93 do_prune osdmap full prune enabled
Oct  1 09:12:38 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]: dispatch
Oct  1 09:12:38 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]': finished
Oct  1 09:12:38 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e94 e94: 3 total, 3 up, 3 in
Oct  1 09:12:38 np0005464214 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e94: 3 total, 3 up, 3 in
Oct  1 09:12:38 np0005464214 python3.9[106666]: ansible-ansible.builtin.service_facts Invoked
Oct  1 09:12:38 np0005464214 ceph-osd[90500]: log_channel(cluster) log [DBG] : 7.e scrub starts
Oct  1 09:12:38 np0005464214 ceph-osd[90500]: log_channel(cluster) log [DBG] : 7.e scrub ok
Oct  1 09:12:38 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 94 pg[9.c( v 70'389 (0'0,70'389] local-lis/les=73/75 n=6 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=94 pruub=14.989083290s) [2] r=-1 lpr=94 pi=[73,94)/1 crt=70'389 lcod 0'0 mlcod 0'0 active pruub 173.135879517s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:12:38 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 94 pg[9.c( v 70'389 (0'0,70'389] local-lis/les=73/75 n=6 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=94 pruub=14.989028931s) [2] r=-1 lpr=94 pi=[73,94)/1 crt=70'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 173.135879517s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 09:12:38 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 94 pg[9.1c( v 70'389 (0'0,70'389] local-lis/les=73/75 n=5 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=94 pruub=14.989823341s) [2] r=-1 lpr=94 pi=[73,94)/1 crt=70'389 lcod 0'0 mlcod 0'0 active pruub 173.137756348s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:12:38 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 94 pg[9.1c( v 70'389 (0'0,70'389] local-lis/les=73/75 n=5 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=94 pruub=14.989728928s) [2] r=-1 lpr=94 pi=[73,94)/1 crt=70'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 173.137756348s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 09:12:38 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 94 pg[9.c( empty local-lis/les=0/0 n=0 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=94) [2] r=0 lpr=94 pi=[73,94)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:12:38 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 94 pg[9.1c( empty local-lis/les=0/0 n=0 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=94) [2] r=0 lpr=94 pi=[73,94)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:12:39 np0005464214 network[106683]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct  1 09:12:39 np0005464214 network[106684]: 'network-scripts' will be removed from distribution in near future.
Oct  1 09:12:39 np0005464214 network[106685]: It is advised to switch to 'NetworkManager' instead for network management.
Oct  1 09:12:39 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]': finished
Oct  1 09:12:39 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e94 do_prune osdmap full prune enabled
Oct  1 09:12:39 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e95 e95: 3 total, 3 up, 3 in
Oct  1 09:12:39 np0005464214 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e95: 3 total, 3 up, 3 in
Oct  1 09:12:39 np0005464214 ceph-osd[88455]: log_channel(cluster) log [DBG] : 3.9 scrub starts
Oct  1 09:12:39 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v202: 305 pgs: 305 active+clean; 456 KiB data, 122 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:12:39 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"} v 0) v1
Oct  1 09:12:39 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]: dispatch
Oct  1 09:12:39 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 95 pg[9.1c( empty local-lis/les=0/0 n=0 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=95) [2]/[1] r=-1 lpr=95 pi=[73,95)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:12:39 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 95 pg[9.c( empty local-lis/les=0/0 n=0 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=95) [2]/[1] r=-1 lpr=95 pi=[73,95)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:12:39 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 95 pg[9.c( empty local-lis/les=0/0 n=0 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=95) [2]/[1] r=-1 lpr=95 pi=[73,95)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  1 09:12:39 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 95 pg[9.1c( empty local-lis/les=0/0 n=0 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=95) [2]/[1] r=-1 lpr=95 pi=[73,95)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  1 09:12:39 np0005464214 ceph-osd[88455]: log_channel(cluster) log [DBG] : 3.9 scrub ok
Oct  1 09:12:39 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 95 pg[9.1c( v 70'389 (0'0,70'389] local-lis/les=73/75 n=5 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=95) [2]/[1] r=0 lpr=95 pi=[73,95)/1 crt=70'389 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:12:39 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 95 pg[9.c( v 70'389 (0'0,70'389] local-lis/les=73/75 n=6 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=95) [2]/[1] r=0 lpr=95 pi=[73,95)/1 crt=70'389 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:12:39 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 95 pg[9.c( v 70'389 (0'0,70'389] local-lis/les=73/75 n=6 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=95) [2]/[1] r=0 lpr=95 pi=[73,95)/1 crt=70'389 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  1 09:12:39 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 95 pg[9.1c( v 70'389 (0'0,70'389] local-lis/les=73/75 n=5 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=95) [2]/[1] r=0 lpr=95 pi=[73,95)/1 crt=70'389 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  1 09:12:40 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e95 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:12:40 np0005464214 ceph-osd[88455]: log_channel(cluster) log [DBG] : 3.17 scrub starts
Oct  1 09:12:40 np0005464214 ceph-osd[88455]: log_channel(cluster) log [DBG] : 3.17 scrub ok
Oct  1 09:12:40 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e95 do_prune osdmap full prune enabled
Oct  1 09:12:40 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]': finished
Oct  1 09:12:40 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e96 e96: 3 total, 3 up, 3 in
Oct  1 09:12:40 np0005464214 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e96: 3 total, 3 up, 3 in
Oct  1 09:12:40 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]: dispatch
Oct  1 09:12:40 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 96 pg[9.c( v 70'389 (0'0,70'389] local-lis/les=95/96 n=6 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=95) [2]/[1] async=[2] r=0 lpr=95 pi=[73,95)/1 crt=70'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:12:40 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 96 pg[9.1c( v 70'389 (0'0,70'389] local-lis/les=95/96 n=5 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=95) [2]/[1] async=[2] r=0 lpr=95 pi=[73,95)/1 crt=70'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:12:41 np0005464214 ceph-osd[89484]: log_channel(cluster) log [DBG] : 5.c scrub starts
Oct  1 09:12:41 np0005464214 ceph-osd[89484]: log_channel(cluster) log [DBG] : 5.c scrub ok
Oct  1 09:12:41 np0005464214 ceph-osd[88455]: log_channel(cluster) log [DBG] : 3.15 scrub starts
Oct  1 09:12:41 np0005464214 ceph-osd[88455]: log_channel(cluster) log [DBG] : 3.15 scrub ok
Oct  1 09:12:41 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e96 do_prune osdmap full prune enabled
Oct  1 09:12:41 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]': finished
Oct  1 09:12:41 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v204: 305 pgs: 305 active+clean; 456 KiB data, 122 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:12:41 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"} v 0) v1
Oct  1 09:12:41 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]: dispatch
Oct  1 09:12:41 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e97 e97: 3 total, 3 up, 3 in
Oct  1 09:12:41 np0005464214 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e97: 3 total, 3 up, 3 in
Oct  1 09:12:41 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 97 pg[9.1c( v 70'389 (0'0,70'389] local-lis/les=0/0 n=5 ec=73/41 lis/c=95/73 les/c/f=96/75/0 sis=97) [2] r=0 lpr=97 pi=[73,97)/1 luod=0'0 crt=70'389 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:12:41 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 97 pg[9.1c( v 70'389 (0'0,70'389] local-lis/les=0/0 n=5 ec=73/41 lis/c=95/73 les/c/f=96/75/0 sis=97) [2] r=0 lpr=97 pi=[73,97)/1 crt=70'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:12:41 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 97 pg[9.c( v 70'389 (0'0,70'389] local-lis/les=0/0 n=6 ec=73/41 lis/c=95/73 les/c/f=96/75/0 sis=97) [2] r=0 lpr=97 pi=[73,97)/1 luod=0'0 crt=70'389 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:12:41 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 97 pg[9.c( v 70'389 (0'0,70'389] local-lis/les=0/0 n=6 ec=73/41 lis/c=95/73 les/c/f=96/75/0 sis=97) [2] r=0 lpr=97 pi=[73,97)/1 crt=70'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:12:41 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 97 pg[9.c( v 70'389 (0'0,70'389] local-lis/les=95/96 n=6 ec=73/41 lis/c=95/73 les/c/f=96/75/0 sis=97 pruub=14.968115807s) [2] async=[2] r=-1 lpr=97 pi=[73,97)/1 crt=70'389 lcod 0'0 mlcod 0'0 active pruub 175.916351318s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:12:41 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 97 pg[9.c( v 70'389 (0'0,70'389] local-lis/les=95/96 n=6 ec=73/41 lis/c=95/73 les/c/f=96/75/0 sis=97 pruub=14.968032837s) [2] r=-1 lpr=97 pi=[73,97)/1 crt=70'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 175.916351318s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 09:12:41 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 97 pg[9.1c( v 70'389 (0'0,70'389] local-lis/les=95/96 n=5 ec=73/41 lis/c=95/73 les/c/f=96/75/0 sis=97 pruub=14.970879555s) [2] async=[2] r=-1 lpr=97 pi=[73,97)/1 crt=70'389 lcod 0'0 mlcod 0'0 active pruub 175.919281006s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:12:41 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 97 pg[9.1c( v 70'389 (0'0,70'389] local-lis/les=95/96 n=5 ec=73/41 lis/c=95/73 les/c/f=96/75/0 sis=97 pruub=14.970707893s) [2] r=-1 lpr=97 pi=[73,97)/1 crt=70'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 175.919281006s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 09:12:42 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e97 do_prune osdmap full prune enabled
Oct  1 09:12:42 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]: dispatch
Oct  1 09:12:42 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]': finished
Oct  1 09:12:42 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e98 e98: 3 total, 3 up, 3 in
Oct  1 09:12:42 np0005464214 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e98: 3 total, 3 up, 3 in
Oct  1 09:12:42 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 98 pg[9.c( v 70'389 (0'0,70'389] local-lis/les=97/98 n=6 ec=73/41 lis/c=95/73 les/c/f=96/75/0 sis=97) [2] r=0 lpr=97 pi=[73,97)/1 crt=70'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:12:42 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 98 pg[9.1c( v 70'389 (0'0,70'389] local-lis/les=97/98 n=5 ec=73/41 lis/c=95/73 les/c/f=96/75/0 sis=97) [2] r=0 lpr=97 pi=[73,97)/1 crt=70'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:12:43 np0005464214 python3.9[106948]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 09:12:43 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v207: 305 pgs: 2 peering, 303 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail; 27 B/s, 2 objects/s recovering
Oct  1 09:12:43 np0005464214 ceph-osd[90500]: log_channel(cluster) log [DBG] : 7.c scrub starts
Oct  1 09:12:43 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]': finished
Oct  1 09:12:43 np0005464214 ceph-osd[90500]: log_channel(cluster) log [DBG] : 7.c scrub ok
Oct  1 09:12:44 np0005464214 python3.9[107098]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  1 09:12:44 np0005464214 ceph-osd[90500]: log_channel(cluster) log [DBG] : 3.8 scrub starts
Oct  1 09:12:44 np0005464214 ceph-osd[90500]: log_channel(cluster) log [DBG] : 3.8 scrub ok
Oct  1 09:12:45 np0005464214 python3.9[107252]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  1 09:12:45 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e98 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:12:45 np0005464214 ceph-osd[88455]: log_channel(cluster) log [DBG] : 7.13 deep-scrub starts
Oct  1 09:12:45 np0005464214 ceph-osd[88455]: log_channel(cluster) log [DBG] : 7.13 deep-scrub ok
Oct  1 09:12:45 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v208: 305 pgs: 2 peering, 303 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail; 18 B/s, 1 objects/s recovering
Oct  1 09:12:46 np0005464214 ceph-osd[89484]: log_channel(cluster) log [DBG] : 5.f scrub starts
Oct  1 09:12:46 np0005464214 ceph-osd[89484]: log_channel(cluster) log [DBG] : 5.f scrub ok
Oct  1 09:12:46 np0005464214 python3.9[107410]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct  1 09:12:47 np0005464214 python3.9[107494]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct  1 09:12:47 np0005464214 ceph-osd[88455]: log_channel(cluster) log [DBG] : 3.1f scrub starts
Oct  1 09:12:47 np0005464214 ceph-osd[88455]: log_channel(cluster) log [DBG] : 3.1f scrub ok
Oct  1 09:12:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] Optimize plan auto_2025-10-01_13:12:47
Oct  1 09:12:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  1 09:12:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] Some PGs (0.006557) are inactive; try again later
Oct  1 09:12:47 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v209: 305 pgs: 305 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail; 15 B/s, 1 objects/s recovering
Oct  1 09:12:47 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"} v 0) v1
Oct  1 09:12:47 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]: dispatch
Oct  1 09:12:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 09:12:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 09:12:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 09:12:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 09:12:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 09:12:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 09:12:47 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  1 09:12:47 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  1 09:12:47 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  1 09:12:47 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  1 09:12:47 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  1 09:12:47 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  1 09:12:47 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  1 09:12:47 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  1 09:12:47 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  1 09:12:47 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  1 09:12:47 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e98 do_prune osdmap full prune enabled
Oct  1 09:12:47 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]': finished
Oct  1 09:12:47 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e99 e99: 3 total, 3 up, 3 in
Oct  1 09:12:47 np0005464214 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e99: 3 total, 3 up, 3 in
Oct  1 09:12:47 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]: dispatch
Oct  1 09:12:48 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]': finished
Oct  1 09:12:48 np0005464214 ceph-osd[90500]: log_channel(cluster) log [DBG] : 3.7 scrub starts
Oct  1 09:12:48 np0005464214 ceph-osd[90500]: log_channel(cluster) log [DBG] : 3.7 scrub ok
Oct  1 09:12:49 np0005464214 ceph-osd[89484]: log_channel(cluster) log [DBG] : 2.a scrub starts
Oct  1 09:12:49 np0005464214 ceph-osd[89484]: log_channel(cluster) log [DBG] : 2.a scrub ok
Oct  1 09:12:49 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v211: 305 pgs: 305 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail; 13 B/s, 1 objects/s recovering
Oct  1 09:12:49 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"} v 0) v1
Oct  1 09:12:49 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]: dispatch
Oct  1 09:12:49 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e99 do_prune osdmap full prune enabled
Oct  1 09:12:49 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]: dispatch
Oct  1 09:12:49 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]': finished
Oct  1 09:12:49 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e100 e100: 3 total, 3 up, 3 in
Oct  1 09:12:49 np0005464214 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e100: 3 total, 3 up, 3 in
Oct  1 09:12:50 np0005464214 ceph-osd[89484]: log_channel(cluster) log [DBG] : 2.9 scrub starts
Oct  1 09:12:50 np0005464214 ceph-osd[89484]: log_channel(cluster) log [DBG] : 2.9 scrub ok
Oct  1 09:12:50 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e100 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:12:50 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]': finished
Oct  1 09:12:51 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v213: 305 pgs: 305 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:12:51 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"} v 0) v1
Oct  1 09:12:51 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]: dispatch
Oct  1 09:12:51 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e100 do_prune osdmap full prune enabled
Oct  1 09:12:51 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]': finished
Oct  1 09:12:51 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e101 e101: 3 total, 3 up, 3 in
Oct  1 09:12:52 np0005464214 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e101: 3 total, 3 up, 3 in
Oct  1 09:12:52 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]: dispatch
Oct  1 09:12:52 np0005464214 ceph-osd[89484]: log_channel(cluster) log [DBG] : 5.1a scrub starts
Oct  1 09:12:52 np0005464214 ceph-osd[89484]: log_channel(cluster) log [DBG] : 5.1a scrub ok
Oct  1 09:12:52 np0005464214 ceph-osd[88455]: log_channel(cluster) log [DBG] : 3.a scrub starts
Oct  1 09:12:52 np0005464214 ceph-osd[88455]: log_channel(cluster) log [DBG] : 3.a scrub ok
Oct  1 09:12:53 np0005464214 ceph-osd[90500]: log_channel(cluster) log [DBG] : 7.1 scrub starts
Oct  1 09:12:53 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]': finished
Oct  1 09:12:53 np0005464214 ceph-osd[90500]: log_channel(cluster) log [DBG] : 7.1 scrub ok
Oct  1 09:12:53 np0005464214 ceph-osd[89484]: log_channel(cluster) log [DBG] : 5.19 scrub starts
Oct  1 09:12:53 np0005464214 ceph-osd[89484]: log_channel(cluster) log [DBG] : 5.19 scrub ok
Oct  1 09:12:53 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v215: 305 pgs: 305 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:12:53 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"} v 0) v1
Oct  1 09:12:53 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]: dispatch
Oct  1 09:12:54 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e101 do_prune osdmap full prune enabled
Oct  1 09:12:54 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]: dispatch
Oct  1 09:12:54 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]': finished
Oct  1 09:12:54 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e102 e102: 3 total, 3 up, 3 in
Oct  1 09:12:54 np0005464214 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e102: 3 total, 3 up, 3 in
Oct  1 09:12:55 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]': finished
Oct  1 09:12:55 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e102 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:12:55 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v217: 305 pgs: 305 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:12:55 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"} v 0) v1
Oct  1 09:12:55 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]: dispatch
Oct  1 09:12:56 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e102 do_prune osdmap full prune enabled
Oct  1 09:12:56 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]: dispatch
Oct  1 09:12:56 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]': finished
Oct  1 09:12:56 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e103 e103: 3 total, 3 up, 3 in
Oct  1 09:12:56 np0005464214 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e103: 3 total, 3 up, 3 in
Oct  1 09:12:56 np0005464214 ceph-osd[89484]: log_channel(cluster) log [DBG] : 5.18 deep-scrub starts
Oct  1 09:12:56 np0005464214 ceph-osd[89484]: log_channel(cluster) log [DBG] : 5.18 deep-scrub ok
Oct  1 09:12:56 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] _maybe_adjust
Oct  1 09:12:56 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:12:56 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  1 09:12:56 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:12:56 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 09:12:56 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:12:56 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 09:12:56 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:12:56 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 09:12:56 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:12:56 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 09:12:56 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:12:56 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct  1 09:12:56 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:12:56 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 09:12:56 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:12:56 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  1 09:12:56 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:12:56 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  1 09:12:56 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:12:56 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 09:12:56 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:12:56 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  1 09:12:57 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]': finished
Oct  1 09:12:57 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 103 pg[9.13( v 70'389 (0'0,70'389] local-lis/les=80/81 n=5 ec=73/41 lis/c=80/80 les/c/f=81/81/0 sis=103 pruub=15.010645866s) [2] r=-1 lpr=103 pi=[80,103)/1 crt=70'389 mlcod 0'0 active pruub 196.261871338s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:12:57 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 103 pg[9.13( v 70'389 (0'0,70'389] local-lis/les=80/81 n=5 ec=73/41 lis/c=80/80 les/c/f=81/81/0 sis=103 pruub=15.010584831s) [2] r=-1 lpr=103 pi=[80,103)/1 crt=70'389 mlcod 0'0 unknown NOTIFY pruub 196.261871338s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 09:12:57 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 103 pg[9.13( empty local-lis/les=0/0 n=0 ec=73/41 lis/c=80/80 les/c/f=81/81/0 sis=103) [2] r=0 lpr=103 pi=[80,103)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:12:57 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v219: 305 pgs: 1 unknown, 304 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:12:58 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e103 do_prune osdmap full prune enabled
Oct  1 09:12:58 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e104 e104: 3 total, 3 up, 3 in
Oct  1 09:12:58 np0005464214 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e104: 3 total, 3 up, 3 in
Oct  1 09:12:58 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 104 pg[9.13( empty local-lis/les=0/0 n=0 ec=73/41 lis/c=80/80 les/c/f=81/81/0 sis=104) [2]/[0] r=-1 lpr=104 pi=[80,104)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:12:58 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 104 pg[9.13( empty local-lis/les=0/0 n=0 ec=73/41 lis/c=80/80 les/c/f=81/81/0 sis=104) [2]/[0] r=-1 lpr=104 pi=[80,104)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  1 09:12:58 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 104 pg[9.13( v 70'389 (0'0,70'389] local-lis/les=80/81 n=5 ec=73/41 lis/c=80/80 les/c/f=81/81/0 sis=104) [2]/[0] r=0 lpr=104 pi=[80,104)/1 crt=70'389 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:12:58 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 104 pg[9.13( v 70'389 (0'0,70'389] local-lis/les=80/81 n=5 ec=73/41 lis/c=80/80 les/c/f=81/81/0 sis=104) [2]/[0] r=0 lpr=104 pi=[80,104)/1 crt=70'389 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  1 09:12:59 np0005464214 ceph-osd[90500]: log_channel(cluster) log [DBG] : 7.2 scrub starts
Oct  1 09:12:59 np0005464214 ceph-osd[90500]: log_channel(cluster) log [DBG] : 7.2 scrub ok
Oct  1 09:12:59 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e104 do_prune osdmap full prune enabled
Oct  1 09:12:59 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e105 e105: 3 total, 3 up, 3 in
Oct  1 09:12:59 np0005464214 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e105: 3 total, 3 up, 3 in
Oct  1 09:12:59 np0005464214 ceph-osd[89484]: log_channel(cluster) log [DBG] : 4.f scrub starts
Oct  1 09:12:59 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 105 pg[9.13( v 70'389 (0'0,70'389] local-lis/les=104/105 n=5 ec=73/41 lis/c=80/80 les/c/f=81/81/0 sis=104) [2]/[0] async=[2] r=0 lpr=104 pi=[80,104)/1 crt=70'389 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:12:59 np0005464214 ceph-osd[89484]: log_channel(cluster) log [DBG] : 4.f scrub ok
Oct  1 09:12:59 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v222: 305 pgs: 1 unknown, 304 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:12:59 np0005464214 ceph-osd[88455]: log_channel(cluster) log [DBG] : 7.1b scrub starts
Oct  1 09:12:59 np0005464214 ceph-osd[88455]: log_channel(cluster) log [DBG] : 7.1b scrub ok
Oct  1 09:13:00 np0005464214 ceph-osd[90500]: log_channel(cluster) log [DBG] : 7.8 deep-scrub starts
Oct  1 09:13:00 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e105 do_prune osdmap full prune enabled
Oct  1 09:13:00 np0005464214 ceph-osd[90500]: log_channel(cluster) log [DBG] : 7.8 deep-scrub ok
Oct  1 09:13:00 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e106 e106: 3 total, 3 up, 3 in
Oct  1 09:13:00 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 106 pg[9.13( v 70'389 (0'0,70'389] local-lis/les=104/105 n=5 ec=73/41 lis/c=104/80 les/c/f=105/81/0 sis=106 pruub=15.436095238s) [2] async=[2] r=-1 lpr=106 pi=[80,106)/1 crt=70'389 mlcod 70'389 active pruub 199.695755005s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:13:00 np0005464214 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e106: 3 total, 3 up, 3 in
Oct  1 09:13:00 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 106 pg[9.13( v 70'389 (0'0,70'389] local-lis/les=104/105 n=5 ec=73/41 lis/c=104/80 les/c/f=105/81/0 sis=106 pruub=15.435786247s) [2] r=-1 lpr=106 pi=[80,106)/1 crt=70'389 mlcod 0'0 unknown NOTIFY pruub 199.695755005s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 09:13:00 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 106 pg[9.13( v 70'389 (0'0,70'389] local-lis/les=0/0 n=5 ec=73/41 lis/c=104/80 les/c/f=105/81/0 sis=106) [2] r=0 lpr=106 pi=[80,106)/1 luod=0'0 crt=70'389 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:13:00 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 106 pg[9.13( v 70'389 (0'0,70'389] local-lis/les=0/0 n=5 ec=73/41 lis/c=104/80 les/c/f=105/81/0 sis=106) [2] r=0 lpr=106 pi=[80,106)/1 crt=70'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:13:00 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e106 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:13:00 np0005464214 ceph-osd[89484]: log_channel(cluster) log [DBG] : 6.1 deep-scrub starts
Oct  1 09:13:00 np0005464214 ceph-osd[89484]: log_channel(cluster) log [DBG] : 6.1 deep-scrub ok
Oct  1 09:13:01 np0005464214 ceph-osd[90500]: log_channel(cluster) log [DBG] : 7.5 scrub starts
Oct  1 09:13:01 np0005464214 ceph-osd[90500]: log_channel(cluster) log [DBG] : 7.5 scrub ok
Oct  1 09:13:01 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e106 do_prune osdmap full prune enabled
Oct  1 09:13:01 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e107 e107: 3 total, 3 up, 3 in
Oct  1 09:13:01 np0005464214 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e107: 3 total, 3 up, 3 in
Oct  1 09:13:01 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 107 pg[9.13( v 70'389 (0'0,70'389] local-lis/les=106/107 n=5 ec=73/41 lis/c=104/80 les/c/f=105/81/0 sis=106) [2] r=0 lpr=106 pi=[80,106)/1 crt=70'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:13:01 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v225: 305 pgs: 1 unknown, 304 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:13:02 np0005464214 ceph-osd[89484]: log_channel(cluster) log [DBG] : 4.9 scrub starts
Oct  1 09:13:02 np0005464214 ceph-osd[89484]: log_channel(cluster) log [DBG] : 4.9 scrub ok
Oct  1 09:13:03 np0005464214 ceph-osd[90500]: log_channel(cluster) log [DBG] : 3.e scrub starts
Oct  1 09:13:03 np0005464214 ceph-osd[90500]: log_channel(cluster) log [DBG] : 3.e scrub ok
Oct  1 09:13:03 np0005464214 ceph-osd[89484]: log_channel(cluster) log [DBG] : 4.d scrub starts
Oct  1 09:13:03 np0005464214 ceph-osd[89484]: log_channel(cluster) log [DBG] : 4.d scrub ok
Oct  1 09:13:03 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v226: 305 pgs: 305 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 179 B/s wr, 6 op/s; 38 B/s, 1 objects/s recovering
Oct  1 09:13:03 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"} v 0) v1
Oct  1 09:13:03 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]: dispatch
Oct  1 09:13:04 np0005464214 ceph-osd[90500]: log_channel(cluster) log [DBG] : 7.15 scrub starts
Oct  1 09:13:04 np0005464214 ceph-osd[90500]: log_channel(cluster) log [DBG] : 7.15 scrub ok
Oct  1 09:13:04 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e107 do_prune osdmap full prune enabled
Oct  1 09:13:04 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]: dispatch
Oct  1 09:13:04 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]': finished
Oct  1 09:13:04 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e108 e108: 3 total, 3 up, 3 in
Oct  1 09:13:04 np0005464214 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e108: 3 total, 3 up, 3 in
Oct  1 09:13:04 np0005464214 ceph-osd[89484]: log_channel(cluster) log [DBG] : 4.7 scrub starts
Oct  1 09:13:04 np0005464214 ceph-osd[89484]: log_channel(cluster) log [DBG] : 4.7 scrub ok
Oct  1 09:13:05 np0005464214 ceph-osd[90500]: log_channel(cluster) log [DBG] : 3.11 scrub starts
Oct  1 09:13:05 np0005464214 ceph-osd[90500]: log_channel(cluster) log [DBG] : 3.11 scrub ok
Oct  1 09:13:05 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]': finished
Oct  1 09:13:05 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:13:05 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v228: 305 pgs: 305 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 170 B/s wr, 5 op/s; 36 B/s, 1 objects/s recovering
Oct  1 09:13:05 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"} v 0) v1
Oct  1 09:13:05 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]: dispatch
Oct  1 09:13:06 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e108 do_prune osdmap full prune enabled
Oct  1 09:13:06 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]: dispatch
Oct  1 09:13:06 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]': finished
Oct  1 09:13:06 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e109 e109: 3 total, 3 up, 3 in
Oct  1 09:13:06 np0005464214 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e109: 3 total, 3 up, 3 in
Oct  1 09:13:06 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 109 pg[9.15( v 70'389 (0'0,70'389] local-lis/les=80/81 n=5 ec=73/41 lis/c=80/80 les/c/f=81/81/0 sis=109 pruub=13.606899261s) [1] r=-1 lpr=109 pi=[80,109)/1 crt=70'389 mlcod 0'0 active pruub 204.255828857s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:13:06 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 109 pg[9.15( v 70'389 (0'0,70'389] local-lis/les=80/81 n=5 ec=73/41 lis/c=80/80 les/c/f=81/81/0 sis=109 pruub=13.606764793s) [1] r=-1 lpr=109 pi=[80,109)/1 crt=70'389 mlcod 0'0 unknown NOTIFY pruub 204.255828857s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 09:13:06 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 109 pg[9.15( empty local-lis/les=0/0 n=0 ec=73/41 lis/c=80/80 les/c/f=81/81/0 sis=109) [1] r=0 lpr=109 pi=[80,109)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:13:07 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e109 do_prune osdmap full prune enabled
Oct  1 09:13:07 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]': finished
Oct  1 09:13:07 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e110 e110: 3 total, 3 up, 3 in
Oct  1 09:13:07 np0005464214 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e110: 3 total, 3 up, 3 in
Oct  1 09:13:07 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 110 pg[9.15( empty local-lis/les=0/0 n=0 ec=73/41 lis/c=80/80 les/c/f=81/81/0 sis=110) [1]/[0] r=-1 lpr=110 pi=[80,110)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:13:07 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 110 pg[9.15( empty local-lis/les=0/0 n=0 ec=73/41 lis/c=80/80 les/c/f=81/81/0 sis=110) [1]/[0] r=-1 lpr=110 pi=[80,110)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  1 09:13:07 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 110 pg[9.15( v 70'389 (0'0,70'389] local-lis/les=80/81 n=5 ec=73/41 lis/c=80/80 les/c/f=81/81/0 sis=110) [1]/[0] r=0 lpr=110 pi=[80,110)/1 crt=70'389 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:13:07 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 110 pg[9.15( v 70'389 (0'0,70'389] local-lis/les=80/81 n=5 ec=73/41 lis/c=80/80 les/c/f=81/81/0 sis=110) [1]/[0] r=0 lpr=110 pi=[80,110)/1 crt=70'389 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  1 09:13:07 np0005464214 ceph-osd[89484]: log_channel(cluster) log [DBG] : 2.17 scrub starts
Oct  1 09:13:07 np0005464214 ceph-osd[89484]: log_channel(cluster) log [DBG] : 2.17 scrub ok
Oct  1 09:13:07 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v231: 305 pgs: 1 unknown, 304 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail; 18 B/s, 0 objects/s recovering
Oct  1 09:13:08 np0005464214 ceph-osd[90500]: log_channel(cluster) log [DBG] : 7.11 scrub starts
Oct  1 09:13:08 np0005464214 ceph-osd[90500]: log_channel(cluster) log [DBG] : 7.11 scrub ok
Oct  1 09:13:08 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e110 do_prune osdmap full prune enabled
Oct  1 09:13:08 np0005464214 ceph-osd[89484]: log_channel(cluster) log [DBG] : 4.12 scrub starts
Oct  1 09:13:08 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e111 e111: 3 total, 3 up, 3 in
Oct  1 09:13:08 np0005464214 ceph-osd[89484]: log_channel(cluster) log [DBG] : 4.12 scrub ok
Oct  1 09:13:08 np0005464214 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e111: 3 total, 3 up, 3 in
Oct  1 09:13:08 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 111 pg[9.15( v 70'389 (0'0,70'389] local-lis/les=110/111 n=5 ec=73/41 lis/c=80/80 les/c/f=81/81/0 sis=110) [1]/[0] async=[1] r=0 lpr=110 pi=[80,110)/1 crt=70'389 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:13:09 np0005464214 ceph-osd[90500]: log_channel(cluster) log [DBG] : 3.16 scrub starts
Oct  1 09:13:09 np0005464214 ceph-osd[90500]: log_channel(cluster) log [DBG] : 3.16 scrub ok
Oct  1 09:13:09 np0005464214 ceph-osd[89484]: log_channel(cluster) log [DBG] : 4.8 scrub starts
Oct  1 09:13:09 np0005464214 ceph-osd[89484]: log_channel(cluster) log [DBG] : 4.8 scrub ok
Oct  1 09:13:09 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e111 do_prune osdmap full prune enabled
Oct  1 09:13:09 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e112 e112: 3 total, 3 up, 3 in
Oct  1 09:13:09 np0005464214 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e112: 3 total, 3 up, 3 in
Oct  1 09:13:09 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 112 pg[9.15( v 70'389 (0'0,70'389] local-lis/les=0/0 n=5 ec=73/41 lis/c=110/80 les/c/f=111/81/0 sis=112) [1] r=0 lpr=112 pi=[80,112)/1 luod=0'0 crt=70'389 mlcod 0'0 active mbc={}] start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:13:09 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 112 pg[9.15( v 70'389 (0'0,70'389] local-lis/les=0/0 n=5 ec=73/41 lis/c=110/80 les/c/f=111/81/0 sis=112) [1] r=0 lpr=112 pi=[80,112)/1 crt=70'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:13:09 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 112 pg[9.15( v 70'389 (0'0,70'389] local-lis/les=110/111 n=5 ec=73/41 lis/c=110/80 les/c/f=111/81/0 sis=112 pruub=15.016909599s) [1] async=[1] r=-1 lpr=112 pi=[80,112)/1 crt=70'389 mlcod 70'389 active pruub 208.746765137s@ mbc={255={}}] start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:13:09 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 112 pg[9.15( v 70'389 (0'0,70'389] local-lis/les=110/111 n=5 ec=73/41 lis/c=110/80 les/c/f=111/81/0 sis=112 pruub=15.016639709s) [1] r=-1 lpr=112 pi=[80,112)/1 crt=70'389 mlcod 0'0 unknown NOTIFY pruub 208.746765137s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 09:13:09 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v234: 305 pgs: 1 unknown, 304 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:13:10 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e112 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:13:10 np0005464214 ceph-osd[89484]: log_channel(cluster) log [DBG] : 6.2 scrub starts
Oct  1 09:13:10 np0005464214 ceph-osd[89484]: log_channel(cluster) log [DBG] : 6.2 scrub ok
Oct  1 09:13:10 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e112 do_prune osdmap full prune enabled
Oct  1 09:13:10 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e113 e113: 3 total, 3 up, 3 in
Oct  1 09:13:10 np0005464214 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e113: 3 total, 3 up, 3 in
Oct  1 09:13:10 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 113 pg[9.15( v 70'389 (0'0,70'389] local-lis/les=112/113 n=5 ec=73/41 lis/c=110/80 les/c/f=111/81/0 sis=112) [1] r=0 lpr=112 pi=[80,112)/1 crt=70'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:13:11 np0005464214 ceph-osd[88455]: log_channel(cluster) log [DBG] : 3.12 scrub starts
Oct  1 09:13:11 np0005464214 ceph-osd[88455]: log_channel(cluster) log [DBG] : 3.12 scrub ok
Oct  1 09:13:11 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v236: 305 pgs: 1 unknown, 304 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:13:11 np0005464214 ceph-osd[90500]: log_channel(cluster) log [DBG] : 3.18 scrub starts
Oct  1 09:13:11 np0005464214 ceph-osd[90500]: log_channel(cluster) log [DBG] : 3.18 scrub ok
Oct  1 09:13:13 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v237: 305 pgs: 305 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 170 B/s wr, 5 op/s; 36 B/s, 1 objects/s recovering
Oct  1 09:13:13 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"} v 0) v1
Oct  1 09:13:13 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]: dispatch
Oct  1 09:13:13 np0005464214 ceph-osd[90500]: log_channel(cluster) log [DBG] : 7.1c scrub starts
Oct  1 09:13:13 np0005464214 ceph-osd[90500]: log_channel(cluster) log [DBG] : 7.1c scrub ok
Oct  1 09:13:14 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e113 do_prune osdmap full prune enabled
Oct  1 09:13:14 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]: dispatch
Oct  1 09:13:14 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]': finished
Oct  1 09:13:14 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e114 e114: 3 total, 3 up, 3 in
Oct  1 09:13:14 np0005464214 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e114: 3 total, 3 up, 3 in
Oct  1 09:13:14 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 114 pg[9.16( v 70'389 (0'0,70'389] local-lis/les=89/90 n=5 ec=73/41 lis/c=89/89 les/c/f=90/90/0 sis=114 pruub=15.651785851s) [0] r=-1 lpr=114 pi=[89,114)/1 crt=70'389 mlcod 0'0 active pruub 204.694458008s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:13:14 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 114 pg[9.16( v 70'389 (0'0,70'389] local-lis/les=89/90 n=5 ec=73/41 lis/c=89/89 les/c/f=90/90/0 sis=114 pruub=15.651704788s) [0] r=-1 lpr=114 pi=[89,114)/1 crt=70'389 mlcod 0'0 unknown NOTIFY pruub 204.694458008s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 09:13:14 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 114 pg[9.16( empty local-lis/les=0/0 n=0 ec=73/41 lis/c=89/89 les/c/f=90/90/0 sis=114) [0] r=0 lpr=114 pi=[89,114)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:13:14 np0005464214 ceph-osd[90500]: log_channel(cluster) log [DBG] : 7.a scrub starts
Oct  1 09:13:14 np0005464214 ceph-osd[90500]: log_channel(cluster) log [DBG] : 7.a scrub ok
Oct  1 09:13:15 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:13:15 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e114 do_prune osdmap full prune enabled
Oct  1 09:13:15 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e115 e115: 3 total, 3 up, 3 in
Oct  1 09:13:15 np0005464214 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e115: 3 total, 3 up, 3 in
Oct  1 09:13:15 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 115 pg[9.16( v 70'389 (0'0,70'389] local-lis/les=89/90 n=5 ec=73/41 lis/c=89/89 les/c/f=90/90/0 sis=115) [0]/[2] r=0 lpr=115 pi=[89,115)/1 crt=70'389 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:13:15 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 115 pg[9.16( v 70'389 (0'0,70'389] local-lis/les=89/90 n=5 ec=73/41 lis/c=89/89 les/c/f=90/90/0 sis=115) [0]/[2] r=0 lpr=115 pi=[89,115)/1 crt=70'389 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  1 09:13:15 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 115 pg[9.16( empty local-lis/les=0/0 n=0 ec=73/41 lis/c=89/89 les/c/f=90/90/0 sis=115) [0]/[2] r=-1 lpr=115 pi=[89,115)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:13:15 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 115 pg[9.16( empty local-lis/les=0/0 n=0 ec=73/41 lis/c=89/89 les/c/f=90/90/0 sis=115) [0]/[2] r=-1 lpr=115 pi=[89,115)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  1 09:13:15 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]': finished
Oct  1 09:13:15 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v240: 305 pgs: 305 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 170 B/s wr, 5 op/s; 36 B/s, 1 objects/s recovering
Oct  1 09:13:15 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"} v 0) v1
Oct  1 09:13:15 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]: dispatch
Oct  1 09:13:16 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e115 do_prune osdmap full prune enabled
Oct  1 09:13:16 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]': finished
Oct  1 09:13:16 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e116 e116: 3 total, 3 up, 3 in
Oct  1 09:13:16 np0005464214 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e116: 3 total, 3 up, 3 in
Oct  1 09:13:16 np0005464214 ceph-osd[89484]: log_channel(cluster) log [DBG] : 6.6 scrub starts
Oct  1 09:13:16 np0005464214 ceph-osd[89484]: log_channel(cluster) log [DBG] : 6.6 scrub ok
Oct  1 09:13:16 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]: dispatch
Oct  1 09:13:16 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]': finished
Oct  1 09:13:16 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 116 pg[9.16( v 70'389 (0'0,70'389] local-lis/les=115/116 n=5 ec=73/41 lis/c=89/89 les/c/f=90/90/0 sis=115) [0]/[2] async=[0] r=0 lpr=115 pi=[89,115)/1 crt=70'389 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:13:17 np0005464214 ceph-osd[89484]: log_channel(cluster) log [DBG] : 6.e scrub starts
Oct  1 09:13:17 np0005464214 ceph-osd[89484]: log_channel(cluster) log [DBG] : 6.e scrub ok
Oct  1 09:13:17 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e116 do_prune osdmap full prune enabled
Oct  1 09:13:17 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e117 e117: 3 total, 3 up, 3 in
Oct  1 09:13:17 np0005464214 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e117: 3 total, 3 up, 3 in
Oct  1 09:13:17 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 117 pg[9.16( v 70'389 (0'0,70'389] local-lis/les=0/0 n=5 ec=73/41 lis/c=115/89 les/c/f=116/90/0 sis=117) [0] r=0 lpr=117 pi=[89,117)/1 luod=0'0 crt=70'389 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:13:17 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 117 pg[9.16( v 70'389 (0'0,70'389] local-lis/les=0/0 n=5 ec=73/41 lis/c=115/89 les/c/f=116/90/0 sis=117) [0] r=0 lpr=117 pi=[89,117)/1 crt=70'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:13:17 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 117 pg[9.16( v 70'389 (0'0,70'389] local-lis/les=115/116 n=5 ec=73/41 lis/c=115/89 les/c/f=116/90/0 sis=117 pruub=15.109450340s) [0] async=[0] r=-1 lpr=117 pi=[89,117)/1 crt=70'389 mlcod 70'389 active pruub 206.975601196s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:13:17 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 117 pg[9.16( v 70'389 (0'0,70'389] local-lis/les=115/116 n=5 ec=73/41 lis/c=115/89 les/c/f=116/90/0 sis=117 pruub=15.109361649s) [0] r=-1 lpr=117 pi=[89,117)/1 crt=70'389 mlcod 0'0 unknown NOTIFY pruub 206.975601196s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 09:13:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 09:13:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 09:13:17 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v243: 305 pgs: 1 active+remapped, 304 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail; 27 B/s, 0 objects/s recovering
Oct  1 09:13:17 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"} v 0) v1
Oct  1 09:13:17 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]: dispatch
Oct  1 09:13:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 09:13:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 09:13:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 09:13:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 09:13:18 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e117 do_prune osdmap full prune enabled
Oct  1 09:13:18 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]': finished
Oct  1 09:13:18 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e118 e118: 3 total, 3 up, 3 in
Oct  1 09:13:18 np0005464214 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e118: 3 total, 3 up, 3 in
Oct  1 09:13:18 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]: dispatch
Oct  1 09:13:18 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 118 pg[9.16( v 70'389 (0'0,70'389] local-lis/les=117/118 n=5 ec=73/41 lis/c=115/89 les/c/f=116/90/0 sis=117) [0] r=0 lpr=117 pi=[89,117)/1 crt=70'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:13:18 np0005464214 ceph-osd[90500]: log_channel(cluster) log [DBG] : 4.18 scrub starts
Oct  1 09:13:18 np0005464214 ceph-osd[90500]: log_channel(cluster) log [DBG] : 4.18 scrub ok
Oct  1 09:13:19 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]': finished
Oct  1 09:13:19 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v245: 305 pgs: 1 active+remapped, 304 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail; 25 B/s, 0 objects/s recovering
Oct  1 09:13:19 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"} v 0) v1
Oct  1 09:13:19 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]: dispatch
Oct  1 09:13:19 np0005464214 ceph-osd[88455]: log_channel(cluster) log [DBG] : 6.7 scrub starts
Oct  1 09:13:19 np0005464214 ceph-osd[88455]: log_channel(cluster) log [DBG] : 6.7 scrub ok
Oct  1 09:13:20 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:13:20 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e118 do_prune osdmap full prune enabled
Oct  1 09:13:20 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]: dispatch
Oct  1 09:13:20 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]': finished
Oct  1 09:13:20 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e119 e119: 3 total, 3 up, 3 in
Oct  1 09:13:20 np0005464214 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e119: 3 total, 3 up, 3 in
Oct  1 09:13:20 np0005464214 ceph-osd[88455]: log_channel(cluster) log [DBG] : 6.3 scrub starts
Oct  1 09:13:20 np0005464214 ceph-osd[88455]: log_channel(cluster) log [DBG] : 6.3 scrub ok
Oct  1 09:13:21 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]': finished
Oct  1 09:13:21 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v247: 305 pgs: 1 active+remapped, 304 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail; 20 B/s, 0 objects/s recovering
Oct  1 09:13:21 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"} v 0) v1
Oct  1 09:13:21 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]: dispatch
Oct  1 09:13:21 np0005464214 ceph-osd[88455]: log_channel(cluster) log [DBG] : 6.5 deep-scrub starts
Oct  1 09:13:21 np0005464214 ceph-osd[88455]: log_channel(cluster) log [DBG] : 6.5 deep-scrub ok
Oct  1 09:13:22 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e119 do_prune osdmap full prune enabled
Oct  1 09:13:22 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]: dispatch
Oct  1 09:13:22 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]': finished
Oct  1 09:13:22 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e120 e120: 3 total, 3 up, 3 in
Oct  1 09:13:22 np0005464214 ceph-osd[90500]: log_channel(cluster) log [DBG] : 4.13 scrub starts
Oct  1 09:13:22 np0005464214 ceph-osd[90500]: log_channel(cluster) log [DBG] : 4.13 scrub ok
Oct  1 09:13:22 np0005464214 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e120: 3 total, 3 up, 3 in
Oct  1 09:13:23 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 119 pg[9.19( v 70'389 (0'0,70'389] local-lis/les=80/81 n=5 ec=73/41 lis/c=80/80 les/c/f=81/81/0 sis=119 pruub=13.049392700s) [2] r=-1 lpr=119 pi=[80,119)/1 crt=70'389 mlcod 0'0 active pruub 220.262161255s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:13:23 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 120 pg[9.19( v 70'389 (0'0,70'389] local-lis/les=80/81 n=5 ec=73/41 lis/c=80/80 les/c/f=81/81/0 sis=119 pruub=13.049277306s) [2] r=-1 lpr=119 pi=[80,119)/1 crt=70'389 mlcod 0'0 unknown NOTIFY pruub 220.262161255s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 09:13:23 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 120 pg[9.19( empty local-lis/les=0/0 n=0 ec=73/41 lis/c=80/80 les/c/f=81/81/0 sis=119) [2] r=0 lpr=120 pi=[80,119)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:13:23 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v249: 305 pgs: 305 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:13:23 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"} v 0) v1
Oct  1 09:13:23 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]: dispatch
Oct  1 09:13:23 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e120 do_prune osdmap full prune enabled
Oct  1 09:13:23 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]': finished
Oct  1 09:13:23 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]: dispatch
Oct  1 09:13:23 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]': finished
Oct  1 09:13:23 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e121 e121: 3 total, 3 up, 3 in
Oct  1 09:13:23 np0005464214 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e121: 3 total, 3 up, 3 in
Oct  1 09:13:23 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 121 pg[9.19( empty local-lis/les=0/0 n=0 ec=73/41 lis/c=80/80 les/c/f=81/81/0 sis=121) [2]/[0] r=-1 lpr=121 pi=[80,121)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:13:23 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 121 pg[9.19( empty local-lis/les=0/0 n=0 ec=73/41 lis/c=80/80 les/c/f=81/81/0 sis=121) [2]/[0] r=-1 lpr=121 pi=[80,121)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  1 09:13:23 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 121 pg[9.19( v 70'389 (0'0,70'389] local-lis/les=80/81 n=5 ec=73/41 lis/c=80/80 les/c/f=81/81/0 sis=121) [2]/[0] r=0 lpr=121 pi=[80,121)/1 crt=70'389 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:13:23 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 121 pg[9.19( v 70'389 (0'0,70'389] local-lis/les=80/81 n=5 ec=73/41 lis/c=80/80 les/c/f=81/81/0 sis=121) [2]/[0] r=0 lpr=121 pi=[80,121)/1 crt=70'389 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  1 09:13:24 np0005464214 ceph-osd[89484]: log_channel(cluster) log [DBG] : 6.c scrub starts
Oct  1 09:13:24 np0005464214 ceph-osd[89484]: log_channel(cluster) log [DBG] : 6.c scrub ok
Oct  1 09:13:24 np0005464214 ceph-osd[88455]: log_channel(cluster) log [DBG] : 6.9 scrub starts
Oct  1 09:13:24 np0005464214 ceph-osd[88455]: log_channel(cluster) log [DBG] : 6.9 scrub ok
Oct  1 09:13:24 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e121 do_prune osdmap full prune enabled
Oct  1 09:13:24 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]': finished
Oct  1 09:13:24 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e122 e122: 3 total, 3 up, 3 in
Oct  1 09:13:24 np0005464214 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e122: 3 total, 3 up, 3 in
Oct  1 09:13:24 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 122 pg[9.19( v 70'389 (0'0,70'389] local-lis/les=121/122 n=5 ec=73/41 lis/c=80/80 les/c/f=81/81/0 sis=121) [2]/[0] async=[2] r=0 lpr=121 pi=[80,121)/1 crt=70'389 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:13:25 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  1 09:13:25 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  1 09:13:25 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  1 09:13:25 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  1 09:13:25 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  1 09:13:25 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:13:25 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev 133d785c-70a1-4e52-b8b6-c6d9dc4bf703 does not exist
Oct  1 09:13:25 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev 2ce3e324-bc9b-4970-844a-698f8e615679 does not exist
Oct  1 09:13:25 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev e8c75221-df56-4f6c-9fbe-352332bcfad6 does not exist
Oct  1 09:13:25 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  1 09:13:25 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  1 09:13:25 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  1 09:13:25 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  1 09:13:25 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  1 09:13:25 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  1 09:13:25 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:13:25 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e122 do_prune osdmap full prune enabled
Oct  1 09:13:25 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e123 e123: 3 total, 3 up, 3 in
Oct  1 09:13:25 np0005464214 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e123: 3 total, 3 up, 3 in
Oct  1 09:13:25 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 123 pg[9.19( v 70'389 (0'0,70'389] local-lis/les=0/0 n=5 ec=73/41 lis/c=121/80 les/c/f=122/81/0 sis=123) [2] r=0 lpr=123 pi=[80,123)/1 luod=0'0 crt=70'389 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:13:25 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 123 pg[9.19( v 70'389 (0'0,70'389] local-lis/les=0/0 n=5 ec=73/41 lis/c=121/80 les/c/f=122/81/0 sis=123) [2] r=0 lpr=123 pi=[80,123)/1 crt=70'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:13:25 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 123 pg[9.19( v 70'389 (0'0,70'389] local-lis/les=121/122 n=5 ec=73/41 lis/c=121/80 les/c/f=122/81/0 sis=123 pruub=15.423336029s) [2] async=[2] r=-1 lpr=123 pi=[80,123)/1 crt=70'389 mlcod 70'389 active pruub 225.076110840s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:13:25 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 123 pg[9.19( v 70'389 (0'0,70'389] local-lis/les=121/122 n=5 ec=73/41 lis/c=121/80 les/c/f=122/81/0 sis=123 pruub=15.423220634s) [2] r=-1 lpr=123 pi=[80,123)/1 crt=70'389 mlcod 0'0 unknown NOTIFY pruub 225.076110840s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 09:13:25 np0005464214 ceph-osd[89484]: log_channel(cluster) log [DBG] : 6.4 scrub starts
Oct  1 09:13:25 np0005464214 ceph-osd[89484]: log_channel(cluster) log [DBG] : 6.4 scrub ok
Oct  1 09:13:25 np0005464214 ceph-osd[88455]: log_channel(cluster) log [DBG] : 6.a scrub starts
Oct  1 09:13:25 np0005464214 ceph-osd[88455]: log_channel(cluster) log [DBG] : 6.a scrub ok
Oct  1 09:13:25 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v253: 305 pgs: 305 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:13:25 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"} v 0) v1
Oct  1 09:13:25 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]: dispatch
Oct  1 09:13:25 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  1 09:13:25 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:13:25 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  1 09:13:25 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]: dispatch
Oct  1 09:13:26 np0005464214 podman[107920]: 2025-10-01 13:13:26.033982346 +0000 UTC m=+0.062652162 container create 47d64db6ed24bcc1c8223177cb26862ffaf6b20c5a80463fa1825ef6808d3fb7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_tharp, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Oct  1 09:13:26 np0005464214 systemd[1]: Started libpod-conmon-47d64db6ed24bcc1c8223177cb26862ffaf6b20c5a80463fa1825ef6808d3fb7.scope.
Oct  1 09:13:26 np0005464214 podman[107920]: 2025-10-01 13:13:26.007201302 +0000 UTC m=+0.035871198 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 09:13:26 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:13:26 np0005464214 podman[107920]: 2025-10-01 13:13:26.136535892 +0000 UTC m=+0.165205748 container init 47d64db6ed24bcc1c8223177cb26862ffaf6b20c5a80463fa1825ef6808d3fb7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_tharp, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 09:13:26 np0005464214 podman[107920]: 2025-10-01 13:13:26.15043746 +0000 UTC m=+0.179107276 container start 47d64db6ed24bcc1c8223177cb26862ffaf6b20c5a80463fa1825ef6808d3fb7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_tharp, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 09:13:26 np0005464214 podman[107920]: 2025-10-01 13:13:26.153926222 +0000 UTC m=+0.182596118 container attach 47d64db6ed24bcc1c8223177cb26862ffaf6b20c5a80463fa1825ef6808d3fb7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_tharp, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True)
Oct  1 09:13:26 np0005464214 pensive_tharp[107936]: 167 167
Oct  1 09:13:26 np0005464214 systemd[1]: libpod-47d64db6ed24bcc1c8223177cb26862ffaf6b20c5a80463fa1825ef6808d3fb7.scope: Deactivated successfully.
Oct  1 09:13:26 np0005464214 podman[107920]: 2025-10-01 13:13:26.160040019 +0000 UTC m=+0.188709835 container died 47d64db6ed24bcc1c8223177cb26862ffaf6b20c5a80463fa1825ef6808d3fb7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_tharp, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct  1 09:13:26 np0005464214 systemd[1]: var-lib-containers-storage-overlay-5608b8270fb9d29d155b716a070243cc772e5a7bfe49108dc6388bf0accfe884-merged.mount: Deactivated successfully.
Oct  1 09:13:26 np0005464214 podman[107920]: 2025-10-01 13:13:26.205329279 +0000 UTC m=+0.233999095 container remove 47d64db6ed24bcc1c8223177cb26862ffaf6b20c5a80463fa1825ef6808d3fb7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_tharp, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 09:13:26 np0005464214 systemd[1]: libpod-conmon-47d64db6ed24bcc1c8223177cb26862ffaf6b20c5a80463fa1825ef6808d3fb7.scope: Deactivated successfully.
Oct  1 09:13:26 np0005464214 podman[107960]: 2025-10-01 13:13:26.404134889 +0000 UTC m=+0.059122878 container create a416ddd580059a9a12c1666f6de05317e2b6ed9aa71c4c54d490d6017ba0b2bf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_darwin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 09:13:26 np0005464214 systemd[1]: Started libpod-conmon-a416ddd580059a9a12c1666f6de05317e2b6ed9aa71c4c54d490d6017ba0b2bf.scope.
Oct  1 09:13:26 np0005464214 podman[107960]: 2025-10-01 13:13:26.375081602 +0000 UTC m=+0.030069651 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 09:13:26 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:13:26 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/46abffe664835ac0e9b2e722eaca4c485f0420e79647cbb6249b1be1216715fa/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 09:13:26 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/46abffe664835ac0e9b2e722eaca4c485f0420e79647cbb6249b1be1216715fa/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 09:13:26 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/46abffe664835ac0e9b2e722eaca4c485f0420e79647cbb6249b1be1216715fa/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 09:13:26 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e123 do_prune osdmap full prune enabled
Oct  1 09:13:26 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/46abffe664835ac0e9b2e722eaca4c485f0420e79647cbb6249b1be1216715fa/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 09:13:26 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/46abffe664835ac0e9b2e722eaca4c485f0420e79647cbb6249b1be1216715fa/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  1 09:13:26 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]': finished
Oct  1 09:13:26 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e124 e124: 3 total, 3 up, 3 in
Oct  1 09:13:26 np0005464214 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e124: 3 total, 3 up, 3 in
Oct  1 09:13:26 np0005464214 podman[107960]: 2025-10-01 13:13:26.519694735 +0000 UTC m=+0.174682764 container init a416ddd580059a9a12c1666f6de05317e2b6ed9aa71c4c54d490d6017ba0b2bf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_darwin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct  1 09:13:26 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 124 pg[9.1c( v 70'389 (0'0,70'389] local-lis/les=97/98 n=5 ec=73/41 lis/c=97/97 les/c/f=98/98/0 sis=124 pruub=12.433906555s) [0] r=-1 lpr=124 pi=[97,124)/1 crt=70'389 mlcod 0'0 active pruub 213.158920288s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:13:26 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 124 pg[9.1c( v 70'389 (0'0,70'389] local-lis/les=97/98 n=5 ec=73/41 lis/c=97/97 les/c/f=98/98/0 sis=124 pruub=12.432840347s) [0] r=-1 lpr=124 pi=[97,124)/1 crt=70'389 mlcod 0'0 unknown NOTIFY pruub 213.158920288s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 09:13:26 np0005464214 podman[107960]: 2025-10-01 13:13:26.530611516 +0000 UTC m=+0.185599505 container start a416ddd580059a9a12c1666f6de05317e2b6ed9aa71c4c54d490d6017ba0b2bf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_darwin, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 09:13:26 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 124 pg[9.1c( empty local-lis/les=0/0 n=0 ec=73/41 lis/c=97/97 les/c/f=98/98/0 sis=124) [0] r=0 lpr=124 pi=[97,124)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:13:26 np0005464214 ceph-osd[89484]: log_channel(cluster) log [DBG] : 6.b scrub starts
Oct  1 09:13:26 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 124 pg[9.19( v 70'389 (0'0,70'389] local-lis/les=123/124 n=5 ec=73/41 lis/c=121/80 les/c/f=122/81/0 sis=123) [2] r=0 lpr=123 pi=[80,123)/1 crt=70'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:13:26 np0005464214 podman[107960]: 2025-10-01 13:13:26.538387187 +0000 UTC m=+0.193375156 container attach a416ddd580059a9a12c1666f6de05317e2b6ed9aa71c4c54d490d6017ba0b2bf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_darwin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True)
Oct  1 09:13:26 np0005464214 ceph-osd[89484]: log_channel(cluster) log [DBG] : 6.b scrub ok
Oct  1 09:13:26 np0005464214 ceph-osd[88455]: log_channel(cluster) log [DBG] : 10.1e scrub starts
Oct  1 09:13:26 np0005464214 ceph-osd[88455]: log_channel(cluster) log [DBG] : 10.1e scrub ok
Oct  1 09:13:26 np0005464214 ceph-osd[90500]: log_channel(cluster) log [DBG] : 4.11 scrub starts
Oct  1 09:13:26 np0005464214 ceph-osd[90500]: log_channel(cluster) log [DBG] : 4.11 scrub ok
Oct  1 09:13:27 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]': finished
Oct  1 09:13:27 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e124 do_prune osdmap full prune enabled
Oct  1 09:13:27 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e125 e125: 3 total, 3 up, 3 in
Oct  1 09:13:27 np0005464214 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e125: 3 total, 3 up, 3 in
Oct  1 09:13:27 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 125 pg[9.1c( empty local-lis/les=0/0 n=0 ec=73/41 lis/c=97/97 les/c/f=98/98/0 sis=125) [0]/[2] r=-1 lpr=125 pi=[97,125)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:13:27 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 125 pg[9.1c( empty local-lis/les=0/0 n=0 ec=73/41 lis/c=97/97 les/c/f=98/98/0 sis=125) [0]/[2] r=-1 lpr=125 pi=[97,125)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  1 09:13:27 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 125 pg[9.1c( v 70'389 (0'0,70'389] local-lis/les=97/98 n=5 ec=73/41 lis/c=97/97 les/c/f=98/98/0 sis=125) [0]/[2] r=0 lpr=125 pi=[97,125)/1 crt=70'389 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:13:27 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 125 pg[9.1c( v 70'389 (0'0,70'389] local-lis/les=97/98 n=5 ec=73/41 lis/c=97/97 les/c/f=98/98/0 sis=125) [0]/[2] r=0 lpr=125 pi=[97,125)/1 crt=70'389 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  1 09:13:27 np0005464214 compassionate_darwin[107977]: --> passed data devices: 0 physical, 3 LVM
Oct  1 09:13:27 np0005464214 compassionate_darwin[107977]: --> relative data size: 1.0
Oct  1 09:13:27 np0005464214 compassionate_darwin[107977]: --> All data devices are unavailable
Oct  1 09:13:27 np0005464214 systemd[1]: libpod-a416ddd580059a9a12c1666f6de05317e2b6ed9aa71c4c54d490d6017ba0b2bf.scope: Deactivated successfully.
Oct  1 09:13:27 np0005464214 systemd[1]: libpod-a416ddd580059a9a12c1666f6de05317e2b6ed9aa71c4c54d490d6017ba0b2bf.scope: Consumed 1.107s CPU time.
Oct  1 09:13:27 np0005464214 podman[107960]: 2025-10-01 13:13:27.684586551 +0000 UTC m=+1.339574500 container died a416ddd580059a9a12c1666f6de05317e2b6ed9aa71c4c54d490d6017ba0b2bf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_darwin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 09:13:27 np0005464214 systemd[1]: var-lib-containers-storage-overlay-46abffe664835ac0e9b2e722eaca4c485f0420e79647cbb6249b1be1216715fa-merged.mount: Deactivated successfully.
Oct  1 09:13:27 np0005464214 podman[107960]: 2025-10-01 13:13:27.755679613 +0000 UTC m=+1.410667572 container remove a416ddd580059a9a12c1666f6de05317e2b6ed9aa71c4c54d490d6017ba0b2bf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_darwin, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct  1 09:13:27 np0005464214 systemd[1]: libpod-conmon-a416ddd580059a9a12c1666f6de05317e2b6ed9aa71c4c54d490d6017ba0b2bf.scope: Deactivated successfully.
Oct  1 09:13:27 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v256: 305 pgs: 305 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:13:27 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"} v 0) v1
Oct  1 09:13:27 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]: dispatch
Oct  1 09:13:28 np0005464214 podman[108163]: 2025-10-01 13:13:28.478405733 +0000 UTC m=+0.053969850 container create 14891ca1f6a8ed02d4bd167e4fcf4cb5d710d2332ceff2a6a9c401e16e0cd471 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_panini, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 09:13:28 np0005464214 systemd[1]: Started libpod-conmon-14891ca1f6a8ed02d4bd167e4fcf4cb5d710d2332ceff2a6a9c401e16e0cd471.scope.
Oct  1 09:13:28 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e125 do_prune osdmap full prune enabled
Oct  1 09:13:28 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]: dispatch
Oct  1 09:13:28 np0005464214 podman[108163]: 2025-10-01 13:13:28.44792894 +0000 UTC m=+0.023493027 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 09:13:28 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]': finished
Oct  1 09:13:28 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e126 e126: 3 total, 3 up, 3 in
Oct  1 09:13:28 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:13:28 np0005464214 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e126: 3 total, 3 up, 3 in
Oct  1 09:13:28 np0005464214 podman[108163]: 2025-10-01 13:13:28.572096954 +0000 UTC m=+0.147661061 container init 14891ca1f6a8ed02d4bd167e4fcf4cb5d710d2332ceff2a6a9c401e16e0cd471 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_panini, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct  1 09:13:28 np0005464214 podman[108163]: 2025-10-01 13:13:28.583705798 +0000 UTC m=+0.159269875 container start 14891ca1f6a8ed02d4bd167e4fcf4cb5d710d2332ceff2a6a9c401e16e0cd471 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_panini, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct  1 09:13:28 np0005464214 podman[108163]: 2025-10-01 13:13:28.587098958 +0000 UTC m=+0.162663035 container attach 14891ca1f6a8ed02d4bd167e4fcf4cb5d710d2332ceff2a6a9c401e16e0cd471 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_panini, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Oct  1 09:13:28 np0005464214 modest_panini[108179]: 167 167
Oct  1 09:13:28 np0005464214 systemd[1]: libpod-14891ca1f6a8ed02d4bd167e4fcf4cb5d710d2332ceff2a6a9c401e16e0cd471.scope: Deactivated successfully.
Oct  1 09:13:28 np0005464214 podman[108163]: 2025-10-01 13:13:28.590358072 +0000 UTC m=+0.165922149 container died 14891ca1f6a8ed02d4bd167e4fcf4cb5d710d2332ceff2a6a9c401e16e0cd471 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_panini, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 09:13:28 np0005464214 systemd[1]: var-lib-containers-storage-overlay-4cd54492720f394be760e19330d3f368269709556790f9056a2ef958b6ee3a89-merged.mount: Deactivated successfully.
Oct  1 09:13:28 np0005464214 podman[108163]: 2025-10-01 13:13:28.632804981 +0000 UTC m=+0.208369058 container remove 14891ca1f6a8ed02d4bd167e4fcf4cb5d710d2332ceff2a6a9c401e16e0cd471 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_panini, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3)
Oct  1 09:13:28 np0005464214 systemd[1]: libpod-conmon-14891ca1f6a8ed02d4bd167e4fcf4cb5d710d2332ceff2a6a9c401e16e0cd471.scope: Deactivated successfully.
Oct  1 09:13:28 np0005464214 ceph-osd[88455]: log_channel(cluster) log [DBG] : 10.d deep-scrub starts
Oct  1 09:13:28 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 126 pg[9.1c( v 70'389 (0'0,70'389] local-lis/les=125/126 n=5 ec=73/41 lis/c=97/97 les/c/f=98/98/0 sis=125) [0]/[2] async=[0] r=0 lpr=125 pi=[97,125)/1 crt=70'389 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:13:28 np0005464214 ceph-osd[88455]: log_channel(cluster) log [DBG] : 10.d deep-scrub ok
Oct  1 09:13:28 np0005464214 podman[108202]: 2025-10-01 13:13:28.869826243 +0000 UTC m=+0.082971866 container create c82253b0530b5dd6671166c9725380f074328ec7eac7d277c6c9f38fa4c92fe1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_northcutt, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 09:13:28 np0005464214 podman[108202]: 2025-10-01 13:13:28.827640613 +0000 UTC m=+0.040786266 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 09:13:28 np0005464214 systemd[1]: Started libpod-conmon-c82253b0530b5dd6671166c9725380f074328ec7eac7d277c6c9f38fa4c92fe1.scope.
Oct  1 09:13:28 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:13:28 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3fcaa6067e2259a0c07fa20809bd09f877b7a9120a86356bb8957b937037ae2e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 09:13:28 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3fcaa6067e2259a0c07fa20809bd09f877b7a9120a86356bb8957b937037ae2e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 09:13:28 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3fcaa6067e2259a0c07fa20809bd09f877b7a9120a86356bb8957b937037ae2e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 09:13:28 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3fcaa6067e2259a0c07fa20809bd09f877b7a9120a86356bb8957b937037ae2e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 09:13:28 np0005464214 podman[108202]: 2025-10-01 13:13:28.980547272 +0000 UTC m=+0.193692955 container init c82253b0530b5dd6671166c9725380f074328ec7eac7d277c6c9f38fa4c92fe1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_northcutt, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True)
Oct  1 09:13:28 np0005464214 podman[108202]: 2025-10-01 13:13:28.987466316 +0000 UTC m=+0.200611959 container start c82253b0530b5dd6671166c9725380f074328ec7eac7d277c6c9f38fa4c92fe1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_northcutt, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct  1 09:13:28 np0005464214 podman[108202]: 2025-10-01 13:13:28.991402082 +0000 UTC m=+0.204547745 container attach c82253b0530b5dd6671166c9725380f074328ec7eac7d277c6c9f38fa4c92fe1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_northcutt, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 09:13:29 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e126 do_prune osdmap full prune enabled
Oct  1 09:13:29 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]': finished
Oct  1 09:13:29 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e127 e127: 3 total, 3 up, 3 in
Oct  1 09:13:29 np0005464214 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e127: 3 total, 3 up, 3 in
Oct  1 09:13:29 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 127 pg[9.1c( v 70'389 (0'0,70'389] local-lis/les=125/126 n=5 ec=73/41 lis/c=125/97 les/c/f=126/98/0 sis=127 pruub=15.120691299s) [0] async=[0] r=-1 lpr=127 pi=[97,127)/1 crt=70'389 mlcod 70'389 active pruub 218.893310547s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:13:29 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 127 pg[9.1c( v 70'389 (0'0,70'389] local-lis/les=125/126 n=5 ec=73/41 lis/c=125/97 les/c/f=126/98/0 sis=127 pruub=15.120597839s) [0] r=-1 lpr=127 pi=[97,127)/1 crt=70'389 mlcod 0'0 unknown NOTIFY pruub 218.893310547s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 09:13:29 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 127 pg[9.1c( v 70'389 (0'0,70'389] local-lis/les=0/0 n=5 ec=73/41 lis/c=125/97 les/c/f=126/98/0 sis=127) [0] r=0 lpr=127 pi=[97,127)/1 luod=0'0 crt=70'389 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:13:29 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 127 pg[9.1c( v 70'389 (0'0,70'389] local-lis/les=0/0 n=5 ec=73/41 lis/c=125/97 les/c/f=126/98/0 sis=127) [0] r=0 lpr=127 pi=[97,127)/1 crt=70'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:13:29 np0005464214 ceph-osd[88455]: log_channel(cluster) log [DBG] : 10.8 scrub starts
Oct  1 09:13:29 np0005464214 ceph-osd[88455]: log_channel(cluster) log [DBG] : 10.8 scrub ok
Oct  1 09:13:29 np0005464214 distracted_northcutt[108219]: {
Oct  1 09:13:29 np0005464214 distracted_northcutt[108219]:    "0": [
Oct  1 09:13:29 np0005464214 distracted_northcutt[108219]:        {
Oct  1 09:13:29 np0005464214 distracted_northcutt[108219]:            "devices": [
Oct  1 09:13:29 np0005464214 distracted_northcutt[108219]:                "/dev/loop3"
Oct  1 09:13:29 np0005464214 distracted_northcutt[108219]:            ],
Oct  1 09:13:29 np0005464214 distracted_northcutt[108219]:            "lv_name": "ceph_lv0",
Oct  1 09:13:29 np0005464214 distracted_northcutt[108219]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  1 09:13:29 np0005464214 distracted_northcutt[108219]:            "lv_size": "21470642176",
Oct  1 09:13:29 np0005464214 distracted_northcutt[108219]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 09:13:29 np0005464214 distracted_northcutt[108219]:            "lv_uuid": "wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS",
Oct  1 09:13:29 np0005464214 distracted_northcutt[108219]:            "name": "ceph_lv0",
Oct  1 09:13:29 np0005464214 distracted_northcutt[108219]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  1 09:13:29 np0005464214 distracted_northcutt[108219]:            "tags": {
Oct  1 09:13:29 np0005464214 distracted_northcutt[108219]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  1 09:13:29 np0005464214 distracted_northcutt[108219]:                "ceph.block_uuid": "wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS",
Oct  1 09:13:29 np0005464214 distracted_northcutt[108219]:                "ceph.cephx_lockbox_secret": "",
Oct  1 09:13:29 np0005464214 distracted_northcutt[108219]:                "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 09:13:29 np0005464214 distracted_northcutt[108219]:                "ceph.cluster_name": "ceph",
Oct  1 09:13:29 np0005464214 distracted_northcutt[108219]:                "ceph.crush_device_class": "",
Oct  1 09:13:29 np0005464214 distracted_northcutt[108219]:                "ceph.encrypted": "0",
Oct  1 09:13:29 np0005464214 distracted_northcutt[108219]:                "ceph.osd_fsid": "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982",
Oct  1 09:13:29 np0005464214 distracted_northcutt[108219]:                "ceph.osd_id": "0",
Oct  1 09:13:29 np0005464214 distracted_northcutt[108219]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 09:13:29 np0005464214 distracted_northcutt[108219]:                "ceph.type": "block",
Oct  1 09:13:29 np0005464214 distracted_northcutt[108219]:                "ceph.vdo": "0"
Oct  1 09:13:29 np0005464214 distracted_northcutt[108219]:            },
Oct  1 09:13:29 np0005464214 distracted_northcutt[108219]:            "type": "block",
Oct  1 09:13:29 np0005464214 distracted_northcutt[108219]:            "vg_name": "ceph_vg0"
Oct  1 09:13:29 np0005464214 distracted_northcutt[108219]:        }
Oct  1 09:13:29 np0005464214 distracted_northcutt[108219]:    ],
Oct  1 09:13:29 np0005464214 distracted_northcutt[108219]:    "1": [
Oct  1 09:13:29 np0005464214 distracted_northcutt[108219]:        {
Oct  1 09:13:29 np0005464214 distracted_northcutt[108219]:            "devices": [
Oct  1 09:13:29 np0005464214 distracted_northcutt[108219]:                "/dev/loop4"
Oct  1 09:13:29 np0005464214 distracted_northcutt[108219]:            ],
Oct  1 09:13:29 np0005464214 distracted_northcutt[108219]:            "lv_name": "ceph_lv1",
Oct  1 09:13:29 np0005464214 distracted_northcutt[108219]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  1 09:13:29 np0005464214 distracted_northcutt[108219]:            "lv_size": "21470642176",
Oct  1 09:13:29 np0005464214 distracted_northcutt[108219]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f5852bc7-e830-489a-b8a9-42dfbbe71426,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 09:13:29 np0005464214 distracted_northcutt[108219]:            "lv_uuid": "BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY",
Oct  1 09:13:29 np0005464214 distracted_northcutt[108219]:            "name": "ceph_lv1",
Oct  1 09:13:29 np0005464214 distracted_northcutt[108219]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  1 09:13:29 np0005464214 distracted_northcutt[108219]:            "tags": {
Oct  1 09:13:29 np0005464214 distracted_northcutt[108219]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  1 09:13:29 np0005464214 distracted_northcutt[108219]:                "ceph.block_uuid": "BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY",
Oct  1 09:13:29 np0005464214 distracted_northcutt[108219]:                "ceph.cephx_lockbox_secret": "",
Oct  1 09:13:29 np0005464214 distracted_northcutt[108219]:                "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 09:13:29 np0005464214 distracted_northcutt[108219]:                "ceph.cluster_name": "ceph",
Oct  1 09:13:29 np0005464214 distracted_northcutt[108219]:                "ceph.crush_device_class": "",
Oct  1 09:13:29 np0005464214 distracted_northcutt[108219]:                "ceph.encrypted": "0",
Oct  1 09:13:29 np0005464214 distracted_northcutt[108219]:                "ceph.osd_fsid": "f5852bc7-e830-489a-b8a9-42dfbbe71426",
Oct  1 09:13:29 np0005464214 distracted_northcutt[108219]:                "ceph.osd_id": "1",
Oct  1 09:13:29 np0005464214 distracted_northcutt[108219]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 09:13:29 np0005464214 distracted_northcutt[108219]:                "ceph.type": "block",
Oct  1 09:13:29 np0005464214 distracted_northcutt[108219]:                "ceph.vdo": "0"
Oct  1 09:13:29 np0005464214 distracted_northcutt[108219]:            },
Oct  1 09:13:29 np0005464214 distracted_northcutt[108219]:            "type": "block",
Oct  1 09:13:29 np0005464214 distracted_northcutt[108219]:            "vg_name": "ceph_vg1"
Oct  1 09:13:29 np0005464214 distracted_northcutt[108219]:        }
Oct  1 09:13:29 np0005464214 distracted_northcutt[108219]:    ],
Oct  1 09:13:29 np0005464214 distracted_northcutt[108219]:    "2": [
Oct  1 09:13:29 np0005464214 distracted_northcutt[108219]:        {
Oct  1 09:13:29 np0005464214 distracted_northcutt[108219]:            "devices": [
Oct  1 09:13:29 np0005464214 distracted_northcutt[108219]:                "/dev/loop5"
Oct  1 09:13:29 np0005464214 distracted_northcutt[108219]:            ],
Oct  1 09:13:29 np0005464214 distracted_northcutt[108219]:            "lv_name": "ceph_lv2",
Oct  1 09:13:29 np0005464214 distracted_northcutt[108219]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  1 09:13:29 np0005464214 distracted_northcutt[108219]:            "lv_size": "21470642176",
Oct  1 09:13:29 np0005464214 distracted_northcutt[108219]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c4c937e2-a8a8-47c3-af37-fdedb6fff1f9,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 09:13:29 np0005464214 distracted_northcutt[108219]:            "lv_uuid": "mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK",
Oct  1 09:13:29 np0005464214 distracted_northcutt[108219]:            "name": "ceph_lv2",
Oct  1 09:13:29 np0005464214 distracted_northcutt[108219]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  1 09:13:29 np0005464214 distracted_northcutt[108219]:            "tags": {
Oct  1 09:13:29 np0005464214 distracted_northcutt[108219]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  1 09:13:29 np0005464214 distracted_northcutt[108219]:                "ceph.block_uuid": "mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK",
Oct  1 09:13:29 np0005464214 distracted_northcutt[108219]:                "ceph.cephx_lockbox_secret": "",
Oct  1 09:13:29 np0005464214 distracted_northcutt[108219]:                "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 09:13:29 np0005464214 distracted_northcutt[108219]:                "ceph.cluster_name": "ceph",
Oct  1 09:13:29 np0005464214 distracted_northcutt[108219]:                "ceph.crush_device_class": "",
Oct  1 09:13:29 np0005464214 distracted_northcutt[108219]:                "ceph.encrypted": "0",
Oct  1 09:13:29 np0005464214 distracted_northcutt[108219]:                "ceph.osd_fsid": "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9",
Oct  1 09:13:29 np0005464214 distracted_northcutt[108219]:                "ceph.osd_id": "2",
Oct  1 09:13:29 np0005464214 distracted_northcutt[108219]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 09:13:29 np0005464214 distracted_northcutt[108219]:                "ceph.type": "block",
Oct  1 09:13:29 np0005464214 distracted_northcutt[108219]:                "ceph.vdo": "0"
Oct  1 09:13:29 np0005464214 distracted_northcutt[108219]:            },
Oct  1 09:13:29 np0005464214 distracted_northcutt[108219]:            "type": "block",
Oct  1 09:13:29 np0005464214 distracted_northcutt[108219]:            "vg_name": "ceph_vg2"
Oct  1 09:13:29 np0005464214 distracted_northcutt[108219]:        }
Oct  1 09:13:29 np0005464214 distracted_northcutt[108219]:    ]
Oct  1 09:13:29 np0005464214 distracted_northcutt[108219]: }
Oct  1 09:13:29 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v259: 305 pgs: 305 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:13:29 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"} v 0) v1
Oct  1 09:13:29 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]: dispatch
Oct  1 09:13:29 np0005464214 systemd[1]: libpod-c82253b0530b5dd6671166c9725380f074328ec7eac7d277c6c9f38fa4c92fe1.scope: Deactivated successfully.
Oct  1 09:13:29 np0005464214 podman[108228]: 2025-10-01 13:13:29.836460397 +0000 UTC m=+0.026936589 container died c82253b0530b5dd6671166c9725380f074328ec7eac7d277c6c9f38fa4c92fe1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_northcutt, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct  1 09:13:29 np0005464214 systemd[1]: var-lib-containers-storage-overlay-3fcaa6067e2259a0c07fa20809bd09f877b7a9120a86356bb8957b937037ae2e-merged.mount: Deactivated successfully.
Oct  1 09:13:29 np0005464214 podman[108228]: 2025-10-01 13:13:29.903719375 +0000 UTC m=+0.094195507 container remove c82253b0530b5dd6671166c9725380f074328ec7eac7d277c6c9f38fa4c92fe1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_northcutt, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Oct  1 09:13:29 np0005464214 systemd[1]: libpod-conmon-c82253b0530b5dd6671166c9725380f074328ec7eac7d277c6c9f38fa4c92fe1.scope: Deactivated successfully.
Oct  1 09:13:30 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:13:30 np0005464214 podman[108476]: 2025-10-01 13:13:30.515865721 +0000 UTC m=+0.044588028 container create 2c6996d44715823682c9307fbfe4df33f73d4f8a8f6797029a0085cf8c661632 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_aryabhata, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Oct  1 09:13:30 np0005464214 systemd[1]: Started libpod-conmon-2c6996d44715823682c9307fbfe4df33f73d4f8a8f6797029a0085cf8c661632.scope.
Oct  1 09:13:30 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e127 do_prune osdmap full prune enabled
Oct  1 09:13:30 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]: dispatch
Oct  1 09:13:30 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]': finished
Oct  1 09:13:30 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e128 e128: 3 total, 3 up, 3 in
Oct  1 09:13:30 np0005464214 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e128: 3 total, 3 up, 3 in
Oct  1 09:13:30 np0005464214 podman[108476]: 2025-10-01 13:13:30.501303992 +0000 UTC m=+0.030026289 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 09:13:30 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 128 pg[9.1e( v 70'389 (0'0,70'389] local-lis/les=89/90 n=5 ec=73/41 lis/c=89/89 les/c/f=90/90/0 sis=128 pruub=15.895023346s) [0] r=-1 lpr=128 pi=[89,128)/1 crt=70'389 mlcod 0'0 active pruub 220.694305420s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:13:30 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 128 pg[9.1e( v 70'389 (0'0,70'389] local-lis/les=89/90 n=5 ec=73/41 lis/c=89/89 les/c/f=90/90/0 sis=128 pruub=15.894518852s) [0] r=-1 lpr=128 pi=[89,128)/1 crt=70'389 mlcod 0'0 unknown NOTIFY pruub 220.694305420s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 09:13:30 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 128 pg[9.1e( empty local-lis/les=0/0 n=0 ec=73/41 lis/c=89/89 les/c/f=90/90/0 sis=128) [0] r=0 lpr=128 pi=[89,128)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:13:30 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 128 pg[9.1c( v 70'389 (0'0,70'389] local-lis/les=127/128 n=5 ec=73/41 lis/c=125/97 les/c/f=126/98/0 sis=127) [0] r=0 lpr=127 pi=[97,127)/1 crt=70'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:13:30 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:13:30 np0005464214 podman[108476]: 2025-10-01 13:13:30.636017915 +0000 UTC m=+0.164740312 container init 2c6996d44715823682c9307fbfe4df33f73d4f8a8f6797029a0085cf8c661632 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_aryabhata, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct  1 09:13:30 np0005464214 podman[108476]: 2025-10-01 13:13:30.648539798 +0000 UTC m=+0.177262135 container start 2c6996d44715823682c9307fbfe4df33f73d4f8a8f6797029a0085cf8c661632 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_aryabhata, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Oct  1 09:13:30 np0005464214 podman[108476]: 2025-10-01 13:13:30.651854445 +0000 UTC m=+0.180576782 container attach 2c6996d44715823682c9307fbfe4df33f73d4f8a8f6797029a0085cf8c661632 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_aryabhata, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct  1 09:13:30 np0005464214 great_aryabhata[108521]: 167 167
Oct  1 09:13:30 np0005464214 systemd[1]: libpod-2c6996d44715823682c9307fbfe4df33f73d4f8a8f6797029a0085cf8c661632.scope: Deactivated successfully.
Oct  1 09:13:30 np0005464214 podman[108476]: 2025-10-01 13:13:30.656202505 +0000 UTC m=+0.184924832 container died 2c6996d44715823682c9307fbfe4df33f73d4f8a8f6797029a0085cf8c661632 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_aryabhata, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2)
Oct  1 09:13:30 np0005464214 systemd[1]: var-lib-containers-storage-overlay-ff28626ae7851121d9dc53108950e1f9f5659f7f4c29e159dac5a581677d26ef-merged.mount: Deactivated successfully.
Oct  1 09:13:30 np0005464214 ceph-osd[88455]: log_channel(cluster) log [DBG] : 10.4 scrub starts
Oct  1 09:13:30 np0005464214 ceph-osd[88455]: log_channel(cluster) log [DBG] : 10.4 scrub ok
Oct  1 09:13:30 np0005464214 podman[108476]: 2025-10-01 13:13:30.697911081 +0000 UTC m=+0.226633408 container remove 2c6996d44715823682c9307fbfe4df33f73d4f8a8f6797029a0085cf8c661632 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_aryabhata, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507)
Oct  1 09:13:30 np0005464214 systemd[1]: libpod-conmon-2c6996d44715823682c9307fbfe4df33f73d4f8a8f6797029a0085cf8c661632.scope: Deactivated successfully.
Oct  1 09:13:30 np0005464214 python3.9[108561]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  1 09:13:30 np0005464214 podman[108574]: 2025-10-01 13:13:30.921954914 +0000 UTC m=+0.065039318 container create e2ac9e16eceb3034069bdde81a451b6ed19a0f27c49762f190c06db1d9f3d970 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_allen, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 09:13:30 np0005464214 systemd[1]: Started libpod-conmon-e2ac9e16eceb3034069bdde81a451b6ed19a0f27c49762f190c06db1d9f3d970.scope.
Oct  1 09:13:30 np0005464214 podman[108574]: 2025-10-01 13:13:30.892234275 +0000 UTC m=+0.035318739 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 09:13:30 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:13:30 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/acf9296e79b09c04bb225093ec0d5e9cd017d79f027cd95743293f9f514bee35/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 09:13:30 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/acf9296e79b09c04bb225093ec0d5e9cd017d79f027cd95743293f9f514bee35/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 09:13:30 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/acf9296e79b09c04bb225093ec0d5e9cd017d79f027cd95743293f9f514bee35/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 09:13:30 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/acf9296e79b09c04bb225093ec0d5e9cd017d79f027cd95743293f9f514bee35/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 09:13:31 np0005464214 podman[108574]: 2025-10-01 13:13:31.012988638 +0000 UTC m=+0.156073062 container init e2ac9e16eceb3034069bdde81a451b6ed19a0f27c49762f190c06db1d9f3d970 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_allen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0)
Oct  1 09:13:31 np0005464214 podman[108574]: 2025-10-01 13:13:31.025935345 +0000 UTC m=+0.169019729 container start e2ac9e16eceb3034069bdde81a451b6ed19a0f27c49762f190c06db1d9f3d970 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_allen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Oct  1 09:13:31 np0005464214 podman[108574]: 2025-10-01 13:13:31.028993874 +0000 UTC m=+0.172078258 container attach e2ac9e16eceb3034069bdde81a451b6ed19a0f27c49762f190c06db1d9f3d970 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_allen, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 09:13:31 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e128 do_prune osdmap full prune enabled
Oct  1 09:13:31 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e129 e129: 3 total, 3 up, 3 in
Oct  1 09:13:31 np0005464214 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e129: 3 total, 3 up, 3 in
Oct  1 09:13:31 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]': finished
Oct  1 09:13:31 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 129 pg[9.1e( empty local-lis/les=0/0 n=0 ec=73/41 lis/c=89/89 les/c/f=90/90/0 sis=129) [0]/[2] r=-1 lpr=129 pi=[89,129)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:13:31 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 129 pg[9.1e( empty local-lis/les=0/0 n=0 ec=73/41 lis/c=89/89 les/c/f=90/90/0 sis=129) [0]/[2] r=-1 lpr=129 pi=[89,129)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  1 09:13:31 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 129 pg[9.1e( v 70'389 (0'0,70'389] local-lis/les=89/90 n=5 ec=73/41 lis/c=89/89 les/c/f=90/90/0 sis=129) [0]/[2] r=0 lpr=129 pi=[89,129)/1 crt=70'389 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:13:31 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 129 pg[9.1e( v 70'389 (0'0,70'389] local-lis/les=89/90 n=5 ec=73/41 lis/c=89/89 les/c/f=90/90/0 sis=129) [0]/[2] r=0 lpr=129 pi=[89,129)/1 crt=70'389 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  1 09:13:31 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v262: 305 pgs: 305 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:13:31 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"} v 0) v1
Oct  1 09:13:31 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct  1 09:13:31 np0005464214 ceph-osd[90500]: log_channel(cluster) log [DBG] : 4.e scrub starts
Oct  1 09:13:31 np0005464214 ceph-osd[90500]: log_channel(cluster) log [DBG] : 4.e scrub ok
Oct  1 09:13:32 np0005464214 sweet_allen[108591]: {
Oct  1 09:13:32 np0005464214 sweet_allen[108591]:    "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982": {
Oct  1 09:13:32 np0005464214 sweet_allen[108591]:        "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 09:13:32 np0005464214 sweet_allen[108591]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  1 09:13:32 np0005464214 sweet_allen[108591]:        "osd_id": 0,
Oct  1 09:13:32 np0005464214 sweet_allen[108591]:        "osd_uuid": "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982",
Oct  1 09:13:32 np0005464214 sweet_allen[108591]:        "type": "bluestore"
Oct  1 09:13:32 np0005464214 sweet_allen[108591]:    },
Oct  1 09:13:32 np0005464214 sweet_allen[108591]:    "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9": {
Oct  1 09:13:32 np0005464214 sweet_allen[108591]:        "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 09:13:32 np0005464214 sweet_allen[108591]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  1 09:13:32 np0005464214 sweet_allen[108591]:        "osd_id": 2,
Oct  1 09:13:32 np0005464214 sweet_allen[108591]:        "osd_uuid": "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9",
Oct  1 09:13:32 np0005464214 sweet_allen[108591]:        "type": "bluestore"
Oct  1 09:13:32 np0005464214 sweet_allen[108591]:    },
Oct  1 09:13:32 np0005464214 sweet_allen[108591]:    "f5852bc7-e830-489a-b8a9-42dfbbe71426": {
Oct  1 09:13:32 np0005464214 sweet_allen[108591]:        "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 09:13:32 np0005464214 sweet_allen[108591]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  1 09:13:32 np0005464214 sweet_allen[108591]:        "osd_id": 1,
Oct  1 09:13:32 np0005464214 sweet_allen[108591]:        "osd_uuid": "f5852bc7-e830-489a-b8a9-42dfbbe71426",
Oct  1 09:13:32 np0005464214 sweet_allen[108591]:        "type": "bluestore"
Oct  1 09:13:32 np0005464214 sweet_allen[108591]:    }
Oct  1 09:13:32 np0005464214 sweet_allen[108591]: }
Oct  1 09:13:32 np0005464214 systemd[1]: libpod-e2ac9e16eceb3034069bdde81a451b6ed19a0f27c49762f190c06db1d9f3d970.scope: Deactivated successfully.
Oct  1 09:13:32 np0005464214 podman[108574]: 2025-10-01 13:13:32.034998308 +0000 UTC m=+1.178082692 container died e2ac9e16eceb3034069bdde81a451b6ed19a0f27c49762f190c06db1d9f3d970 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_allen, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True)
Oct  1 09:13:32 np0005464214 systemd[1]: var-lib-containers-storage-overlay-acf9296e79b09c04bb225093ec0d5e9cd017d79f027cd95743293f9f514bee35-merged.mount: Deactivated successfully.
Oct  1 09:13:32 np0005464214 podman[108574]: 2025-10-01 13:13:32.093534395 +0000 UTC m=+1.236618779 container remove e2ac9e16eceb3034069bdde81a451b6ed19a0f27c49762f190c06db1d9f3d970 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_allen, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True)
Oct  1 09:13:32 np0005464214 systemd[1]: libpod-conmon-e2ac9e16eceb3034069bdde81a451b6ed19a0f27c49762f190c06db1d9f3d970.scope: Deactivated successfully.
Oct  1 09:13:32 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  1 09:13:32 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:13:32 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  1 09:13:32 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:13:32 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev 401be56a-6046-4c5b-a34c-e317fa9245ed does not exist
Oct  1 09:13:32 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev 2945ce16-a488-4427-a4ff-ceed9baa97c6 does not exist
Oct  1 09:13:32 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e129 do_prune osdmap full prune enabled
Oct  1 09:13:32 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct  1 09:13:32 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:13:32 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:13:32 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]': finished
Oct  1 09:13:32 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e130 e130: 3 total, 3 up, 3 in
Oct  1 09:13:32 np0005464214 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e130: 3 total, 3 up, 3 in
Oct  1 09:13:32 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 130 pg[9.1f( v 70'389 (0'0,70'389] local-lis/les=89/90 n=5 ec=73/41 lis/c=89/89 les/c/f=90/90/0 sis=130 pruub=13.869067192s) [1] r=-1 lpr=130 pi=[89,130)/1 crt=70'389 mlcod 0'0 active pruub 220.694320679s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:13:32 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 130 pg[9.1f( v 70'389 (0'0,70'389] local-lis/les=89/90 n=5 ec=73/41 lis/c=89/89 les/c/f=90/90/0 sis=130 pruub=13.868596077s) [1] r=-1 lpr=130 pi=[89,130)/1 crt=70'389 mlcod 0'0 unknown NOTIFY pruub 220.694320679s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 09:13:32 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 130 pg[9.1f( empty local-lis/les=0/0 n=0 ec=73/41 lis/c=89/89 les/c/f=90/90/0 sis=130) [1] r=0 lpr=130 pi=[89,130)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:13:32 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 130 pg[9.1e( v 70'389 (0'0,70'389] local-lis/les=129/130 n=5 ec=73/41 lis/c=89/89 les/c/f=90/90/0 sis=129) [0]/[2] async=[0] r=0 lpr=129 pi=[89,129)/1 crt=70'389 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:13:32 np0005464214 python3.9[108970]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Oct  1 09:13:33 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e130 do_prune osdmap full prune enabled
Oct  1 09:13:33 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]': finished
Oct  1 09:13:33 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e131 e131: 3 total, 3 up, 3 in
Oct  1 09:13:33 np0005464214 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e131: 3 total, 3 up, 3 in
Oct  1 09:13:33 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 131 pg[9.1e( v 70'389 (0'0,70'389] local-lis/les=0/0 n=5 ec=73/41 lis/c=129/89 les/c/f=130/90/0 sis=131) [0] r=0 lpr=131 pi=[89,131)/1 luod=0'0 crt=70'389 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:13:33 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 131 pg[9.1e( v 70'389 (0'0,70'389] local-lis/les=0/0 n=5 ec=73/41 lis/c=129/89 les/c/f=130/90/0 sis=131) [0] r=0 lpr=131 pi=[89,131)/1 crt=70'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:13:33 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 131 pg[9.1e( v 70'389 (0'0,70'389] local-lis/les=129/130 n=5 ec=73/41 lis/c=129/89 les/c/f=130/90/0 sis=131 pruub=14.994687080s) [0] async=[0] r=-1 lpr=131 pi=[89,131)/1 crt=70'389 mlcod 70'389 active pruub 222.833190918s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:13:33 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 131 pg[9.1e( v 70'389 (0'0,70'389] local-lis/les=129/130 n=5 ec=73/41 lis/c=129/89 les/c/f=130/90/0 sis=131 pruub=14.994582176s) [0] r=-1 lpr=131 pi=[89,131)/1 crt=70'389 mlcod 0'0 unknown NOTIFY pruub 222.833190918s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 09:13:33 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 131 pg[9.1f( v 70'389 (0'0,70'389] local-lis/les=89/90 n=5 ec=73/41 lis/c=89/89 les/c/f=90/90/0 sis=131) [1]/[2] r=0 lpr=131 pi=[89,131)/1 crt=70'389 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:13:33 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 131 pg[9.1f( v 70'389 (0'0,70'389] local-lis/les=89/90 n=5 ec=73/41 lis/c=89/89 les/c/f=90/90/0 sis=131) [1]/[2] r=0 lpr=131 pi=[89,131)/1 crt=70'389 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  1 09:13:33 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 131 pg[9.1f( empty local-lis/les=0/0 n=0 ec=73/41 lis/c=89/89 les/c/f=90/90/0 sis=131) [1]/[2] r=-1 lpr=131 pi=[89,131)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:13:33 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 131 pg[9.1f( empty local-lis/les=0/0 n=0 ec=73/41 lis/c=89/89 les/c/f=90/90/0 sis=131) [1]/[2] r=-1 lpr=131 pi=[89,131)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  1 09:13:33 np0005464214 ceph-osd[88455]: log_channel(cluster) log [DBG] : 10.7 scrub starts
Oct  1 09:13:33 np0005464214 ceph-osd[88455]: log_channel(cluster) log [DBG] : 10.7 scrub ok
Oct  1 09:13:33 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v265: 305 pgs: 1 remapped+peering, 304 active+clean; 456 KiB data, 145 MiB used, 60 GiB / 60 GiB avail; 27 B/s, 1 objects/s recovering
Oct  1 09:13:33 np0005464214 python3.9[109122]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Oct  1 09:13:34 np0005464214 python3.9[109274]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 09:13:34 np0005464214 ceph-osd[88455]: log_channel(cluster) log [DBG] : 10.1 scrub starts
Oct  1 09:13:34 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e131 do_prune osdmap full prune enabled
Oct  1 09:13:34 np0005464214 ceph-osd[88455]: log_channel(cluster) log [DBG] : 10.1 scrub ok
Oct  1 09:13:34 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e132 e132: 3 total, 3 up, 3 in
Oct  1 09:13:34 np0005464214 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e132: 3 total, 3 up, 3 in
Oct  1 09:13:34 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 132 pg[9.1f( v 70'389 (0'0,70'389] local-lis/les=131/132 n=5 ec=73/41 lis/c=89/89 les/c/f=90/90/0 sis=131) [1]/[2] async=[1] r=0 lpr=131 pi=[89,131)/1 crt=70'389 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:13:34 np0005464214 ceph-osd[88455]: osd.0 pg_epoch: 132 pg[9.1e( v 70'389 (0'0,70'389] local-lis/les=131/132 n=5 ec=73/41 lis/c=129/89 les/c/f=130/90/0 sis=131) [0] r=0 lpr=131 pi=[89,131)/1 crt=70'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:13:35 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:13:35 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e132 do_prune osdmap full prune enabled
Oct  1 09:13:35 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e133 e133: 3 total, 3 up, 3 in
Oct  1 09:13:35 np0005464214 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e133: 3 total, 3 up, 3 in
Oct  1 09:13:35 np0005464214 python3.9[109426]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
Oct  1 09:13:35 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 133 pg[9.1f( v 70'389 (0'0,70'389] local-lis/les=131/132 n=5 ec=73/41 lis/c=131/89 les/c/f=132/90/0 sis=133 pruub=15.146118164s) [1] async=[1] r=-1 lpr=133 pi=[89,133)/1 crt=70'389 mlcod 70'389 active pruub 224.864700317s@ mbc={255={}}] start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:13:35 np0005464214 ceph-osd[90500]: osd.2 pg_epoch: 133 pg[9.1f( v 70'389 (0'0,70'389] local-lis/les=131/132 n=5 ec=73/41 lis/c=131/89 les/c/f=132/90/0 sis=133 pruub=15.146004677s) [1] r=-1 lpr=133 pi=[89,133)/1 crt=70'389 mlcod 0'0 unknown NOTIFY pruub 224.864700317s@ mbc={}] state<Start>: transitioning to Stray
Oct  1 09:13:35 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 133 pg[9.1f( v 70'389 (0'0,70'389] local-lis/les=0/0 n=5 ec=73/41 lis/c=131/89 les/c/f=132/90/0 sis=133) [1] r=0 lpr=133 pi=[89,133)/1 luod=0'0 crt=70'389 mlcod 0'0 active mbc={}] start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  1 09:13:35 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 133 pg[9.1f( v 70'389 (0'0,70'389] local-lis/les=0/0 n=5 ec=73/41 lis/c=131/89 les/c/f=132/90/0 sis=133) [1] r=0 lpr=133 pi=[89,133)/1 crt=70'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  1 09:13:35 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v268: 305 pgs: 1 remapped+peering, 304 active+clean; 456 KiB data, 145 MiB used, 60 GiB / 60 GiB avail; 27 B/s, 1 objects/s recovering
Oct  1 09:13:35 np0005464214 ceph-osd[90500]: log_channel(cluster) log [DBG] : 4.1b scrub starts
Oct  1 09:13:35 np0005464214 ceph-osd[90500]: log_channel(cluster) log [DBG] : 4.1b scrub ok
Oct  1 09:13:36 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e133 do_prune osdmap full prune enabled
Oct  1 09:13:36 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 e134: 3 total, 3 up, 3 in
Oct  1 09:13:36 np0005464214 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e134: 3 total, 3 up, 3 in
Oct  1 09:13:36 np0005464214 ceph-osd[89484]: osd.1 pg_epoch: 134 pg[9.1f( v 70'389 (0'0,70'389] local-lis/les=133/134 n=5 ec=73/41 lis/c=131/89 les/c/f=132/90/0 sis=133) [1] r=0 lpr=133 pi=[89,133)/1 crt=70'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  1 09:13:36 np0005464214 python3.9[109578]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  1 09:13:37 np0005464214 ceph-osd[89484]: log_channel(cluster) log [DBG] : 6.d scrub starts
Oct  1 09:13:37 np0005464214 ceph-osd[89484]: log_channel(cluster) log [DBG] : 6.d scrub ok
Oct  1 09:13:37 np0005464214 ceph-osd[88455]: log_channel(cluster) log [DBG] : 10.16 scrub starts
Oct  1 09:13:37 np0005464214 ceph-osd[88455]: log_channel(cluster) log [DBG] : 10.16 scrub ok
Oct  1 09:13:37 np0005464214 python3.9[109730]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 09:13:37 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v270: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 26 B/s, 2 objects/s recovering
Oct  1 09:13:38 np0005464214 python3.9[109810]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem _original_basename=tls-ca-bundle.pem recurse=False state=file path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 09:13:38 np0005464214 ceph-osd[88455]: log_channel(cluster) log [DBG] : 10.e deep-scrub starts
Oct  1 09:13:38 np0005464214 ceph-osd[88455]: log_channel(cluster) log [DBG] : 10.e deep-scrub ok
Oct  1 09:13:39 np0005464214 ceph-osd[89484]: log_channel(cluster) log [DBG] : 8.1 scrub starts
Oct  1 09:13:39 np0005464214 ceph-osd[89484]: log_channel(cluster) log [DBG] : 8.1 scrub ok
Oct  1 09:13:39 np0005464214 python3.9[109962]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Oct  1 09:13:39 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v271: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 18 B/s, 1 objects/s recovering
Oct  1 09:13:40 np0005464214 ceph-osd[90500]: log_channel(cluster) log [DBG] : 4.1c scrub starts
Oct  1 09:13:40 np0005464214 ceph-osd[90500]: log_channel(cluster) log [DBG] : 4.1c scrub ok
Oct  1 09:13:40 np0005464214 python3.9[110117]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Oct  1 09:13:40 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:13:41 np0005464214 python3.9[110270]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Oct  1 09:13:41 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v272: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 15 B/s, 1 objects/s recovering
Oct  1 09:13:42 np0005464214 python3.9[110422]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Oct  1 09:13:43 np0005464214 python3.9[110574]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct  1 09:13:43 np0005464214 ceph-osd[89484]: log_channel(cluster) log [DBG] : 9.2 scrub starts
Oct  1 09:13:43 np0005464214 ceph-osd[89484]: log_channel(cluster) log [DBG] : 9.2 scrub ok
Oct  1 09:13:43 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v273: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 0 objects/s recovering
Oct  1 09:13:45 np0005464214 python3.9[110727]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  1 09:13:45 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:13:45 np0005464214 ceph-osd[89484]: log_channel(cluster) log [DBG] : 8.3 scrub starts
Oct  1 09:13:45 np0005464214 ceph-osd[89484]: log_channel(cluster) log [DBG] : 8.3 scrub ok
Oct  1 09:13:45 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v274: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 0 objects/s recovering
Oct  1 09:13:45 np0005464214 python3.9[110879]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 09:13:46 np0005464214 python3.9[110957]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/modules-load.d/99-edpm.conf _original_basename=edpm-modprobe.conf.j2 recurse=False state=file path=/etc/modules-load.d/99-edpm.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  1 09:13:46 np0005464214 ceph-osd[89484]: log_channel(cluster) log [DBG] : 9.4 scrub starts
Oct  1 09:13:46 np0005464214 ceph-osd[89484]: log_channel(cluster) log [DBG] : 9.4 scrub ok
Oct  1 09:13:47 np0005464214 python3.9[111109]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 09:13:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] Optimize plan auto_2025-10-01_13:13:47
Oct  1 09:13:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  1 09:13:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] do_upmap
Oct  1 09:13:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] pools ['vms', 'default.rgw.log', 'cephfs.cephfs.meta', 'images', 'default.rgw.meta', 'backups', 'cephfs.cephfs.data', 'volumes', '.mgr', '.rgw.root', 'default.rgw.control']
Oct  1 09:13:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] prepared 0/10 changes
Oct  1 09:13:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 09:13:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 09:13:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 09:13:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 09:13:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 09:13:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 09:13:47 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v275: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 0 objects/s recovering
Oct  1 09:13:47 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  1 09:13:47 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  1 09:13:47 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  1 09:13:47 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  1 09:13:47 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  1 09:13:47 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  1 09:13:47 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  1 09:13:47 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  1 09:13:47 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  1 09:13:47 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  1 09:13:47 np0005464214 python3.9[111187]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/sysctl.d/99-edpm.conf _original_basename=edpm-sysctl.conf.j2 recurse=False state=file path=/etc/sysctl.d/99-edpm.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  1 09:13:48 np0005464214 ceph-osd[88455]: log_channel(cluster) log [DBG] : 10.9 scrub starts
Oct  1 09:13:48 np0005464214 ceph-osd[88455]: log_channel(cluster) log [DBG] : 10.9 scrub ok
Oct  1 09:13:48 np0005464214 python3.9[111339]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct  1 09:13:49 np0005464214 ceph-osd[89484]: log_channel(cluster) log [DBG] : 8.5 scrub starts
Oct  1 09:13:49 np0005464214 ceph-osd[89484]: log_channel(cluster) log [DBG] : 8.5 scrub ok
Oct  1 09:13:49 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v276: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:13:50 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:13:50 np0005464214 ceph-osd[89484]: log_channel(cluster) log [DBG] : 8.7 scrub starts
Oct  1 09:13:50 np0005464214 ceph-osd[89484]: log_channel(cluster) log [DBG] : 8.7 scrub ok
Oct  1 09:13:50 np0005464214 python3.9[111490]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  1 09:13:51 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v277: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:13:51 np0005464214 python3.9[111642]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Oct  1 09:13:52 np0005464214 python3.9[111792]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  1 09:13:52 np0005464214 ceph-osd[90500]: log_channel(cluster) log [DBG] : 4.1a scrub starts
Oct  1 09:13:52 np0005464214 ceph-osd[90500]: log_channel(cluster) log [DBG] : 4.1a scrub ok
Oct  1 09:13:53 np0005464214 ceph-osd[88455]: log_channel(cluster) log [DBG] : 8.10 scrub starts
Oct  1 09:13:53 np0005464214 ceph-osd[88455]: log_channel(cluster) log [DBG] : 8.10 scrub ok
Oct  1 09:13:53 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v278: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:13:54 np0005464214 python3.9[111944]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  1 09:13:54 np0005464214 systemd[1]: Stopping Dynamic System Tuning Daemon...
Oct  1 09:13:54 np0005464214 systemd[1]: tuned.service: Deactivated successfully.
Oct  1 09:13:54 np0005464214 systemd[1]: Stopped Dynamic System Tuning Daemon.
Oct  1 09:13:54 np0005464214 systemd[1]: Starting Dynamic System Tuning Daemon...
Oct  1 09:13:54 np0005464214 systemd[76436]: Created slice User Background Tasks Slice.
Oct  1 09:13:54 np0005464214 systemd[76436]: Starting Cleanup of User's Temporary Files and Directories...
Oct  1 09:13:54 np0005464214 systemd[76436]: Finished Cleanup of User's Temporary Files and Directories.
Oct  1 09:13:54 np0005464214 systemd[1]: Started Dynamic System Tuning Daemon.
Oct  1 09:13:54 np0005464214 ceph-osd[88455]: log_channel(cluster) log [DBG] : 8.b scrub starts
Oct  1 09:13:54 np0005464214 ceph-osd[88455]: log_channel(cluster) log [DBG] : 8.b scrub ok
Oct  1 09:13:55 np0005464214 python3.9[112106]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Oct  1 09:13:55 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:13:55 np0005464214 ceph-osd[89484]: log_channel(cluster) log [DBG] : 8.8 scrub starts
Oct  1 09:13:55 np0005464214 ceph-osd[89484]: log_channel(cluster) log [DBG] : 8.8 scrub ok
Oct  1 09:13:55 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v279: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:13:55 np0005464214 ceph-osd[88455]: log_channel(cluster) log [DBG] : 11.14 scrub starts
Oct  1 09:13:55 np0005464214 ceph-osd[88455]: log_channel(cluster) log [DBG] : 11.14 scrub ok
Oct  1 09:13:55 np0005464214 ceph-osd[90500]: log_channel(cluster) log [DBG] : 4.a scrub starts
Oct  1 09:13:55 np0005464214 ceph-osd[90500]: log_channel(cluster) log [DBG] : 4.a scrub ok
Oct  1 09:13:56 np0005464214 ceph-osd[88455]: log_channel(cluster) log [DBG] : 11.4 scrub starts
Oct  1 09:13:56 np0005464214 ceph-osd[88455]: log_channel(cluster) log [DBG] : 11.4 scrub ok
Oct  1 09:13:56 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] _maybe_adjust
Oct  1 09:13:56 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:13:56 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  1 09:13:56 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:13:56 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 09:13:56 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:13:56 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 09:13:56 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:13:56 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 09:13:56 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:13:56 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 09:13:56 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:13:56 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct  1 09:13:56 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:13:56 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 09:13:56 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:13:56 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  1 09:13:56 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:13:56 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  1 09:13:56 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:13:56 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 09:13:56 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:13:56 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  1 09:13:57 np0005464214 python3.9[112258]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  1 09:13:57 np0005464214 ceph-osd[89484]: log_channel(cluster) log [DBG] : 8.a scrub starts
Oct  1 09:13:57 np0005464214 ceph-osd[89484]: log_channel(cluster) log [DBG] : 8.a scrub ok
Oct  1 09:13:57 np0005464214 ceph-osd[88455]: log_channel(cluster) log [DBG] : 8.9 scrub starts
Oct  1 09:13:57 np0005464214 ceph-osd[88455]: log_channel(cluster) log [DBG] : 8.9 scrub ok
Oct  1 09:13:57 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v280: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:13:58 np0005464214 python3.9[112412]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  1 09:13:58 np0005464214 systemd[1]: session-35.scope: Deactivated successfully.
Oct  1 09:13:58 np0005464214 systemd[1]: session-35.scope: Consumed 1min 4.844s CPU time.
Oct  1 09:13:58 np0005464214 systemd-logind[818]: Session 35 logged out. Waiting for processes to exit.
Oct  1 09:13:58 np0005464214 systemd-logind[818]: Removed session 35.
Oct  1 09:13:58 np0005464214 ceph-osd[90500]: log_channel(cluster) log [DBG] : 6.f deep-scrub starts
Oct  1 09:13:58 np0005464214 ceph-osd[90500]: log_channel(cluster) log [DBG] : 6.f deep-scrub ok
Oct  1 09:13:59 np0005464214 ceph-osd[89484]: log_channel(cluster) log [DBG] : 9.a deep-scrub starts
Oct  1 09:13:59 np0005464214 ceph-osd[89484]: log_channel(cluster) log [DBG] : 9.a deep-scrub ok
Oct  1 09:13:59 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v281: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:13:59 np0005464214 ceph-osd[90500]: log_channel(cluster) log [DBG] : 10.3 scrub starts
Oct  1 09:13:59 np0005464214 ceph-osd[90500]: log_channel(cluster) log [DBG] : 10.3 scrub ok
Oct  1 09:14:00 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:14:00 np0005464214 ceph-osd[88455]: log_channel(cluster) log [DBG] : 8.6 scrub starts
Oct  1 09:14:00 np0005464214 ceph-osd[88455]: log_channel(cluster) log [DBG] : 8.6 scrub ok
Oct  1 09:14:01 np0005464214 ceph-osd[88455]: log_channel(cluster) log [DBG] : 8.f scrub starts
Oct  1 09:14:01 np0005464214 ceph-osd[88455]: log_channel(cluster) log [DBG] : 8.f scrub ok
Oct  1 09:14:01 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v282: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:14:01 np0005464214 ceph-osd[90500]: log_channel(cluster) log [DBG] : 10.5 scrub starts
Oct  1 09:14:01 np0005464214 ceph-osd[90500]: log_channel(cluster) log [DBG] : 10.5 scrub ok
Oct  1 09:14:02 np0005464214 ceph-osd[88455]: log_channel(cluster) log [DBG] : 8.e scrub starts
Oct  1 09:14:02 np0005464214 ceph-osd[88455]: log_channel(cluster) log [DBG] : 8.e scrub ok
Oct  1 09:14:03 np0005464214 ceph-osd[89484]: log_channel(cluster) log [DBG] : 9.10 scrub starts
Oct  1 09:14:03 np0005464214 ceph-osd[89484]: log_channel(cluster) log [DBG] : 9.10 scrub ok
Oct  1 09:14:03 np0005464214 ceph-osd[88455]: log_channel(cluster) log [DBG] : 11.f scrub starts
Oct  1 09:14:03 np0005464214 ceph-osd[88455]: log_channel(cluster) log [DBG] : 11.f scrub ok
Oct  1 09:14:03 np0005464214 systemd-logind[818]: New session 36 of user zuul.
Oct  1 09:14:03 np0005464214 systemd[1]: Started Session 36 of User zuul.
Oct  1 09:14:03 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v283: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:14:04 np0005464214 python3.9[112594]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  1 09:14:04 np0005464214 ceph-osd[90500]: log_channel(cluster) log [DBG] : 10.a scrub starts
Oct  1 09:14:04 np0005464214 ceph-osd[90500]: log_channel(cluster) log [DBG] : 10.a scrub ok
Oct  1 09:14:05 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:14:05 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v284: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:14:06 np0005464214 python3.9[112752]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Oct  1 09:14:06 np0005464214 ceph-osd[89484]: log_channel(cluster) log [DBG] : 8.13 scrub starts
Oct  1 09:14:06 np0005464214 ceph-osd[89484]: log_channel(cluster) log [DBG] : 8.13 scrub ok
Oct  1 09:14:06 np0005464214 ceph-osd[90500]: log_channel(cluster) log [DBG] : 10.c scrub starts
Oct  1 09:14:06 np0005464214 ceph-osd[90500]: log_channel(cluster) log [DBG] : 10.c scrub ok
Oct  1 09:14:07 np0005464214 python3.9[112905]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct  1 09:14:07 np0005464214 ceph-osd[88455]: log_channel(cluster) log [DBG] : 8.c scrub starts
Oct  1 09:14:07 np0005464214 ceph-osd[88455]: log_channel(cluster) log [DBG] : 8.c scrub ok
Oct  1 09:14:07 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v285: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:14:08 np0005464214 python3.9[112989]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Oct  1 09:14:08 np0005464214 ceph-osd[89484]: log_channel(cluster) log [DBG] : 9.12 scrub starts
Oct  1 09:14:08 np0005464214 ceph-osd[89484]: log_channel(cluster) log [DBG] : 9.12 scrub ok
Oct  1 09:14:08 np0005464214 ceph-osd[88455]: log_channel(cluster) log [DBG] : 11.1 scrub starts
Oct  1 09:14:08 np0005464214 ceph-osd[88455]: log_channel(cluster) log [DBG] : 11.1 scrub ok
Oct  1 09:14:09 np0005464214 ceph-osd[89484]: log_channel(cluster) log [DBG] : 8.16 scrub starts
Oct  1 09:14:09 np0005464214 ceph-osd[89484]: log_channel(cluster) log [DBG] : 8.16 scrub ok
Oct  1 09:14:09 np0005464214 ceph-osd[88455]: log_channel(cluster) log [DBG] : 11.e scrub starts
Oct  1 09:14:09 np0005464214 ceph-osd[88455]: log_channel(cluster) log [DBG] : 11.e scrub ok
Oct  1 09:14:09 np0005464214 ceph-osd[90500]: log_channel(cluster) log [DBG] : 10.18 scrub starts
Oct  1 09:14:09 np0005464214 ceph-osd[90500]: log_channel(cluster) log [DBG] : 10.18 scrub ok
Oct  1 09:14:09 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v286: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:14:10 np0005464214 python3.9[113142]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct  1 09:14:10 np0005464214 ceph-osd[89484]: log_channel(cluster) log [DBG] : 9.14 scrub starts
Oct  1 09:14:10 np0005464214 ceph-osd[89484]: log_channel(cluster) log [DBG] : 9.14 scrub ok
Oct  1 09:14:10 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:14:10 np0005464214 ceph-osd[90500]: log_channel(cluster) log [DBG] : 10.1b deep-scrub starts
Oct  1 09:14:10 np0005464214 ceph-osd[90500]: log_channel(cluster) log [DBG] : 10.1b deep-scrub ok
Oct  1 09:14:11 np0005464214 ceph-osd[89484]: log_channel(cluster) log [DBG] : 8.17 deep-scrub starts
Oct  1 09:14:11 np0005464214 ceph-osd[89484]: log_channel(cluster) log [DBG] : 8.17 deep-scrub ok
Oct  1 09:14:11 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v287: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:14:12 np0005464214 python3.9[113295]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct  1 09:14:13 np0005464214 ceph-osd[89484]: log_channel(cluster) log [DBG] : 8.19 scrub starts
Oct  1 09:14:13 np0005464214 ceph-osd[89484]: log_channel(cluster) log [DBG] : 8.19 scrub ok
Oct  1 09:14:13 np0005464214 python3.9[113448]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  1 09:14:13 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v288: 305 pgs: 305 active+clean; 455 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:14:14 np0005464214 python3.9[113600]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None
Oct  1 09:14:15 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:14:15 np0005464214 python3.9[113750]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  1 09:14:15 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v289: 305 pgs: 305 active+clean; 455 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:14:16 np0005464214 python3.9[113908]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct  1 09:14:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 09:14:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 09:14:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 09:14:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 09:14:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 09:14:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 09:14:17 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v290: 305 pgs: 305 active+clean; 455 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:14:18 np0005464214 ceph-osd[88455]: log_channel(cluster) log [DBG] : 11.6 scrub starts
Oct  1 09:14:18 np0005464214 ceph-osd[88455]: log_channel(cluster) log [DBG] : 11.6 scrub ok
Oct  1 09:14:18 np0005464214 python3.9[114061]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  1 09:14:19 np0005464214 ceph-osd[90500]: log_channel(cluster) log [DBG] : 10.1c deep-scrub starts
Oct  1 09:14:19 np0005464214 ceph-osd[90500]: log_channel(cluster) log [DBG] : 10.1c deep-scrub ok
Oct  1 09:14:19 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v291: 305 pgs: 305 active+clean; 455 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:14:20 np0005464214 python3.9[114348]: ansible-ansible.builtin.file Invoked with mode=0750 path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Oct  1 09:14:20 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:14:20 np0005464214 ceph-osd[88455]: log_channel(cluster) log [DBG] : 8.18 scrub starts
Oct  1 09:14:20 np0005464214 ceph-osd[88455]: log_channel(cluster) log [DBG] : 8.18 scrub ok
Oct  1 09:14:20 np0005464214 ceph-osd[90500]: log_channel(cluster) log [DBG] : 10.1d scrub starts
Oct  1 09:14:20 np0005464214 ceph-osd[90500]: log_channel(cluster) log [DBG] : 10.1d scrub ok
Oct  1 09:14:21 np0005464214 python3.9[114498]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  1 09:14:21 np0005464214 ceph-osd[88455]: log_channel(cluster) log [DBG] : 8.14 scrub starts
Oct  1 09:14:21 np0005464214 ceph-osd[88455]: log_channel(cluster) log [DBG] : 8.14 scrub ok
Oct  1 09:14:21 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v292: 305 pgs: 305 active+clean; 455 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:14:22 np0005464214 python3.9[114652]: ansible-ansible.legacy.dnf Invoked with name=['NetworkManager-ovs'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct  1 09:14:23 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v293: 305 pgs: 305 active+clean; 455 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:14:24 np0005464214 python3.9[114805]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct  1 09:14:24 np0005464214 ceph-osd[90500]: log_channel(cluster) log [DBG] : 10.1f scrub starts
Oct  1 09:14:24 np0005464214 ceph-osd[90500]: log_channel(cluster) log [DBG] : 10.1f scrub ok
Oct  1 09:14:25 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:14:25 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v294: 305 pgs: 305 active+clean; 455 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:14:26 np0005464214 python3.9[114958]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  1 09:14:27 np0005464214 python3.9[115112]: ansible-ansible.builtin.slurp Invoked with path=/var/lib/edpm-config/os-net-config.returncode src=/var/lib/edpm-config/os-net-config.returncode
Oct  1 09:14:27 np0005464214 ceph-osd[90500]: log_channel(cluster) log [DBG] : 8.15 scrub starts
Oct  1 09:14:27 np0005464214 ceph-osd[90500]: log_channel(cluster) log [DBG] : 8.15 scrub ok
Oct  1 09:14:27 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v295: 305 pgs: 305 active+clean; 455 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:14:28 np0005464214 systemd[1]: session-36.scope: Deactivated successfully.
Oct  1 09:14:28 np0005464214 systemd[1]: session-36.scope: Consumed 18.327s CPU time.
Oct  1 09:14:28 np0005464214 systemd-logind[818]: Session 36 logged out. Waiting for processes to exit.
Oct  1 09:14:28 np0005464214 systemd-logind[818]: Removed session 36.
Oct  1 09:14:28 np0005464214 ceph-osd[88455]: log_channel(cluster) log [DBG] : 8.1f scrub starts
Oct  1 09:14:28 np0005464214 ceph-osd[88455]: log_channel(cluster) log [DBG] : 8.1f scrub ok
Oct  1 09:14:29 np0005464214 ceph-osd[90500]: log_channel(cluster) log [DBG] : 11.15 scrub starts
Oct  1 09:14:29 np0005464214 ceph-osd[90500]: log_channel(cluster) log [DBG] : 11.15 scrub ok
Oct  1 09:14:29 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v296: 305 pgs: 305 active+clean; 455 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:14:30 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:14:30 np0005464214 ceph-osd[90500]: log_channel(cluster) log [DBG] : 11.2 scrub starts
Oct  1 09:14:30 np0005464214 ceph-osd[90500]: log_channel(cluster) log [DBG] : 11.2 scrub ok
Oct  1 09:14:31 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v297: 305 pgs: 305 active+clean; 455 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:14:32 np0005464214 ceph-osd[88455]: log_channel(cluster) log [DBG] : 8.1d scrub starts
Oct  1 09:14:32 np0005464214 ceph-osd[88455]: log_channel(cluster) log [DBG] : 8.1d scrub ok
Oct  1 09:14:33 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  1 09:14:33 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  1 09:14:33 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  1 09:14:33 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  1 09:14:33 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  1 09:14:33 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:14:33 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev 8aa69581-f101-4483-aee0-e8fbea0d07da does not exist
Oct  1 09:14:33 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev 988c2010-35fd-430e-ac0b-3af7c987b839 does not exist
Oct  1 09:14:33 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev 597060b1-5161-4bdb-8fa2-5cd01aaa083d does not exist
Oct  1 09:14:33 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  1 09:14:33 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  1 09:14:33 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  1 09:14:33 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  1 09:14:33 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  1 09:14:33 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  1 09:14:33 np0005464214 systemd-logind[818]: New session 37 of user zuul.
Oct  1 09:14:33 np0005464214 systemd[1]: Started Session 37 of User zuul.
Oct  1 09:14:33 np0005464214 ceph-osd[88455]: log_channel(cluster) log [DBG] : 11.19 scrub starts
Oct  1 09:14:33 np0005464214 ceph-osd[88455]: log_channel(cluster) log [DBG] : 11.19 scrub ok
Oct  1 09:14:33 np0005464214 podman[115465]: 2025-10-01 13:14:33.66640896 +0000 UTC m=+0.035956833 container create 0c3a77a8037a8efb55e2bc4f651a568711d3ae0260af6b1b3016274a395d1947 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_agnesi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Oct  1 09:14:33 np0005464214 systemd[1]: Started libpod-conmon-0c3a77a8037a8efb55e2bc4f651a568711d3ae0260af6b1b3016274a395d1947.scope.
Oct  1 09:14:33 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:14:33 np0005464214 podman[115465]: 2025-10-01 13:14:33.745928885 +0000 UTC m=+0.115476808 container init 0c3a77a8037a8efb55e2bc4f651a568711d3ae0260af6b1b3016274a395d1947 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_agnesi, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct  1 09:14:33 np0005464214 podman[115465]: 2025-10-01 13:14:33.649922616 +0000 UTC m=+0.019470480 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 09:14:33 np0005464214 podman[115465]: 2025-10-01 13:14:33.757462101 +0000 UTC m=+0.127009964 container start 0c3a77a8037a8efb55e2bc4f651a568711d3ae0260af6b1b3016274a395d1947 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_agnesi, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Oct  1 09:14:33 np0005464214 podman[115465]: 2025-10-01 13:14:33.76246904 +0000 UTC m=+0.132016883 container attach 0c3a77a8037a8efb55e2bc4f651a568711d3ae0260af6b1b3016274a395d1947 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_agnesi, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 09:14:33 np0005464214 admiring_agnesi[115481]: 167 167
Oct  1 09:14:33 np0005464214 systemd[1]: libpod-0c3a77a8037a8efb55e2bc4f651a568711d3ae0260af6b1b3016274a395d1947.scope: Deactivated successfully.
Oct  1 09:14:33 np0005464214 podman[115465]: 2025-10-01 13:14:33.765166886 +0000 UTC m=+0.134714749 container died 0c3a77a8037a8efb55e2bc4f651a568711d3ae0260af6b1b3016274a395d1947 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_agnesi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct  1 09:14:33 np0005464214 systemd[1]: var-lib-containers-storage-overlay-bbf95c5d159267cef43c843a0a1856ee26daeb336e7b7de6abedd35694fb8bbb-merged.mount: Deactivated successfully.
Oct  1 09:14:33 np0005464214 podman[115465]: 2025-10-01 13:14:33.816467885 +0000 UTC m=+0.186015728 container remove 0c3a77a8037a8efb55e2bc4f651a568711d3ae0260af6b1b3016274a395d1947 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_agnesi, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 09:14:33 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v298: 305 pgs: 305 active+clean; 455 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:14:33 np0005464214 systemd[1]: libpod-conmon-0c3a77a8037a8efb55e2bc4f651a568711d3ae0260af6b1b3016274a395d1947.scope: Deactivated successfully.
Oct  1 09:14:33 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  1 09:14:33 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:14:33 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  1 09:14:33 np0005464214 podman[115553]: 2025-10-01 13:14:33.959555838 +0000 UTC m=+0.039190835 container create 88671fef40b740b68e9a5e807be8d9a33083aea25545eea9455280cd64cead36 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_gauss, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Oct  1 09:14:33 np0005464214 systemd[1]: Started libpod-conmon-88671fef40b740b68e9a5e807be8d9a33083aea25545eea9455280cd64cead36.scope.
Oct  1 09:14:34 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:14:34 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2449bfb5c526b783882345d567395236bd4372be72a07bfa9677173432f51d62/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 09:14:34 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2449bfb5c526b783882345d567395236bd4372be72a07bfa9677173432f51d62/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 09:14:34 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2449bfb5c526b783882345d567395236bd4372be72a07bfa9677173432f51d62/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 09:14:34 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2449bfb5c526b783882345d567395236bd4372be72a07bfa9677173432f51d62/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 09:14:34 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2449bfb5c526b783882345d567395236bd4372be72a07bfa9677173432f51d62/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  1 09:14:34 np0005464214 podman[115553]: 2025-10-01 13:14:33.942771705 +0000 UTC m=+0.022406742 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 09:14:34 np0005464214 podman[115553]: 2025-10-01 13:14:34.043656119 +0000 UTC m=+0.123291146 container init 88671fef40b740b68e9a5e807be8d9a33083aea25545eea9455280cd64cead36 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_gauss, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True)
Oct  1 09:14:34 np0005464214 podman[115553]: 2025-10-01 13:14:34.049782432 +0000 UTC m=+0.129417439 container start 88671fef40b740b68e9a5e807be8d9a33083aea25545eea9455280cd64cead36 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_gauss, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  1 09:14:34 np0005464214 podman[115553]: 2025-10-01 13:14:34.053829801 +0000 UTC m=+0.133464858 container attach 88671fef40b740b68e9a5e807be8d9a33083aea25545eea9455280cd64cead36 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_gauss, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Oct  1 09:14:34 np0005464214 python3.9[115622]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  1 09:14:34 np0005464214 ceph-osd[90500]: log_channel(cluster) log [DBG] : 11.3 scrub starts
Oct  1 09:14:34 np0005464214 ceph-osd[90500]: log_channel(cluster) log [DBG] : 11.3 scrub ok
Oct  1 09:14:35 np0005464214 adoring_gauss[115617]: --> passed data devices: 0 physical, 3 LVM
Oct  1 09:14:35 np0005464214 adoring_gauss[115617]: --> relative data size: 1.0
Oct  1 09:14:35 np0005464214 adoring_gauss[115617]: --> All data devices are unavailable
Oct  1 09:14:35 np0005464214 systemd[1]: libpod-88671fef40b740b68e9a5e807be8d9a33083aea25545eea9455280cd64cead36.scope: Deactivated successfully.
Oct  1 09:14:35 np0005464214 podman[115553]: 2025-10-01 13:14:35.040904172 +0000 UTC m=+1.120539249 container died 88671fef40b740b68e9a5e807be8d9a33083aea25545eea9455280cd64cead36 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_gauss, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 09:14:35 np0005464214 systemd[1]: var-lib-containers-storage-overlay-2449bfb5c526b783882345d567395236bd4372be72a07bfa9677173432f51d62-merged.mount: Deactivated successfully.
Oct  1 09:14:35 np0005464214 podman[115553]: 2025-10-01 13:14:35.100931218 +0000 UTC m=+1.180566225 container remove 88671fef40b740b68e9a5e807be8d9a33083aea25545eea9455280cd64cead36 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_gauss, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Oct  1 09:14:35 np0005464214 systemd[1]: libpod-conmon-88671fef40b740b68e9a5e807be8d9a33083aea25545eea9455280cd64cead36.scope: Deactivated successfully.
Oct  1 09:14:35 np0005464214 python3.9[115794]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct  1 09:14:35 np0005464214 ceph-osd[88455]: log_channel(cluster) log [DBG] : 11.17 scrub starts
Oct  1 09:14:35 np0005464214 ceph-osd[88455]: log_channel(cluster) log [DBG] : 11.17 scrub ok
Oct  1 09:14:35 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:14:35 np0005464214 podman[116022]: 2025-10-01 13:14:35.665135892 +0000 UTC m=+0.039150634 container create 59c04ec23d97fa477554adf95ee7168fb3dd981115db0e5578d88b46439a7e32 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_raman, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Oct  1 09:14:35 np0005464214 systemd[1]: Started libpod-conmon-59c04ec23d97fa477554adf95ee7168fb3dd981115db0e5578d88b46439a7e32.scope.
Oct  1 09:14:35 np0005464214 podman[116022]: 2025-10-01 13:14:35.649684892 +0000 UTC m=+0.023699634 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 09:14:35 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:14:35 np0005464214 podman[116022]: 2025-10-01 13:14:35.765381155 +0000 UTC m=+0.139395897 container init 59c04ec23d97fa477554adf95ee7168fb3dd981115db0e5578d88b46439a7e32 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_raman, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Oct  1 09:14:35 np0005464214 podman[116022]: 2025-10-01 13:14:35.771654144 +0000 UTC m=+0.145668866 container start 59c04ec23d97fa477554adf95ee7168fb3dd981115db0e5578d88b46439a7e32 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_raman, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 09:14:35 np0005464214 podman[116022]: 2025-10-01 13:14:35.774444882 +0000 UTC m=+0.148459604 container attach 59c04ec23d97fa477554adf95ee7168fb3dd981115db0e5578d88b46439a7e32 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_raman, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Oct  1 09:14:35 np0005464214 nervous_raman[116069]: 167 167
Oct  1 09:14:35 np0005464214 systemd[1]: libpod-59c04ec23d97fa477554adf95ee7168fb3dd981115db0e5578d88b46439a7e32.scope: Deactivated successfully.
Oct  1 09:14:35 np0005464214 podman[116022]: 2025-10-01 13:14:35.776953653 +0000 UTC m=+0.150968395 container died 59c04ec23d97fa477554adf95ee7168fb3dd981115db0e5578d88b46439a7e32 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_raman, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Oct  1 09:14:35 np0005464214 systemd[1]: var-lib-containers-storage-overlay-8ef5d0798d43233df98a4c6a9c1b9d2b444a72fd21c540f90a264ba178719af9-merged.mount: Deactivated successfully.
Oct  1 09:14:35 np0005464214 podman[116022]: 2025-10-01 13:14:35.810719914 +0000 UTC m=+0.184734646 container remove 59c04ec23d97fa477554adf95ee7168fb3dd981115db0e5578d88b46439a7e32 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_raman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef)
Oct  1 09:14:35 np0005464214 systemd[1]: libpod-conmon-59c04ec23d97fa477554adf95ee7168fb3dd981115db0e5578d88b46439a7e32.scope: Deactivated successfully.
Oct  1 09:14:35 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v299: 305 pgs: 305 active+clean; 455 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:14:35 np0005464214 podman[116113]: 2025-10-01 13:14:35.962211424 +0000 UTC m=+0.039163784 container create ecace3978c41adf61aa82d62bd85b6a9ea0646ef0df572dd5c47c4d2fc8553eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_montalcini, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 09:14:35 np0005464214 systemd[1]: Started libpod-conmon-ecace3978c41adf61aa82d62bd85b6a9ea0646ef0df572dd5c47c4d2fc8553eb.scope.
Oct  1 09:14:36 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:14:36 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4a4fbe1fdd42cd0ea981872f26929f7a1c519f8141649be9f8035a48120a99e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 09:14:36 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4a4fbe1fdd42cd0ea981872f26929f7a1c519f8141649be9f8035a48120a99e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 09:14:36 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4a4fbe1fdd42cd0ea981872f26929f7a1c519f8141649be9f8035a48120a99e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 09:14:36 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4a4fbe1fdd42cd0ea981872f26929f7a1c519f8141649be9f8035a48120a99e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 09:14:36 np0005464214 podman[116113]: 2025-10-01 13:14:35.947688584 +0000 UTC m=+0.024640964 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 09:14:36 np0005464214 podman[116113]: 2025-10-01 13:14:36.044801037 +0000 UTC m=+0.121753417 container init ecace3978c41adf61aa82d62bd85b6a9ea0646ef0df572dd5c47c4d2fc8553eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_montalcini, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct  1 09:14:36 np0005464214 podman[116113]: 2025-10-01 13:14:36.053923897 +0000 UTC m=+0.130876257 container start ecace3978c41adf61aa82d62bd85b6a9ea0646ef0df572dd5c47c4d2fc8553eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_montalcini, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct  1 09:14:36 np0005464214 podman[116113]: 2025-10-01 13:14:36.05873762 +0000 UTC m=+0.135689980 container attach ecace3978c41adf61aa82d62bd85b6a9ea0646ef0df572dd5c47c4d2fc8553eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_montalcini, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0)
Oct  1 09:14:36 np0005464214 python3.9[116208]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  1 09:14:36 np0005464214 ceph-osd[90500]: log_channel(cluster) log [DBG] : 11.d scrub starts
Oct  1 09:14:36 np0005464214 ceph-osd[90500]: log_channel(cluster) log [DBG] : 11.d scrub ok
Oct  1 09:14:36 np0005464214 systemd[1]: session-37.scope: Deactivated successfully.
Oct  1 09:14:36 np0005464214 systemd[1]: session-37.scope: Consumed 2.146s CPU time.
Oct  1 09:14:36 np0005464214 systemd-logind[818]: Session 37 logged out. Waiting for processes to exit.
Oct  1 09:14:36 np0005464214 systemd-logind[818]: Removed session 37.
Oct  1 09:14:36 np0005464214 hungry_montalcini[116158]: {
Oct  1 09:14:36 np0005464214 hungry_montalcini[116158]:    "0": [
Oct  1 09:14:36 np0005464214 hungry_montalcini[116158]:        {
Oct  1 09:14:36 np0005464214 hungry_montalcini[116158]:            "devices": [
Oct  1 09:14:36 np0005464214 hungry_montalcini[116158]:                "/dev/loop3"
Oct  1 09:14:36 np0005464214 hungry_montalcini[116158]:            ],
Oct  1 09:14:36 np0005464214 hungry_montalcini[116158]:            "lv_name": "ceph_lv0",
Oct  1 09:14:36 np0005464214 hungry_montalcini[116158]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  1 09:14:36 np0005464214 hungry_montalcini[116158]:            "lv_size": "21470642176",
Oct  1 09:14:36 np0005464214 hungry_montalcini[116158]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 09:14:36 np0005464214 hungry_montalcini[116158]:            "lv_uuid": "wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS",
Oct  1 09:14:36 np0005464214 hungry_montalcini[116158]:            "name": "ceph_lv0",
Oct  1 09:14:36 np0005464214 hungry_montalcini[116158]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  1 09:14:36 np0005464214 hungry_montalcini[116158]:            "tags": {
Oct  1 09:14:36 np0005464214 hungry_montalcini[116158]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  1 09:14:36 np0005464214 hungry_montalcini[116158]:                "ceph.block_uuid": "wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS",
Oct  1 09:14:36 np0005464214 hungry_montalcini[116158]:                "ceph.cephx_lockbox_secret": "",
Oct  1 09:14:36 np0005464214 hungry_montalcini[116158]:                "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 09:14:36 np0005464214 hungry_montalcini[116158]:                "ceph.cluster_name": "ceph",
Oct  1 09:14:36 np0005464214 hungry_montalcini[116158]:                "ceph.crush_device_class": "",
Oct  1 09:14:36 np0005464214 hungry_montalcini[116158]:                "ceph.encrypted": "0",
Oct  1 09:14:36 np0005464214 hungry_montalcini[116158]:                "ceph.osd_fsid": "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982",
Oct  1 09:14:36 np0005464214 hungry_montalcini[116158]:                "ceph.osd_id": "0",
Oct  1 09:14:36 np0005464214 hungry_montalcini[116158]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 09:14:36 np0005464214 hungry_montalcini[116158]:                "ceph.type": "block",
Oct  1 09:14:36 np0005464214 hungry_montalcini[116158]:                "ceph.vdo": "0"
Oct  1 09:14:36 np0005464214 hungry_montalcini[116158]:            },
Oct  1 09:14:36 np0005464214 hungry_montalcini[116158]:            "type": "block",
Oct  1 09:14:36 np0005464214 hungry_montalcini[116158]:            "vg_name": "ceph_vg0"
Oct  1 09:14:36 np0005464214 hungry_montalcini[116158]:        }
Oct  1 09:14:36 np0005464214 hungry_montalcini[116158]:    ],
Oct  1 09:14:36 np0005464214 hungry_montalcini[116158]:    "1": [
Oct  1 09:14:36 np0005464214 hungry_montalcini[116158]:        {
Oct  1 09:14:36 np0005464214 hungry_montalcini[116158]:            "devices": [
Oct  1 09:14:36 np0005464214 hungry_montalcini[116158]:                "/dev/loop4"
Oct  1 09:14:36 np0005464214 hungry_montalcini[116158]:            ],
Oct  1 09:14:36 np0005464214 hungry_montalcini[116158]:            "lv_name": "ceph_lv1",
Oct  1 09:14:36 np0005464214 hungry_montalcini[116158]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  1 09:14:36 np0005464214 hungry_montalcini[116158]:            "lv_size": "21470642176",
Oct  1 09:14:36 np0005464214 hungry_montalcini[116158]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f5852bc7-e830-489a-b8a9-42dfbbe71426,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 09:14:36 np0005464214 hungry_montalcini[116158]:            "lv_uuid": "BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY",
Oct  1 09:14:36 np0005464214 hungry_montalcini[116158]:            "name": "ceph_lv1",
Oct  1 09:14:36 np0005464214 hungry_montalcini[116158]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  1 09:14:36 np0005464214 hungry_montalcini[116158]:            "tags": {
Oct  1 09:14:36 np0005464214 hungry_montalcini[116158]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  1 09:14:36 np0005464214 hungry_montalcini[116158]:                "ceph.block_uuid": "BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY",
Oct  1 09:14:36 np0005464214 hungry_montalcini[116158]:                "ceph.cephx_lockbox_secret": "",
Oct  1 09:14:36 np0005464214 hungry_montalcini[116158]:                "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 09:14:36 np0005464214 hungry_montalcini[116158]:                "ceph.cluster_name": "ceph",
Oct  1 09:14:36 np0005464214 hungry_montalcini[116158]:                "ceph.crush_device_class": "",
Oct  1 09:14:36 np0005464214 hungry_montalcini[116158]:                "ceph.encrypted": "0",
Oct  1 09:14:36 np0005464214 hungry_montalcini[116158]:                "ceph.osd_fsid": "f5852bc7-e830-489a-b8a9-42dfbbe71426",
Oct  1 09:14:36 np0005464214 hungry_montalcini[116158]:                "ceph.osd_id": "1",
Oct  1 09:14:36 np0005464214 hungry_montalcini[116158]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 09:14:36 np0005464214 hungry_montalcini[116158]:                "ceph.type": "block",
Oct  1 09:14:36 np0005464214 hungry_montalcini[116158]:                "ceph.vdo": "0"
Oct  1 09:14:36 np0005464214 hungry_montalcini[116158]:            },
Oct  1 09:14:36 np0005464214 hungry_montalcini[116158]:            "type": "block",
Oct  1 09:14:36 np0005464214 hungry_montalcini[116158]:            "vg_name": "ceph_vg1"
Oct  1 09:14:36 np0005464214 hungry_montalcini[116158]:        }
Oct  1 09:14:36 np0005464214 hungry_montalcini[116158]:    ],
Oct  1 09:14:36 np0005464214 hungry_montalcini[116158]:    "2": [
Oct  1 09:14:36 np0005464214 hungry_montalcini[116158]:        {
Oct  1 09:14:36 np0005464214 hungry_montalcini[116158]:            "devices": [
Oct  1 09:14:36 np0005464214 hungry_montalcini[116158]:                "/dev/loop5"
Oct  1 09:14:36 np0005464214 hungry_montalcini[116158]:            ],
Oct  1 09:14:36 np0005464214 hungry_montalcini[116158]:            "lv_name": "ceph_lv2",
Oct  1 09:14:36 np0005464214 hungry_montalcini[116158]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  1 09:14:36 np0005464214 hungry_montalcini[116158]:            "lv_size": "21470642176",
Oct  1 09:14:36 np0005464214 hungry_montalcini[116158]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c4c937e2-a8a8-47c3-af37-fdedb6fff1f9,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 09:14:36 np0005464214 hungry_montalcini[116158]:            "lv_uuid": "mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK",
Oct  1 09:14:36 np0005464214 hungry_montalcini[116158]:            "name": "ceph_lv2",
Oct  1 09:14:36 np0005464214 hungry_montalcini[116158]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  1 09:14:36 np0005464214 hungry_montalcini[116158]:            "tags": {
Oct  1 09:14:36 np0005464214 hungry_montalcini[116158]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  1 09:14:36 np0005464214 hungry_montalcini[116158]:                "ceph.block_uuid": "mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK",
Oct  1 09:14:36 np0005464214 hungry_montalcini[116158]:                "ceph.cephx_lockbox_secret": "",
Oct  1 09:14:36 np0005464214 hungry_montalcini[116158]:                "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 09:14:36 np0005464214 hungry_montalcini[116158]:                "ceph.cluster_name": "ceph",
Oct  1 09:14:36 np0005464214 hungry_montalcini[116158]:                "ceph.crush_device_class": "",
Oct  1 09:14:36 np0005464214 hungry_montalcini[116158]:                "ceph.encrypted": "0",
Oct  1 09:14:36 np0005464214 hungry_montalcini[116158]:                "ceph.osd_fsid": "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9",
Oct  1 09:14:36 np0005464214 hungry_montalcini[116158]:                "ceph.osd_id": "2",
Oct  1 09:14:36 np0005464214 hungry_montalcini[116158]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 09:14:36 np0005464214 hungry_montalcini[116158]:                "ceph.type": "block",
Oct  1 09:14:36 np0005464214 hungry_montalcini[116158]:                "ceph.vdo": "0"
Oct  1 09:14:36 np0005464214 hungry_montalcini[116158]:            },
Oct  1 09:14:36 np0005464214 hungry_montalcini[116158]:            "type": "block",
Oct  1 09:14:36 np0005464214 hungry_montalcini[116158]:            "vg_name": "ceph_vg2"
Oct  1 09:14:36 np0005464214 hungry_montalcini[116158]:        }
Oct  1 09:14:36 np0005464214 hungry_montalcini[116158]:    ]
Oct  1 09:14:36 np0005464214 hungry_montalcini[116158]: }
Oct  1 09:14:36 np0005464214 systemd[1]: libpod-ecace3978c41adf61aa82d62bd85b6a9ea0646ef0df572dd5c47c4d2fc8553eb.scope: Deactivated successfully.
Oct  1 09:14:36 np0005464214 podman[116113]: 2025-10-01 13:14:36.81524364 +0000 UTC m=+0.892196000 container died ecace3978c41adf61aa82d62bd85b6a9ea0646ef0df572dd5c47c4d2fc8553eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_montalcini, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Oct  1 09:14:36 np0005464214 systemd[1]: var-lib-containers-storage-overlay-d4a4fbe1fdd42cd0ea981872f26929f7a1c519f8141649be9f8035a48120a99e-merged.mount: Deactivated successfully.
Oct  1 09:14:36 np0005464214 podman[116113]: 2025-10-01 13:14:36.863641836 +0000 UTC m=+0.940594196 container remove ecace3978c41adf61aa82d62bd85b6a9ea0646ef0df572dd5c47c4d2fc8553eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_montalcini, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct  1 09:14:36 np0005464214 systemd[1]: libpod-conmon-ecace3978c41adf61aa82d62bd85b6a9ea0646ef0df572dd5c47c4d2fc8553eb.scope: Deactivated successfully.
Oct  1 09:14:37 np0005464214 ceph-osd[88455]: log_channel(cluster) log [DBG] : 11.10 scrub starts
Oct  1 09:14:37 np0005464214 podman[116392]: 2025-10-01 13:14:37.464936439 +0000 UTC m=+0.044797744 container create 5d9ce3c1c25b9c6a83dd4e7775bbeb391d3dcc9e6165bd121e49fef1991a3a12 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_driscoll, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 09:14:37 np0005464214 ceph-osd[88455]: log_channel(cluster) log [DBG] : 11.10 scrub ok
Oct  1 09:14:37 np0005464214 systemd[1]: Started libpod-conmon-5d9ce3c1c25b9c6a83dd4e7775bbeb391d3dcc9e6165bd121e49fef1991a3a12.scope.
Oct  1 09:14:37 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:14:37 np0005464214 podman[116392]: 2025-10-01 13:14:37.537470161 +0000 UTC m=+0.117331456 container init 5d9ce3c1c25b9c6a83dd4e7775bbeb391d3dcc9e6165bd121e49fef1991a3a12 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_driscoll, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 09:14:37 np0005464214 podman[116392]: 2025-10-01 13:14:37.445827721 +0000 UTC m=+0.025689046 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 09:14:37 np0005464214 podman[116392]: 2025-10-01 13:14:37.544725261 +0000 UTC m=+0.124586556 container start 5d9ce3c1c25b9c6a83dd4e7775bbeb391d3dcc9e6165bd121e49fef1991a3a12 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_driscoll, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct  1 09:14:37 np0005464214 podman[116392]: 2025-10-01 13:14:37.54752712 +0000 UTC m=+0.127388415 container attach 5d9ce3c1c25b9c6a83dd4e7775bbeb391d3dcc9e6165bd121e49fef1991a3a12 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_driscoll, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct  1 09:14:37 np0005464214 loving_driscoll[116408]: 167 167
Oct  1 09:14:37 np0005464214 podman[116392]: 2025-10-01 13:14:37.550468014 +0000 UTC m=+0.130329319 container died 5d9ce3c1c25b9c6a83dd4e7775bbeb391d3dcc9e6165bd121e49fef1991a3a12 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_driscoll, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Oct  1 09:14:37 np0005464214 systemd[1]: libpod-5d9ce3c1c25b9c6a83dd4e7775bbeb391d3dcc9e6165bd121e49fef1991a3a12.scope: Deactivated successfully.
Oct  1 09:14:37 np0005464214 systemd[1]: var-lib-containers-storage-overlay-0df969a32de8bca6f0b78f5d2c9440c21588008b987a4e8931d053b108445618-merged.mount: Deactivated successfully.
Oct  1 09:14:37 np0005464214 podman[116392]: 2025-10-01 13:14:37.588722918 +0000 UTC m=+0.168584213 container remove 5d9ce3c1c25b9c6a83dd4e7775bbeb391d3dcc9e6165bd121e49fef1991a3a12 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_driscoll, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True)
Oct  1 09:14:37 np0005464214 systemd[1]: libpod-conmon-5d9ce3c1c25b9c6a83dd4e7775bbeb391d3dcc9e6165bd121e49fef1991a3a12.scope: Deactivated successfully.
Oct  1 09:14:37 np0005464214 podman[116431]: 2025-10-01 13:14:37.740755126 +0000 UTC m=+0.041473758 container create 9b40569537c2b85882d890cfaec39382282cbaa7939aef1ef842051a0293b699 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_turing, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct  1 09:14:37 np0005464214 systemd[1]: Started libpod-conmon-9b40569537c2b85882d890cfaec39382282cbaa7939aef1ef842051a0293b699.scope.
Oct  1 09:14:37 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:14:37 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/34fca52c3636036c4822e4c71532764a78141f78d395dc5919a6661e0f85555f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 09:14:37 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/34fca52c3636036c4822e4c71532764a78141f78d395dc5919a6661e0f85555f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 09:14:37 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/34fca52c3636036c4822e4c71532764a78141f78d395dc5919a6661e0f85555f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 09:14:37 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/34fca52c3636036c4822e4c71532764a78141f78d395dc5919a6661e0f85555f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 09:14:37 np0005464214 podman[116431]: 2025-10-01 13:14:37.725157201 +0000 UTC m=+0.025875853 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 09:14:37 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v300: 305 pgs: 305 active+clean; 455 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:14:37 np0005464214 podman[116431]: 2025-10-01 13:14:37.827022965 +0000 UTC m=+0.127741607 container init 9b40569537c2b85882d890cfaec39382282cbaa7939aef1ef842051a0293b699 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_turing, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 09:14:37 np0005464214 podman[116431]: 2025-10-01 13:14:37.839243123 +0000 UTC m=+0.139961755 container start 9b40569537c2b85882d890cfaec39382282cbaa7939aef1ef842051a0293b699 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_turing, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 09:14:37 np0005464214 podman[116431]: 2025-10-01 13:14:37.843438006 +0000 UTC m=+0.144156638 container attach 9b40569537c2b85882d890cfaec39382282cbaa7939aef1ef842051a0293b699 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_turing, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 09:14:38 np0005464214 ceph-osd[88455]: log_channel(cluster) log [DBG] : 8.1a scrub starts
Oct  1 09:14:38 np0005464214 ceph-osd[88455]: log_channel(cluster) log [DBG] : 8.1a scrub ok
Oct  1 09:14:38 np0005464214 ceph-osd[89484]: log_channel(cluster) log [DBG] : 9.1a scrub starts
Oct  1 09:14:38 np0005464214 ceph-osd[89484]: log_channel(cluster) log [DBG] : 9.1a scrub ok
Oct  1 09:14:38 np0005464214 suspicious_turing[116448]: {
Oct  1 09:14:38 np0005464214 suspicious_turing[116448]:    "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982": {
Oct  1 09:14:38 np0005464214 suspicious_turing[116448]:        "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 09:14:38 np0005464214 suspicious_turing[116448]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  1 09:14:38 np0005464214 suspicious_turing[116448]:        "osd_id": 0,
Oct  1 09:14:38 np0005464214 suspicious_turing[116448]:        "osd_uuid": "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982",
Oct  1 09:14:38 np0005464214 suspicious_turing[116448]:        "type": "bluestore"
Oct  1 09:14:38 np0005464214 suspicious_turing[116448]:    },
Oct  1 09:14:38 np0005464214 suspicious_turing[116448]:    "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9": {
Oct  1 09:14:38 np0005464214 suspicious_turing[116448]:        "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 09:14:38 np0005464214 suspicious_turing[116448]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  1 09:14:38 np0005464214 suspicious_turing[116448]:        "osd_id": 2,
Oct  1 09:14:38 np0005464214 suspicious_turing[116448]:        "osd_uuid": "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9",
Oct  1 09:14:38 np0005464214 suspicious_turing[116448]:        "type": "bluestore"
Oct  1 09:14:38 np0005464214 suspicious_turing[116448]:    },
Oct  1 09:14:38 np0005464214 suspicious_turing[116448]:    "f5852bc7-e830-489a-b8a9-42dfbbe71426": {
Oct  1 09:14:38 np0005464214 suspicious_turing[116448]:        "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 09:14:38 np0005464214 suspicious_turing[116448]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  1 09:14:38 np0005464214 suspicious_turing[116448]:        "osd_id": 1,
Oct  1 09:14:38 np0005464214 suspicious_turing[116448]:        "osd_uuid": "f5852bc7-e830-489a-b8a9-42dfbbe71426",
Oct  1 09:14:38 np0005464214 suspicious_turing[116448]:        "type": "bluestore"
Oct  1 09:14:38 np0005464214 suspicious_turing[116448]:    }
Oct  1 09:14:38 np0005464214 suspicious_turing[116448]: }
Oct  1 09:14:38 np0005464214 systemd[1]: libpod-9b40569537c2b85882d890cfaec39382282cbaa7939aef1ef842051a0293b699.scope: Deactivated successfully.
Oct  1 09:14:38 np0005464214 podman[116431]: 2025-10-01 13:14:38.803308283 +0000 UTC m=+1.104026935 container died 9b40569537c2b85882d890cfaec39382282cbaa7939aef1ef842051a0293b699 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_turing, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct  1 09:14:38 np0005464214 systemd[1]: var-lib-containers-storage-overlay-34fca52c3636036c4822e4c71532764a78141f78d395dc5919a6661e0f85555f-merged.mount: Deactivated successfully.
Oct  1 09:14:38 np0005464214 podman[116431]: 2025-10-01 13:14:38.862981267 +0000 UTC m=+1.163699899 container remove 9b40569537c2b85882d890cfaec39382282cbaa7939aef1ef842051a0293b699 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_turing, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 09:14:38 np0005464214 systemd[1]: libpod-conmon-9b40569537c2b85882d890cfaec39382282cbaa7939aef1ef842051a0293b699.scope: Deactivated successfully.
Oct  1 09:14:38 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  1 09:14:38 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:14:38 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  1 09:14:38 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:14:38 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev 0f1c482c-fbd5-4335-b52e-d599b19cf618 does not exist
Oct  1 09:14:38 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev fcce11b1-452a-4827-97ce-f1d06d3c5db5 does not exist
Oct  1 09:14:39 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v301: 305 pgs: 305 active+clean; 455 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:14:39 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:14:39 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:14:40 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:14:40 np0005464214 ceph-osd[89484]: log_channel(cluster) log [DBG] : 8.1e scrub starts
Oct  1 09:14:40 np0005464214 ceph-osd[89484]: log_channel(cluster) log [DBG] : 8.1e scrub ok
Oct  1 09:14:41 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v302: 305 pgs: 305 active+clean; 455 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:14:42 np0005464214 systemd-logind[818]: New session 38 of user zuul.
Oct  1 09:14:42 np0005464214 systemd[1]: Started Session 38 of User zuul.
Oct  1 09:14:42 np0005464214 ceph-osd[90500]: log_channel(cluster) log [DBG] : 8.2 scrub starts
Oct  1 09:14:42 np0005464214 ceph-osd[90500]: log_channel(cluster) log [DBG] : 8.2 scrub ok
Oct  1 09:14:43 np0005464214 python3.9[116699]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  1 09:14:43 np0005464214 ceph-osd[88455]: log_channel(cluster) log [DBG] : 10.17 scrub starts
Oct  1 09:14:43 np0005464214 ceph-osd[88455]: log_channel(cluster) log [DBG] : 10.17 scrub ok
Oct  1 09:14:43 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v303: 305 pgs: 305 active+clean; 455 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:14:44 np0005464214 python3.9[116853]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  1 09:14:44 np0005464214 ceph-osd[88455]: log_channel(cluster) log [DBG] : 10.15 scrub starts
Oct  1 09:14:44 np0005464214 ceph-osd[88455]: log_channel(cluster) log [DBG] : 10.15 scrub ok
Oct  1 09:14:44 np0005464214 ceph-osd[89484]: log_channel(cluster) log [DBG] : 11.5 scrub starts
Oct  1 09:14:44 np0005464214 ceph-osd[89484]: log_channel(cluster) log [DBG] : 11.5 scrub ok
Oct  1 09:14:44 np0005464214 ceph-osd[90500]: log_channel(cluster) log [DBG] : 8.d deep-scrub starts
Oct  1 09:14:44 np0005464214 ceph-osd[90500]: log_channel(cluster) log [DBG] : 8.d deep-scrub ok
Oct  1 09:14:45 np0005464214 python3.9[117009]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct  1 09:14:45 np0005464214 ceph-osd[88455]: log_channel(cluster) log [DBG] : 9.9 scrub starts
Oct  1 09:14:45 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:14:45 np0005464214 ceph-osd[88455]: log_channel(cluster) log [DBG] : 9.9 scrub ok
Oct  1 09:14:45 np0005464214 ceph-osd[90500]: log_channel(cluster) log [DBG] : 8.4 deep-scrub starts
Oct  1 09:14:45 np0005464214 ceph-osd[90500]: log_channel(cluster) log [DBG] : 8.4 deep-scrub ok
Oct  1 09:14:45 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v304: 305 pgs: 305 active+clean; 455 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:14:46 np0005464214 python3.9[117093]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct  1 09:14:47 np0005464214 ceph-osd[88455]: log_channel(cluster) log [DBG] : 9.11 scrub starts
Oct  1 09:14:47 np0005464214 ceph-osd[88455]: log_channel(cluster) log [DBG] : 9.11 scrub ok
Oct  1 09:14:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] Optimize plan auto_2025-10-01_13:14:47
Oct  1 09:14:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  1 09:14:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] do_upmap
Oct  1 09:14:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'default.rgw.log', 'cephfs.cephfs.data', 'vms', 'backups', '.mgr', 'images', 'default.rgw.meta', 'volumes', 'default.rgw.control', '.rgw.root']
Oct  1 09:14:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] prepared 0/10 changes
Oct  1 09:14:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 09:14:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 09:14:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 09:14:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 09:14:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 09:14:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 09:14:47 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  1 09:14:47 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  1 09:14:47 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  1 09:14:47 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v305: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:14:47 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  1 09:14:47 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  1 09:14:47 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  1 09:14:47 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  1 09:14:47 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  1 09:14:47 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  1 09:14:47 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  1 09:14:48 np0005464214 python3.9[117246]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct  1 09:14:48 np0005464214 ceph-osd[90500]: log_channel(cluster) log [DBG] : 11.9 scrub starts
Oct  1 09:14:48 np0005464214 ceph-osd[88455]: log_channel(cluster) log [DBG] : 9.d scrub starts
Oct  1 09:14:48 np0005464214 ceph-osd[90500]: log_channel(cluster) log [DBG] : 11.9 scrub ok
Oct  1 09:14:48 np0005464214 ceph-osd[88455]: log_channel(cluster) log [DBG] : 9.d scrub ok
Oct  1 09:14:49 np0005464214 python3.9[117441]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 09:14:49 np0005464214 ceph-osd[88455]: log_channel(cluster) log [DBG] : 9.3 deep-scrub starts
Oct  1 09:14:49 np0005464214 ceph-osd[88455]: log_channel(cluster) log [DBG] : 9.3 deep-scrub ok
Oct  1 09:14:49 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v306: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:14:50 np0005464214 python3.9[117593]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  1 09:14:50 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:14:50 np0005464214 ceph-osd[89484]: log_channel(cluster) log [DBG] : 11.7 scrub starts
Oct  1 09:14:50 np0005464214 ceph-osd[89484]: log_channel(cluster) log [DBG] : 11.7 scrub ok
Oct  1 09:14:50 np0005464214 ceph-osd[90500]: log_channel(cluster) log [DBG] : 11.1b scrub starts
Oct  1 09:14:50 np0005464214 ceph-osd[90500]: log_channel(cluster) log [DBG] : 11.1b scrub ok
Oct  1 09:14:51 np0005464214 python3.9[117758]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 09:14:51 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v307: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:14:52 np0005464214 python3.9[117836]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/containers/networks/podman.json _original_basename=podman_network_config.j2 recurse=False state=file path=/etc/containers/networks/podman.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 09:14:52 np0005464214 ceph-osd[88455]: log_channel(cluster) log [DBG] : 9.1d scrub starts
Oct  1 09:14:52 np0005464214 ceph-osd[88455]: log_channel(cluster) log [DBG] : 9.1d scrub ok
Oct  1 09:14:52 np0005464214 ceph-osd[90500]: log_channel(cluster) log [DBG] : 11.1c deep-scrub starts
Oct  1 09:14:52 np0005464214 ceph-osd[90500]: log_channel(cluster) log [DBG] : 11.1c deep-scrub ok
Oct  1 09:14:52 np0005464214 python3.9[117988]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 09:14:53 np0005464214 python3.9[118066]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf _original_basename=registries.conf.j2 recurse=False state=file path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  1 09:14:53 np0005464214 ceph-osd[88455]: log_channel(cluster) log [DBG] : 9.b scrub starts
Oct  1 09:14:53 np0005464214 ceph-osd[88455]: log_channel(cluster) log [DBG] : 9.b scrub ok
Oct  1 09:14:53 np0005464214 ceph-osd[90500]: log_channel(cluster) log [DBG] : 11.8 scrub starts
Oct  1 09:14:53 np0005464214 ceph-osd[90500]: log_channel(cluster) log [DBG] : 11.8 scrub ok
Oct  1 09:14:53 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v308: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:14:54 np0005464214 python3.9[118220]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Oct  1 09:14:55 np0005464214 python3.9[118372]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Oct  1 09:14:55 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:14:55 np0005464214 ceph-osd[88455]: log_channel(cluster) log [DBG] : 9.5 deep-scrub starts
Oct  1 09:14:55 np0005464214 ceph-osd[88455]: log_channel(cluster) log [DBG] : 9.5 deep-scrub ok
Oct  1 09:14:55 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v309: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:14:55 np0005464214 python3.9[118524]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Oct  1 09:14:56 np0005464214 python3.9[118676]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Oct  1 09:14:56 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] _maybe_adjust
Oct  1 09:14:56 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:14:56 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  1 09:14:56 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:14:56 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 09:14:56 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:14:56 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 09:14:56 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:14:56 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 09:14:56 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:14:56 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 09:14:56 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:14:56 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct  1 09:14:56 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:14:56 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 09:14:56 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:14:56 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  1 09:14:56 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:14:56 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  1 09:14:56 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:14:56 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 09:14:56 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:14:56 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  1 09:14:57 np0005464214 ceph-osd[88455]: log_channel(cluster) log [DBG] : 9.1b scrub starts
Oct  1 09:14:57 np0005464214 ceph-osd[88455]: log_channel(cluster) log [DBG] : 9.1b scrub ok
Oct  1 09:14:57 np0005464214 python3.9[118828]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct  1 09:14:57 np0005464214 ceph-osd[90500]: log_channel(cluster) log [DBG] : 8.12 scrub starts
Oct  1 09:14:57 np0005464214 ceph-osd[90500]: log_channel(cluster) log [DBG] : 8.12 scrub ok
Oct  1 09:14:57 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v310: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:14:59 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v311: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:14:59 np0005464214 python3.9[118981]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  1 09:15:00 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:15:00 np0005464214 python3.9[119135]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  1 09:15:01 np0005464214 ceph-osd[89484]: log_channel(cluster) log [DBG] : 11.a scrub starts
Oct  1 09:15:01 np0005464214 ceph-osd[89484]: log_channel(cluster) log [DBG] : 11.a scrub ok
Oct  1 09:15:01 np0005464214 python3.9[119287]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  1 09:15:01 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v312: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:15:01 np0005464214 ceph-mon[74802]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #18. Immutable memtables: 0.
Oct  1 09:15:01 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:15:01.906866) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  1 09:15:01 np0005464214 ceph-mon[74802]: rocksdb: [db/flush_job.cc:856] [default] [JOB 3] Flushing memtable with next log file: 18
Oct  1 09:15:01 np0005464214 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759324501906963, "job": 3, "event": "flush_started", "num_memtables": 1, "num_entries": 7304, "num_deletes": 251, "total_data_size": 9345268, "memory_usage": 9589216, "flush_reason": "Manual Compaction"}
Oct  1 09:15:01 np0005464214 ceph-mon[74802]: rocksdb: [db/flush_job.cc:885] [default] [JOB 3] Level-0 flush table #19: started
Oct  1 09:15:01 np0005464214 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759324501962074, "cf_name": "default", "job": 3, "event": "table_file_creation", "file_number": 19, "file_size": 7555255, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 141, "largest_seqno": 7442, "table_properties": {"data_size": 7528116, "index_size": 17808, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 8261, "raw_key_size": 76671, "raw_average_key_size": 23, "raw_value_size": 7464414, "raw_average_value_size": 2265, "num_data_blocks": 780, "num_entries": 3295, "num_filter_entries": 3295, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759324080, "oldest_key_time": 1759324080, "file_creation_time": 1759324501, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e1fb0bd7-c6bf-40f2-8bfb-ffd039289f42", "db_session_id": "NJZTWL88H5HSB4Q4NEC9", "orig_file_number": 19, "seqno_to_time_mapping": "N/A"}}
Oct  1 09:15:01 np0005464214 ceph-mon[74802]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 3] Flush lasted 55264 microseconds, and 21248 cpu microseconds.
Oct  1 09:15:01 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:15:01.962134) [db/flush_job.cc:967] [default] [JOB 3] Level-0 flush table #19: 7555255 bytes OK
Oct  1 09:15:01 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:15:01.962157) [db/memtable_list.cc:519] [default] Level-0 commit table #19 started
Oct  1 09:15:01 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:15:01.963878) [db/memtable_list.cc:722] [default] Level-0 commit table #19: memtable #1 done
Oct  1 09:15:01 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:15:01.963890) EVENT_LOG_v1 {"time_micros": 1759324501963886, "job": 3, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [3, 0, 0, 0, 0, 0, 0], "immutable_memtables": 0}
Oct  1 09:15:01 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:15:01.963916) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: files[3 0 0 0 0 0 0] max score 0.75
Oct  1 09:15:01 np0005464214 ceph-mon[74802]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 3] Try to delete WAL files size 9313389, prev total WAL file size 9313389, number of live WAL files 2.
Oct  1 09:15:01 np0005464214 ceph-mon[74802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000014.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  1 09:15:01 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:15:01.965553) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730030' seq:72057594037927935, type:22 .. '7061786F7300323532' seq:0, type:0; will stop at (end)
Oct  1 09:15:01 np0005464214 ceph-mon[74802]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 4] Compacting 3@0 files to L6, score -1.00
Oct  1 09:15:01 np0005464214 ceph-mon[74802]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 3 Base level 0, inputs: [19(7378KB) 13(53KB) 8(1944B)]
Oct  1 09:15:01 np0005464214 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759324501965617, "job": 4, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [19, 13, 8], "score": -1, "input_data_size": 7612448, "oldest_snapshot_seqno": -1}
Oct  1 09:15:02 np0005464214 ceph-mon[74802]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 4] Generated table #20: 3111 keys, 7567859 bytes, temperature: kUnknown
Oct  1 09:15:02 np0005464214 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759324502028235, "cf_name": "default", "job": 4, "event": "table_file_creation", "file_number": 20, "file_size": 7567859, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7541091, "index_size": 17890, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 7813, "raw_key_size": 74743, "raw_average_key_size": 24, "raw_value_size": 7478958, "raw_average_value_size": 2404, "num_data_blocks": 784, "num_entries": 3111, "num_filter_entries": 3111, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759324077, "oldest_key_time": 0, "file_creation_time": 1759324501, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e1fb0bd7-c6bf-40f2-8bfb-ffd039289f42", "db_session_id": "NJZTWL88H5HSB4Q4NEC9", "orig_file_number": 20, "seqno_to_time_mapping": "N/A"}}
Oct  1 09:15:02 np0005464214 ceph-mon[74802]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  1 09:15:02 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:15:02.028543) [db/compaction/compaction_job.cc:1663] [default] [JOB 4] Compacted 3@0 files to L6 => 7567859 bytes
Oct  1 09:15:02 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:15:02.030523) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 121.2 rd, 120.5 wr, level 6, files in(3, 0) out(1 +0 blob) MB in(7.3, 0.0 +0.0 blob) out(7.2 +0.0 blob), read-write-amplify(2.0) write-amplify(1.0) OK, records in: 3401, records dropped: 290 output_compression: NoCompression
Oct  1 09:15:02 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:15:02.030539) EVENT_LOG_v1 {"time_micros": 1759324502030531, "job": 4, "event": "compaction_finished", "compaction_time_micros": 62797, "compaction_time_cpu_micros": 15297, "output_level": 6, "num_output_files": 1, "total_output_size": 7567859, "num_input_records": 3401, "num_output_records": 3111, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  1 09:15:02 np0005464214 ceph-mon[74802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000019.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  1 09:15:02 np0005464214 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759324502032011, "job": 4, "event": "table_file_deletion", "file_number": 19}
Oct  1 09:15:02 np0005464214 ceph-mon[74802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000013.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  1 09:15:02 np0005464214 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759324502032187, "job": 4, "event": "table_file_deletion", "file_number": 13}
Oct  1 09:15:02 np0005464214 ceph-mon[74802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000008.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  1 09:15:02 np0005464214 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759324502032352, "job": 4, "event": "table_file_deletion", "file_number": 8}
Oct  1 09:15:02 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:15:01.965491) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 09:15:02 np0005464214 python3.9[119440]: ansible-service_facts Invoked
Oct  1 09:15:02 np0005464214 ceph-osd[89484]: log_channel(cluster) log [DBG] : 11.c deep-scrub starts
Oct  1 09:15:02 np0005464214 ceph-osd[89484]: log_channel(cluster) log [DBG] : 11.c deep-scrub ok
Oct  1 09:15:02 np0005464214 network[119457]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct  1 09:15:02 np0005464214 network[119458]: 'network-scripts' will be removed from distribution in near future.
Oct  1 09:15:02 np0005464214 network[119459]: It is advised to switch to 'NetworkManager' instead for network management.
Oct  1 09:15:02 np0005464214 ceph-osd[90500]: log_channel(cluster) log [DBG] : 8.1b scrub starts
Oct  1 09:15:02 np0005464214 ceph-osd[90500]: log_channel(cluster) log [DBG] : 8.1b scrub ok
Oct  1 09:15:03 np0005464214 ceph-osd[90500]: log_channel(cluster) log [DBG] : 11.1a deep-scrub starts
Oct  1 09:15:03 np0005464214 ceph-osd[90500]: log_channel(cluster) log [DBG] : 11.1a deep-scrub ok
Oct  1 09:15:03 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v313: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:15:05 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:15:05 np0005464214 ceph-osd[90500]: log_channel(cluster) log [DBG] : 11.1f scrub starts
Oct  1 09:15:05 np0005464214 ceph-osd[90500]: log_channel(cluster) log [DBG] : 11.1f scrub ok
Oct  1 09:15:05 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v314: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:15:06 np0005464214 ceph-osd[90500]: log_channel(cluster) log [DBG] : 11.12 deep-scrub starts
Oct  1 09:15:06 np0005464214 ceph-osd[90500]: log_channel(cluster) log [DBG] : 11.12 deep-scrub ok
Oct  1 09:15:07 np0005464214 python3.9[119914]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct  1 09:15:07 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v315: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:15:08 np0005464214 ceph-osd[89484]: log_channel(cluster) log [DBG] : 11.13 scrub starts
Oct  1 09:15:08 np0005464214 ceph-osd[89484]: log_channel(cluster) log [DBG] : 11.13 scrub ok
Oct  1 09:15:09 np0005464214 ceph-osd[88455]: log_channel(cluster) log [DBG] : 9.1 scrub starts
Oct  1 09:15:09 np0005464214 ceph-osd[88455]: log_channel(cluster) log [DBG] : 9.1 scrub ok
Oct  1 09:15:09 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v316: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:15:10 np0005464214 ceph-osd[89484]: log_channel(cluster) log [DBG] : 11.16 scrub starts
Oct  1 09:15:10 np0005464214 ceph-osd[89484]: log_channel(cluster) log [DBG] : 11.16 scrub ok
Oct  1 09:15:10 np0005464214 python3.9[120069]: ansible-package_facts Invoked with manager=['auto'] strategy=first
Oct  1 09:15:10 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:15:11 np0005464214 ceph-osd[89484]: log_channel(cluster) log [DBG] : 11.1d scrub starts
Oct  1 09:15:11 np0005464214 ceph-osd[89484]: log_channel(cluster) log [DBG] : 11.1d scrub ok
Oct  1 09:15:11 np0005464214 python3.9[120221]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 09:15:11 np0005464214 ceph-osd[90500]: log_channel(cluster) log [DBG] : 8.1c scrub starts
Oct  1 09:15:11 np0005464214 ceph-osd[90500]: log_channel(cluster) log [DBG] : 8.1c scrub ok
Oct  1 09:15:11 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v317: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:15:12 np0005464214 python3.9[120299]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/chrony.conf _original_basename=chrony.conf.j2 recurse=False state=file path=/etc/chrony.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 09:15:12 np0005464214 ceph-osd[89484]: log_channel(cluster) log [DBG] : 10.b scrub starts
Oct  1 09:15:12 np0005464214 ceph-osd[89484]: log_channel(cluster) log [DBG] : 10.b scrub ok
Oct  1 09:15:12 np0005464214 python3.9[120451]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 09:15:13 np0005464214 python3.9[120529]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/sysconfig/chronyd _original_basename=chronyd.sysconfig.j2 recurse=False state=file path=/etc/sysconfig/chronyd force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 09:15:13 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v318: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:15:14 np0005464214 python3.9[120681]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 09:15:15 np0005464214 ceph-osd[89484]: log_channel(cluster) log [DBG] : 10.19 scrub starts
Oct  1 09:15:15 np0005464214 ceph-osd[89484]: log_channel(cluster) log [DBG] : 10.19 scrub ok
Oct  1 09:15:15 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:15:15 np0005464214 python3.9[120833]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct  1 09:15:15 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v319: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:15:16 np0005464214 ceph-osd[89484]: log_channel(cluster) log [DBG] : 10.11 scrub starts
Oct  1 09:15:16 np0005464214 ceph-osd[89484]: log_channel(cluster) log [DBG] : 10.11 scrub ok
Oct  1 09:15:16 np0005464214 ceph-osd[88455]: log_channel(cluster) log [DBG] : 9.16 scrub starts
Oct  1 09:15:16 np0005464214 ceph-osd[88455]: log_channel(cluster) log [DBG] : 9.16 scrub ok
Oct  1 09:15:16 np0005464214 python3.9[120917]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  1 09:15:17 np0005464214 systemd-logind[818]: Session 38 logged out. Waiting for processes to exit.
Oct  1 09:15:17 np0005464214 systemd[1]: session-38.scope: Deactivated successfully.
Oct  1 09:15:17 np0005464214 systemd[1]: session-38.scope: Consumed 24.805s CPU time.
Oct  1 09:15:17 np0005464214 systemd-logind[818]: Removed session 38.
Oct  1 09:15:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 09:15:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 09:15:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 09:15:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 09:15:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 09:15:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 09:15:17 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v320: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:15:19 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v321: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:15:19 np0005464214 ceph-osd[90500]: log_channel(cluster) log [DBG] : 11.11 deep-scrub starts
Oct  1 09:15:19 np0005464214 ceph-osd[90500]: log_channel(cluster) log [DBG] : 11.11 deep-scrub ok
Oct  1 09:15:20 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:15:21 np0005464214 ceph-osd[89484]: log_channel(cluster) log [DBG] : 10.10 scrub starts
Oct  1 09:15:21 np0005464214 ceph-osd[89484]: log_channel(cluster) log [DBG] : 10.10 scrub ok
Oct  1 09:15:21 np0005464214 ceph-osd[88455]: log_channel(cluster) log [DBG] : 9.1c scrub starts
Oct  1 09:15:21 np0005464214 ceph-osd[88455]: log_channel(cluster) log [DBG] : 9.1c scrub ok
Oct  1 09:15:21 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v322: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:15:21 np0005464214 ceph-osd[90500]: log_channel(cluster) log [DBG] : 8.11 scrub starts
Oct  1 09:15:21 np0005464214 ceph-osd[90500]: log_channel(cluster) log [DBG] : 8.11 scrub ok
Oct  1 09:15:22 np0005464214 ceph-osd[89484]: log_channel(cluster) log [DBG] : 10.6 scrub starts
Oct  1 09:15:22 np0005464214 ceph-osd[89484]: log_channel(cluster) log [DBG] : 10.6 scrub ok
Oct  1 09:15:22 np0005464214 ceph-osd[88455]: log_channel(cluster) log [DBG] : 9.1e deep-scrub starts
Oct  1 09:15:22 np0005464214 ceph-osd[88455]: log_channel(cluster) log [DBG] : 9.1e deep-scrub ok
Oct  1 09:15:22 np0005464214 ceph-osd[90500]: log_channel(cluster) log [DBG] : 11.b scrub starts
Oct  1 09:15:22 np0005464214 ceph-osd[90500]: log_channel(cluster) log [DBG] : 11.b scrub ok
Oct  1 09:15:23 np0005464214 systemd-logind[818]: New session 39 of user zuul.
Oct  1 09:15:23 np0005464214 systemd[1]: Started Session 39 of User zuul.
Oct  1 09:15:23 np0005464214 ceph-osd[89484]: log_channel(cluster) log [DBG] : 10.13 scrub starts
Oct  1 09:15:23 np0005464214 ceph-osd[89484]: log_channel(cluster) log [DBG] : 10.13 scrub ok
Oct  1 09:15:23 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v323: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:15:24 np0005464214 python3.9[121101]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 09:15:25 np0005464214 python3.9[121253]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/ceph-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 09:15:25 np0005464214 ceph-osd[89484]: log_channel(cluster) log [DBG] : 10.2 scrub starts
Oct  1 09:15:25 np0005464214 ceph-osd[89484]: log_channel(cluster) log [DBG] : 10.2 scrub ok
Oct  1 09:15:25 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:15:25 np0005464214 python3.9[121331]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/ceph-networks.yaml _original_basename=firewall.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/ceph-networks.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 09:15:25 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v324: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:15:25 np0005464214 systemd[1]: session-39.scope: Deactivated successfully.
Oct  1 09:15:25 np0005464214 systemd[1]: session-39.scope: Consumed 1.939s CPU time.
Oct  1 09:15:25 np0005464214 systemd-logind[818]: Session 39 logged out. Waiting for processes to exit.
Oct  1 09:15:25 np0005464214 systemd-logind[818]: Removed session 39.
Oct  1 09:15:25 np0005464214 ceph-osd[90500]: log_channel(cluster) log [DBG] : 11.1e deep-scrub starts
Oct  1 09:15:25 np0005464214 ceph-osd[90500]: log_channel(cluster) log [DBG] : 11.1e deep-scrub ok
Oct  1 09:15:26 np0005464214 ceph-osd[89484]: log_channel(cluster) log [DBG] : 10.f scrub starts
Oct  1 09:15:26 np0005464214 ceph-osd[89484]: log_channel(cluster) log [DBG] : 10.f scrub ok
Oct  1 09:15:26 np0005464214 ceph-osd[90500]: log_channel(cluster) log [DBG] : 11.18 scrub starts
Oct  1 09:15:27 np0005464214 ceph-osd[90500]: log_channel(cluster) log [DBG] : 11.18 scrub ok
Oct  1 09:15:27 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v325: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:15:29 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v326: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:15:30 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:15:30 np0005464214 systemd-logind[818]: New session 40 of user zuul.
Oct  1 09:15:30 np0005464214 systemd[1]: Started Session 40 of User zuul.
Oct  1 09:15:30 np0005464214 ceph-osd[90500]: log_channel(cluster) log [DBG] : 9.7 scrub starts
Oct  1 09:15:30 np0005464214 ceph-osd[90500]: log_channel(cluster) log [DBG] : 9.7 scrub ok
Oct  1 09:15:31 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v327: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:15:31 np0005464214 python3.9[121510]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  1 09:15:32 np0005464214 ceph-osd[90500]: log_channel(cluster) log [DBG] : 9.f scrub starts
Oct  1 09:15:32 np0005464214 ceph-osd[90500]: log_channel(cluster) log [DBG] : 9.f scrub ok
Oct  1 09:15:33 np0005464214 python3.9[121666]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 09:15:33 np0005464214 ceph-osd[89484]: log_channel(cluster) log [DBG] : 10.1a scrub starts
Oct  1 09:15:33 np0005464214 ceph-osd[89484]: log_channel(cluster) log [DBG] : 10.1a scrub ok
Oct  1 09:15:33 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v328: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:15:34 np0005464214 python3.9[121841]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 09:15:34 np0005464214 python3.9[121919]: ansible-ansible.legacy.file Invoked with group=zuul mode=0660 owner=zuul dest=/root/.config/containers/auth.json _original_basename=.vaqch02l recurse=False state=file path=/root/.config/containers/auth.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 09:15:35 np0005464214 python3.9[122071]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 09:15:35 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:15:35 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v329: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:15:35 np0005464214 python3.9[122149]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/sysconfig/podman_drop_in _original_basename=._twqt4dg recurse=False state=file path=/etc/sysconfig/podman_drop_in force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 09:15:36 np0005464214 python3.9[122301]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  1 09:15:37 np0005464214 python3.9[122453]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 09:15:37 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v330: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:15:37 np0005464214 python3.9[122531]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  1 09:15:38 np0005464214 ceph-osd[89484]: log_channel(cluster) log [DBG] : 10.12 scrub starts
Oct  1 09:15:38 np0005464214 ceph-osd[89484]: log_channel(cluster) log [DBG] : 10.12 scrub ok
Oct  1 09:15:38 np0005464214 python3.9[122683]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 09:15:39 np0005464214 python3.9[122761]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  1 09:15:39 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  1 09:15:39 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  1 09:15:39 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  1 09:15:39 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  1 09:15:39 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  1 09:15:39 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:15:39 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev ebec4e34-b19e-4173-b704-3ed08fb45147 does not exist
Oct  1 09:15:39 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev 399d5a93-519c-4075-88f7-a0b430c9659d does not exist
Oct  1 09:15:39 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev 87ede350-750d-415b-9980-f9156065a44d does not exist
Oct  1 09:15:39 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  1 09:15:39 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  1 09:15:39 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  1 09:15:39 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  1 09:15:39 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  1 09:15:39 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  1 09:15:39 np0005464214 python3.9[123029]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 09:15:39 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v331: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:15:40 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  1 09:15:40 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:15:40 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  1 09:15:40 np0005464214 podman[123336]: 2025-10-01 13:15:40.354460712 +0000 UTC m=+0.055115617 container create e9aca52abb30712ffc7f99eae20a3db77c3694d5dfa2b815e1bd4888871dcc4c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_shamir, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef)
Oct  1 09:15:40 np0005464214 systemd[1]: Started libpod-conmon-e9aca52abb30712ffc7f99eae20a3db77c3694d5dfa2b815e1bd4888871dcc4c.scope.
Oct  1 09:15:40 np0005464214 podman[123336]: 2025-10-01 13:15:40.323833749 +0000 UTC m=+0.024488704 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 09:15:40 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:15:40 np0005464214 python3.9[123334]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 09:15:40 np0005464214 podman[123336]: 2025-10-01 13:15:40.468881801 +0000 UTC m=+0.169536756 container init e9aca52abb30712ffc7f99eae20a3db77c3694d5dfa2b815e1bd4888871dcc4c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_shamir, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 09:15:40 np0005464214 podman[123336]: 2025-10-01 13:15:40.477676929 +0000 UTC m=+0.178331834 container start e9aca52abb30712ffc7f99eae20a3db77c3694d5dfa2b815e1bd4888871dcc4c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_shamir, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 09:15:40 np0005464214 podman[123336]: 2025-10-01 13:15:40.483085736 +0000 UTC m=+0.183740641 container attach e9aca52abb30712ffc7f99eae20a3db77c3694d5dfa2b815e1bd4888871dcc4c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_shamir, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 09:15:40 np0005464214 zealous_shamir[123353]: 167 167
Oct  1 09:15:40 np0005464214 systemd[1]: libpod-e9aca52abb30712ffc7f99eae20a3db77c3694d5dfa2b815e1bd4888871dcc4c.scope: Deactivated successfully.
Oct  1 09:15:40 np0005464214 podman[123336]: 2025-10-01 13:15:40.48994126 +0000 UTC m=+0.190596165 container died e9aca52abb30712ffc7f99eae20a3db77c3694d5dfa2b815e1bd4888871dcc4c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_shamir, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 09:15:40 np0005464214 systemd[1]: var-lib-containers-storage-overlay-ceee5b449e57be8cc8352b13bf0382d2ac8fa6e2436b35f6ecf21edbd82fedf7-merged.mount: Deactivated successfully.
Oct  1 09:15:40 np0005464214 podman[123336]: 2025-10-01 13:15:40.570783089 +0000 UTC m=+0.271437964 container remove e9aca52abb30712ffc7f99eae20a3db77c3694d5dfa2b815e1bd4888871dcc4c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_shamir, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 09:15:40 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:15:40 np0005464214 systemd[1]: libpod-conmon-e9aca52abb30712ffc7f99eae20a3db77c3694d5dfa2b815e1bd4888871dcc4c.scope: Deactivated successfully.
Oct  1 09:15:40 np0005464214 podman[123426]: 2025-10-01 13:15:40.764821066 +0000 UTC m=+0.054342822 container create e8489f0d4575e644dfd7b6c1828233dd27c2139b9f55d9b8767d06fee5d7de8c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_williams, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct  1 09:15:40 np0005464214 systemd[1]: Started libpod-conmon-e8489f0d4575e644dfd7b6c1828233dd27c2139b9f55d9b8767d06fee5d7de8c.scope.
Oct  1 09:15:40 np0005464214 podman[123426]: 2025-10-01 13:15:40.739034361 +0000 UTC m=+0.028556197 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 09:15:40 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:15:40 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d3fb7dc4d950300637365688c4bc37543de7afbd7e4693aa9143b7930986e79/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 09:15:40 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d3fb7dc4d950300637365688c4bc37543de7afbd7e4693aa9143b7930986e79/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 09:15:40 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d3fb7dc4d950300637365688c4bc37543de7afbd7e4693aa9143b7930986e79/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 09:15:40 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d3fb7dc4d950300637365688c4bc37543de7afbd7e4693aa9143b7930986e79/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 09:15:40 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d3fb7dc4d950300637365688c4bc37543de7afbd7e4693aa9143b7930986e79/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  1 09:15:40 np0005464214 podman[123426]: 2025-10-01 13:15:40.85650165 +0000 UTC m=+0.146023446 container init e8489f0d4575e644dfd7b6c1828233dd27c2139b9f55d9b8767d06fee5d7de8c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_williams, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 09:15:40 np0005464214 podman[123426]: 2025-10-01 13:15:40.870179098 +0000 UTC m=+0.159700894 container start e8489f0d4575e644dfd7b6c1828233dd27c2139b9f55d9b8767d06fee5d7de8c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_williams, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Oct  1 09:15:40 np0005464214 podman[123426]: 2025-10-01 13:15:40.874230711 +0000 UTC m=+0.163752497 container attach e8489f0d4575e644dfd7b6c1828233dd27c2139b9f55d9b8767d06fee5d7de8c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_williams, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 09:15:40 np0005464214 ceph-osd[90500]: log_channel(cluster) log [DBG] : 9.17 scrub starts
Oct  1 09:15:40 np0005464214 ceph-osd[90500]: log_channel(cluster) log [DBG] : 9.17 scrub ok
Oct  1 09:15:41 np0005464214 python3.9[123466]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 09:15:41 np0005464214 ceph-osd[89484]: log_channel(cluster) log [DBG] : 10.14 scrub starts
Oct  1 09:15:41 np0005464214 ceph-osd[89484]: log_channel(cluster) log [DBG] : 10.14 scrub ok
Oct  1 09:15:41 np0005464214 python3.9[123633]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 09:15:41 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v332: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:15:41 np0005464214 lucid_williams[123469]: --> passed data devices: 0 physical, 3 LVM
Oct  1 09:15:41 np0005464214 lucid_williams[123469]: --> relative data size: 1.0
Oct  1 09:15:41 np0005464214 lucid_williams[123469]: --> All data devices are unavailable
Oct  1 09:15:42 np0005464214 systemd[1]: libpod-e8489f0d4575e644dfd7b6c1828233dd27c2139b9f55d9b8767d06fee5d7de8c.scope: Deactivated successfully.
Oct  1 09:15:42 np0005464214 systemd[1]: libpod-e8489f0d4575e644dfd7b6c1828233dd27c2139b9f55d9b8767d06fee5d7de8c.scope: Consumed 1.064s CPU time.
Oct  1 09:15:42 np0005464214 podman[123426]: 2025-10-01 13:15:42.005080771 +0000 UTC m=+1.294602517 container died e8489f0d4575e644dfd7b6c1828233dd27c2139b9f55d9b8767d06fee5d7de8c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_williams, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 09:15:42 np0005464214 systemd[1]: var-lib-containers-storage-overlay-6d3fb7dc4d950300637365688c4bc37543de7afbd7e4693aa9143b7930986e79-merged.mount: Deactivated successfully.
Oct  1 09:15:42 np0005464214 podman[123426]: 2025-10-01 13:15:42.067325831 +0000 UTC m=+1.356847587 container remove e8489f0d4575e644dfd7b6c1828233dd27c2139b9f55d9b8767d06fee5d7de8c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_williams, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 09:15:42 np0005464214 systemd[1]: libpod-conmon-e8489f0d4575e644dfd7b6c1828233dd27c2139b9f55d9b8767d06fee5d7de8c.scope: Deactivated successfully.
Oct  1 09:15:42 np0005464214 python3.9[123759]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 09:15:42 np0005464214 podman[123959]: 2025-10-01 13:15:42.811680588 +0000 UTC m=+0.051200409 container create 98f222d6fe08b0d953d27d8dcf40da43f25fbb95eebc3744f30804459b6f308f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_hodgkin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Oct  1 09:15:42 np0005464214 systemd[1]: Started libpod-conmon-98f222d6fe08b0d953d27d8dcf40da43f25fbb95eebc3744f30804459b6f308f.scope.
Oct  1 09:15:42 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:15:42 np0005464214 podman[123959]: 2025-10-01 13:15:42.785790059 +0000 UTC m=+0.025309930 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 09:15:42 np0005464214 podman[123959]: 2025-10-01 13:15:42.888868026 +0000 UTC m=+0.128387837 container init 98f222d6fe08b0d953d27d8dcf40da43f25fbb95eebc3744f30804459b6f308f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_hodgkin, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct  1 09:15:42 np0005464214 podman[123959]: 2025-10-01 13:15:42.896201347 +0000 UTC m=+0.135721168 container start 98f222d6fe08b0d953d27d8dcf40da43f25fbb95eebc3744f30804459b6f308f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_hodgkin, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 09:15:42 np0005464214 podman[123959]: 2025-10-01 13:15:42.899697781 +0000 UTC m=+0.139217592 container attach 98f222d6fe08b0d953d27d8dcf40da43f25fbb95eebc3744f30804459b6f308f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_hodgkin, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 09:15:42 np0005464214 systemd[1]: libpod-98f222d6fe08b0d953d27d8dcf40da43f25fbb95eebc3744f30804459b6f308f.scope: Deactivated successfully.
Oct  1 09:15:42 np0005464214 elegant_hodgkin[123975]: 167 167
Oct  1 09:15:42 np0005464214 conmon[123975]: conmon 98f222d6fe08b0d953d2 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-98f222d6fe08b0d953d27d8dcf40da43f25fbb95eebc3744f30804459b6f308f.scope/container/memory.events
Oct  1 09:15:42 np0005464214 podman[123959]: 2025-10-01 13:15:42.905074308 +0000 UTC m=+0.144594129 container died 98f222d6fe08b0d953d27d8dcf40da43f25fbb95eebc3744f30804459b6f308f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_hodgkin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 09:15:42 np0005464214 systemd[1]: var-lib-containers-storage-overlay-1597fefdad16fc197eed0ee43d6eeb74e93b478a331a0388f402c3d8b4fb8ece-merged.mount: Deactivated successfully.
Oct  1 09:15:42 np0005464214 podman[123959]: 2025-10-01 13:15:42.945935126 +0000 UTC m=+0.185454917 container remove 98f222d6fe08b0d953d27d8dcf40da43f25fbb95eebc3744f30804459b6f308f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_hodgkin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 09:15:42 np0005464214 systemd[1]: libpod-conmon-98f222d6fe08b0d953d27d8dcf40da43f25fbb95eebc3744f30804459b6f308f.scope: Deactivated successfully.
Oct  1 09:15:43 np0005464214 ceph-osd[90500]: log_channel(cluster) log [DBG] : 9.8 scrub starts
Oct  1 09:15:43 np0005464214 ceph-osd[90500]: log_channel(cluster) log [DBG] : 9.8 scrub ok
Oct  1 09:15:43 np0005464214 podman[124000]: 2025-10-01 13:15:43.103021243 +0000 UTC m=+0.041446949 container create 27666e7caa6e5f70ae7bdcdd7ff586a157271918bd2686c90ef9aaf5624958dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_babbage, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 09:15:43 np0005464214 systemd[1]: Started libpod-conmon-27666e7caa6e5f70ae7bdcdd7ff586a157271918bd2686c90ef9aaf5624958dd.scope.
Oct  1 09:15:43 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:15:43 np0005464214 podman[124000]: 2025-10-01 13:15:43.084753284 +0000 UTC m=+0.023179000 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 09:15:43 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/65fb14c67e51134263e03485d8b750317d8612151057a3c82303f8774a8e41b3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 09:15:43 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/65fb14c67e51134263e03485d8b750317d8612151057a3c82303f8774a8e41b3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 09:15:43 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/65fb14c67e51134263e03485d8b750317d8612151057a3c82303f8774a8e41b3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 09:15:43 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/65fb14c67e51134263e03485d8b750317d8612151057a3c82303f8774a8e41b3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 09:15:43 np0005464214 podman[124000]: 2025-10-01 13:15:43.21222189 +0000 UTC m=+0.150647646 container init 27666e7caa6e5f70ae7bdcdd7ff586a157271918bd2686c90ef9aaf5624958dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_babbage, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 09:15:43 np0005464214 podman[124000]: 2025-10-01 13:15:43.22838809 +0000 UTC m=+0.166813796 container start 27666e7caa6e5f70ae7bdcdd7ff586a157271918bd2686c90ef9aaf5624958dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_babbage, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Oct  1 09:15:43 np0005464214 podman[124000]: 2025-10-01 13:15:43.233130385 +0000 UTC m=+0.171556131 container attach 27666e7caa6e5f70ae7bdcdd7ff586a157271918bd2686c90ef9aaf5624958dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_babbage, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 09:15:43 np0005464214 python3.9[124097]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  1 09:15:43 np0005464214 systemd[1]: Reloading.
Oct  1 09:15:43 np0005464214 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  1 09:15:43 np0005464214 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  1 09:15:43 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v333: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:15:44 np0005464214 sleepy_babbage[124058]: {
Oct  1 09:15:44 np0005464214 sleepy_babbage[124058]:    "0": [
Oct  1 09:15:44 np0005464214 sleepy_babbage[124058]:        {
Oct  1 09:15:44 np0005464214 sleepy_babbage[124058]:            "devices": [
Oct  1 09:15:44 np0005464214 sleepy_babbage[124058]:                "/dev/loop3"
Oct  1 09:15:44 np0005464214 sleepy_babbage[124058]:            ],
Oct  1 09:15:44 np0005464214 sleepy_babbage[124058]:            "lv_name": "ceph_lv0",
Oct  1 09:15:44 np0005464214 sleepy_babbage[124058]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  1 09:15:44 np0005464214 sleepy_babbage[124058]:            "lv_size": "21470642176",
Oct  1 09:15:44 np0005464214 sleepy_babbage[124058]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 09:15:44 np0005464214 sleepy_babbage[124058]:            "lv_uuid": "wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS",
Oct  1 09:15:44 np0005464214 sleepy_babbage[124058]:            "name": "ceph_lv0",
Oct  1 09:15:44 np0005464214 sleepy_babbage[124058]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  1 09:15:44 np0005464214 sleepy_babbage[124058]:            "tags": {
Oct  1 09:15:44 np0005464214 sleepy_babbage[124058]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  1 09:15:44 np0005464214 sleepy_babbage[124058]:                "ceph.block_uuid": "wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS",
Oct  1 09:15:44 np0005464214 sleepy_babbage[124058]:                "ceph.cephx_lockbox_secret": "",
Oct  1 09:15:44 np0005464214 sleepy_babbage[124058]:                "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 09:15:44 np0005464214 sleepy_babbage[124058]:                "ceph.cluster_name": "ceph",
Oct  1 09:15:44 np0005464214 sleepy_babbage[124058]:                "ceph.crush_device_class": "",
Oct  1 09:15:44 np0005464214 sleepy_babbage[124058]:                "ceph.encrypted": "0",
Oct  1 09:15:44 np0005464214 sleepy_babbage[124058]:                "ceph.osd_fsid": "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982",
Oct  1 09:15:44 np0005464214 sleepy_babbage[124058]:                "ceph.osd_id": "0",
Oct  1 09:15:44 np0005464214 sleepy_babbage[124058]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 09:15:44 np0005464214 sleepy_babbage[124058]:                "ceph.type": "block",
Oct  1 09:15:44 np0005464214 sleepy_babbage[124058]:                "ceph.vdo": "0"
Oct  1 09:15:44 np0005464214 sleepy_babbage[124058]:            },
Oct  1 09:15:44 np0005464214 sleepy_babbage[124058]:            "type": "block",
Oct  1 09:15:44 np0005464214 sleepy_babbage[124058]:            "vg_name": "ceph_vg0"
Oct  1 09:15:44 np0005464214 sleepy_babbage[124058]:        }
Oct  1 09:15:44 np0005464214 sleepy_babbage[124058]:    ],
Oct  1 09:15:44 np0005464214 sleepy_babbage[124058]:    "1": [
Oct  1 09:15:44 np0005464214 sleepy_babbage[124058]:        {
Oct  1 09:15:44 np0005464214 sleepy_babbage[124058]:            "devices": [
Oct  1 09:15:44 np0005464214 sleepy_babbage[124058]:                "/dev/loop4"
Oct  1 09:15:44 np0005464214 sleepy_babbage[124058]:            ],
Oct  1 09:15:44 np0005464214 sleepy_babbage[124058]:            "lv_name": "ceph_lv1",
Oct  1 09:15:44 np0005464214 sleepy_babbage[124058]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  1 09:15:44 np0005464214 sleepy_babbage[124058]:            "lv_size": "21470642176",
Oct  1 09:15:44 np0005464214 sleepy_babbage[124058]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f5852bc7-e830-489a-b8a9-42dfbbe71426,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 09:15:44 np0005464214 sleepy_babbage[124058]:            "lv_uuid": "BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY",
Oct  1 09:15:44 np0005464214 sleepy_babbage[124058]:            "name": "ceph_lv1",
Oct  1 09:15:44 np0005464214 sleepy_babbage[124058]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  1 09:15:44 np0005464214 sleepy_babbage[124058]:            "tags": {
Oct  1 09:15:44 np0005464214 sleepy_babbage[124058]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  1 09:15:44 np0005464214 sleepy_babbage[124058]:                "ceph.block_uuid": "BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY",
Oct  1 09:15:44 np0005464214 sleepy_babbage[124058]:                "ceph.cephx_lockbox_secret": "",
Oct  1 09:15:44 np0005464214 sleepy_babbage[124058]:                "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 09:15:44 np0005464214 sleepy_babbage[124058]:                "ceph.cluster_name": "ceph",
Oct  1 09:15:44 np0005464214 sleepy_babbage[124058]:                "ceph.crush_device_class": "",
Oct  1 09:15:44 np0005464214 sleepy_babbage[124058]:                "ceph.encrypted": "0",
Oct  1 09:15:44 np0005464214 sleepy_babbage[124058]:                "ceph.osd_fsid": "f5852bc7-e830-489a-b8a9-42dfbbe71426",
Oct  1 09:15:44 np0005464214 sleepy_babbage[124058]:                "ceph.osd_id": "1",
Oct  1 09:15:44 np0005464214 sleepy_babbage[124058]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 09:15:44 np0005464214 sleepy_babbage[124058]:                "ceph.type": "block",
Oct  1 09:15:44 np0005464214 sleepy_babbage[124058]:                "ceph.vdo": "0"
Oct  1 09:15:44 np0005464214 sleepy_babbage[124058]:            },
Oct  1 09:15:44 np0005464214 sleepy_babbage[124058]:            "type": "block",
Oct  1 09:15:44 np0005464214 sleepy_babbage[124058]:            "vg_name": "ceph_vg1"
Oct  1 09:15:44 np0005464214 sleepy_babbage[124058]:        }
Oct  1 09:15:44 np0005464214 sleepy_babbage[124058]:    ],
Oct  1 09:15:44 np0005464214 sleepy_babbage[124058]:    "2": [
Oct  1 09:15:44 np0005464214 sleepy_babbage[124058]:        {
Oct  1 09:15:44 np0005464214 sleepy_babbage[124058]:            "devices": [
Oct  1 09:15:44 np0005464214 sleepy_babbage[124058]:                "/dev/loop5"
Oct  1 09:15:44 np0005464214 sleepy_babbage[124058]:            ],
Oct  1 09:15:44 np0005464214 sleepy_babbage[124058]:            "lv_name": "ceph_lv2",
Oct  1 09:15:44 np0005464214 sleepy_babbage[124058]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  1 09:15:44 np0005464214 sleepy_babbage[124058]:            "lv_size": "21470642176",
Oct  1 09:15:44 np0005464214 sleepy_babbage[124058]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c4c937e2-a8a8-47c3-af37-fdedb6fff1f9,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 09:15:44 np0005464214 sleepy_babbage[124058]:            "lv_uuid": "mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK",
Oct  1 09:15:44 np0005464214 sleepy_babbage[124058]:            "name": "ceph_lv2",
Oct  1 09:15:44 np0005464214 sleepy_babbage[124058]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  1 09:15:44 np0005464214 sleepy_babbage[124058]:            "tags": {
Oct  1 09:15:44 np0005464214 sleepy_babbage[124058]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  1 09:15:44 np0005464214 sleepy_babbage[124058]:                "ceph.block_uuid": "mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK",
Oct  1 09:15:44 np0005464214 sleepy_babbage[124058]:                "ceph.cephx_lockbox_secret": "",
Oct  1 09:15:44 np0005464214 sleepy_babbage[124058]:                "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 09:15:44 np0005464214 sleepy_babbage[124058]:                "ceph.cluster_name": "ceph",
Oct  1 09:15:44 np0005464214 sleepy_babbage[124058]:                "ceph.crush_device_class": "",
Oct  1 09:15:44 np0005464214 sleepy_babbage[124058]:                "ceph.encrypted": "0",
Oct  1 09:15:44 np0005464214 sleepy_babbage[124058]:                "ceph.osd_fsid": "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9",
Oct  1 09:15:44 np0005464214 sleepy_babbage[124058]:                "ceph.osd_id": "2",
Oct  1 09:15:44 np0005464214 sleepy_babbage[124058]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 09:15:44 np0005464214 sleepy_babbage[124058]:                "ceph.type": "block",
Oct  1 09:15:44 np0005464214 sleepy_babbage[124058]:                "ceph.vdo": "0"
Oct  1 09:15:44 np0005464214 sleepy_babbage[124058]:            },
Oct  1 09:15:44 np0005464214 sleepy_babbage[124058]:            "type": "block",
Oct  1 09:15:44 np0005464214 sleepy_babbage[124058]:            "vg_name": "ceph_vg2"
Oct  1 09:15:44 np0005464214 sleepy_babbage[124058]:        }
Oct  1 09:15:44 np0005464214 sleepy_babbage[124058]:    ]
Oct  1 09:15:44 np0005464214 sleepy_babbage[124058]: }
Oct  1 09:15:44 np0005464214 systemd[1]: libpod-27666e7caa6e5f70ae7bdcdd7ff586a157271918bd2686c90ef9aaf5624958dd.scope: Deactivated successfully.
Oct  1 09:15:44 np0005464214 podman[124000]: 2025-10-01 13:15:44.04881903 +0000 UTC m=+0.987244726 container died 27666e7caa6e5f70ae7bdcdd7ff586a157271918bd2686c90ef9aaf5624958dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_babbage, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 09:15:44 np0005464214 systemd[1]: var-lib-containers-storage-overlay-65fb14c67e51134263e03485d8b750317d8612151057a3c82303f8774a8e41b3-merged.mount: Deactivated successfully.
Oct  1 09:15:44 np0005464214 podman[124000]: 2025-10-01 13:15:44.127397204 +0000 UTC m=+1.065822910 container remove 27666e7caa6e5f70ae7bdcdd7ff586a157271918bd2686c90ef9aaf5624958dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_babbage, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 09:15:44 np0005464214 systemd[1]: libpod-conmon-27666e7caa6e5f70ae7bdcdd7ff586a157271918bd2686c90ef9aaf5624958dd.scope: Deactivated successfully.
Oct  1 09:15:44 np0005464214 python3.9[124378]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 09:15:44 np0005464214 podman[124471]: 2025-10-01 13:15:44.822643103 +0000 UTC m=+0.054833588 container create 56e2bdeceb8ebbf547964c72a1eb9b30627e7bda6025d15157574df6185b4511 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_mccarthy, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 09:15:44 np0005464214 systemd[1]: Started libpod-conmon-56e2bdeceb8ebbf547964c72a1eb9b30627e7bda6025d15157574df6185b4511.scope.
Oct  1 09:15:44 np0005464214 podman[124471]: 2025-10-01 13:15:44.795642228 +0000 UTC m=+0.027832753 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 09:15:44 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:15:44 np0005464214 podman[124471]: 2025-10-01 13:15:44.913439827 +0000 UTC m=+0.145630352 container init 56e2bdeceb8ebbf547964c72a1eb9b30627e7bda6025d15157574df6185b4511 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_mccarthy, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 09:15:44 np0005464214 podman[124471]: 2025-10-01 13:15:44.920394895 +0000 UTC m=+0.152585370 container start 56e2bdeceb8ebbf547964c72a1eb9b30627e7bda6025d15157574df6185b4511 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_mccarthy, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 09:15:44 np0005464214 podman[124471]: 2025-10-01 13:15:44.925769061 +0000 UTC m=+0.157959536 container attach 56e2bdeceb8ebbf547964c72a1eb9b30627e7bda6025d15157574df6185b4511 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_mccarthy, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2)
Oct  1 09:15:44 np0005464214 pensive_mccarthy[124512]: 167 167
Oct  1 09:15:44 np0005464214 systemd[1]: libpod-56e2bdeceb8ebbf547964c72a1eb9b30627e7bda6025d15157574df6185b4511.scope: Deactivated successfully.
Oct  1 09:15:44 np0005464214 podman[124471]: 2025-10-01 13:15:44.928115948 +0000 UTC m=+0.160306453 container died 56e2bdeceb8ebbf547964c72a1eb9b30627e7bda6025d15157574df6185b4511 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_mccarthy, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 09:15:44 np0005464214 systemd[1]: var-lib-containers-storage-overlay-ed95a2ebb8b88619d8a3dcd87c5d5d1424fa327efb97dd7701c02af255555d67-merged.mount: Deactivated successfully.
Oct  1 09:15:44 np0005464214 podman[124471]: 2025-10-01 13:15:44.976042158 +0000 UTC m=+0.208232633 container remove 56e2bdeceb8ebbf547964c72a1eb9b30627e7bda6025d15157574df6185b4511 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_mccarthy, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS)
Oct  1 09:15:45 np0005464214 systemd[1]: libpod-conmon-56e2bdeceb8ebbf547964c72a1eb9b30627e7bda6025d15157574df6185b4511.scope: Deactivated successfully.
Oct  1 09:15:45 np0005464214 python3.9[124552]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 09:15:45 np0005464214 podman[124564]: 2025-10-01 13:15:45.231148707 +0000 UTC m=+0.061680252 container create dfeab916944186e050d03485974ca268fc8cf535732eedd49937aba63b92da9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_blackburn, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 09:15:45 np0005464214 ceph-osd[89484]: log_channel(cluster) log [DBG] : 9.15 scrub starts
Oct  1 09:15:45 np0005464214 systemd[1]: Started libpod-conmon-dfeab916944186e050d03485974ca268fc8cf535732eedd49937aba63b92da9e.scope.
Oct  1 09:15:45 np0005464214 podman[124564]: 2025-10-01 13:15:45.20804923 +0000 UTC m=+0.038580835 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 09:15:45 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:15:45 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f28d1d65621e57130b4515ea3b528677f78f3cb464db3275c6bd57d7397b26e3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 09:15:45 np0005464214 ceph-osd[89484]: log_channel(cluster) log [DBG] : 9.15 scrub ok
Oct  1 09:15:45 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f28d1d65621e57130b4515ea3b528677f78f3cb464db3275c6bd57d7397b26e3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 09:15:45 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f28d1d65621e57130b4515ea3b528677f78f3cb464db3275c6bd57d7397b26e3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 09:15:45 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f28d1d65621e57130b4515ea3b528677f78f3cb464db3275c6bd57d7397b26e3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 09:15:45 np0005464214 podman[124564]: 2025-10-01 13:15:45.322388855 +0000 UTC m=+0.152920470 container init dfeab916944186e050d03485974ca268fc8cf535732eedd49937aba63b92da9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_blackburn, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 09:15:45 np0005464214 podman[124564]: 2025-10-01 13:15:45.331809044 +0000 UTC m=+0.162340609 container start dfeab916944186e050d03485974ca268fc8cf535732eedd49937aba63b92da9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_blackburn, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 09:15:45 np0005464214 podman[124564]: 2025-10-01 13:15:45.334885025 +0000 UTC m=+0.165416620 container attach dfeab916944186e050d03485974ca268fc8cf535732eedd49937aba63b92da9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_blackburn, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 09:15:45 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:15:45 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v334: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:15:45 np0005464214 python3.9[124736]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 09:15:46 np0005464214 ceph-osd[89484]: log_channel(cluster) log [DBG] : 9.1f scrub starts
Oct  1 09:15:46 np0005464214 ceph-osd[89484]: log_channel(cluster) log [DBG] : 9.1f scrub ok
Oct  1 09:15:46 np0005464214 quirky_blackburn[124604]: {
Oct  1 09:15:46 np0005464214 quirky_blackburn[124604]:    "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982": {
Oct  1 09:15:46 np0005464214 quirky_blackburn[124604]:        "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 09:15:46 np0005464214 quirky_blackburn[124604]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  1 09:15:46 np0005464214 quirky_blackburn[124604]:        "osd_id": 0,
Oct  1 09:15:46 np0005464214 quirky_blackburn[124604]:        "osd_uuid": "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982",
Oct  1 09:15:46 np0005464214 quirky_blackburn[124604]:        "type": "bluestore"
Oct  1 09:15:46 np0005464214 quirky_blackburn[124604]:    },
Oct  1 09:15:46 np0005464214 quirky_blackburn[124604]:    "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9": {
Oct  1 09:15:46 np0005464214 quirky_blackburn[124604]:        "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 09:15:46 np0005464214 quirky_blackburn[124604]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  1 09:15:46 np0005464214 quirky_blackburn[124604]:        "osd_id": 2,
Oct  1 09:15:46 np0005464214 quirky_blackburn[124604]:        "osd_uuid": "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9",
Oct  1 09:15:46 np0005464214 quirky_blackburn[124604]:        "type": "bluestore"
Oct  1 09:15:46 np0005464214 quirky_blackburn[124604]:    },
Oct  1 09:15:46 np0005464214 quirky_blackburn[124604]:    "f5852bc7-e830-489a-b8a9-42dfbbe71426": {
Oct  1 09:15:46 np0005464214 quirky_blackburn[124604]:        "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 09:15:46 np0005464214 quirky_blackburn[124604]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  1 09:15:46 np0005464214 quirky_blackburn[124604]:        "osd_id": 1,
Oct  1 09:15:46 np0005464214 quirky_blackburn[124604]:        "osd_uuid": "f5852bc7-e830-489a-b8a9-42dfbbe71426",
Oct  1 09:15:46 np0005464214 quirky_blackburn[124604]:        "type": "bluestore"
Oct  1 09:15:46 np0005464214 quirky_blackburn[124604]:    }
Oct  1 09:15:46 np0005464214 quirky_blackburn[124604]: }
Oct  1 09:15:46 np0005464214 systemd[1]: libpod-dfeab916944186e050d03485974ca268fc8cf535732eedd49937aba63b92da9e.scope: Deactivated successfully.
Oct  1 09:15:46 np0005464214 conmon[124604]: conmon dfeab916944186e050d0 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-dfeab916944186e050d03485974ca268fc8cf535732eedd49937aba63b92da9e.scope/container/memory.events
Oct  1 09:15:46 np0005464214 podman[124564]: 2025-10-01 13:15:46.321711326 +0000 UTC m=+1.152242921 container died dfeab916944186e050d03485974ca268fc8cf535732eedd49937aba63b92da9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_blackburn, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 09:15:46 np0005464214 python3.9[124837]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 09:15:46 np0005464214 systemd[1]: var-lib-containers-storage-overlay-f28d1d65621e57130b4515ea3b528677f78f3cb464db3275c6bd57d7397b26e3-merged.mount: Deactivated successfully.
Oct  1 09:15:46 np0005464214 podman[124564]: 2025-10-01 13:15:46.591247767 +0000 UTC m=+1.421779322 container remove dfeab916944186e050d03485974ca268fc8cf535732eedd49937aba63b92da9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_blackburn, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct  1 09:15:46 np0005464214 systemd[1]: libpod-conmon-dfeab916944186e050d03485974ca268fc8cf535732eedd49937aba63b92da9e.scope: Deactivated successfully.
Oct  1 09:15:46 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  1 09:15:46 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:15:46 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  1 09:15:46 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:15:46 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev fe0619f7-482f-4d5c-b82c-c6f749800991 does not exist
Oct  1 09:15:46 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev 95d2b674-2df3-4c45-8028-118dba3b6622 does not exist
Oct  1 09:15:47 np0005464214 ceph-osd[90500]: log_channel(cluster) log [DBG] : 9.6 scrub starts
Oct  1 09:15:47 np0005464214 ceph-osd[90500]: log_channel(cluster) log [DBG] : 9.6 scrub ok
Oct  1 09:15:47 np0005464214 python3.9[125056]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  1 09:15:47 np0005464214 systemd[1]: Reloading.
Oct  1 09:15:47 np0005464214 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  1 09:15:47 np0005464214 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  1 09:15:47 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:15:47 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:15:47 np0005464214 systemd[1]: Starting Create netns directory...
Oct  1 09:15:47 np0005464214 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Oct  1 09:15:47 np0005464214 systemd[1]: netns-placeholder.service: Deactivated successfully.
Oct  1 09:15:47 np0005464214 systemd[1]: Finished Create netns directory.
Oct  1 09:15:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] Optimize plan auto_2025-10-01_13:15:47
Oct  1 09:15:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  1 09:15:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] do_upmap
Oct  1 09:15:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] pools ['backups', '.mgr', '.rgw.root', 'default.rgw.meta', 'cephfs.cephfs.meta', 'volumes', 'default.rgw.control', 'vms', 'cephfs.cephfs.data', 'images', 'default.rgw.log']
Oct  1 09:15:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] prepared 0/10 changes
Oct  1 09:15:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 09:15:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 09:15:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 09:15:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 09:15:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 09:15:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 09:15:47 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  1 09:15:47 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  1 09:15:47 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  1 09:15:47 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  1 09:15:47 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  1 09:15:47 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  1 09:15:47 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  1 09:15:47 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  1 09:15:47 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  1 09:15:47 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  1 09:15:47 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v335: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:15:47 np0005464214 ceph-osd[90500]: log_channel(cluster) log [DBG] : 9.e deep-scrub starts
Oct  1 09:15:48 np0005464214 ceph-osd[90500]: log_channel(cluster) log [DBG] : 9.e deep-scrub ok
Oct  1 09:15:48 np0005464214 python3.9[125248]: ansible-ansible.builtin.service_facts Invoked
Oct  1 09:15:48 np0005464214 network[125265]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct  1 09:15:48 np0005464214 network[125266]: 'network-scripts' will be removed from distribution in near future.
Oct  1 09:15:48 np0005464214 network[125267]: It is advised to switch to 'NetworkManager' instead for network management.
Oct  1 09:15:49 np0005464214 ceph-osd[90500]: log_channel(cluster) log [DBG] : 9.18 scrub starts
Oct  1 09:15:49 np0005464214 ceph-osd[90500]: log_channel(cluster) log [DBG] : 9.18 scrub ok
Oct  1 09:15:49 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v336: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:15:50 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:15:50 np0005464214 ceph-osd[90500]: log_channel(cluster) log [DBG] : 9.c deep-scrub starts
Oct  1 09:15:51 np0005464214 ceph-osd[90500]: log_channel(cluster) log [DBG] : 9.c deep-scrub ok
Oct  1 09:15:51 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v337: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:15:52 np0005464214 ceph-osd[90500]: log_channel(cluster) log [DBG] : 9.13 scrub starts
Oct  1 09:15:52 np0005464214 ceph-osd[90500]: log_channel(cluster) log [DBG] : 9.13 scrub ok
Oct  1 09:15:53 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v338: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:15:53 np0005464214 python3.9[125532]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 09:15:53 np0005464214 ceph-osd[90500]: log_channel(cluster) log [DBG] : 9.19 scrub starts
Oct  1 09:15:54 np0005464214 ceph-osd[90500]: log_channel(cluster) log [DBG] : 9.19 scrub ok
Oct  1 09:15:54 np0005464214 python3.9[125610]: ansible-ansible.legacy.file Invoked with mode=0600 dest=/etc/ssh/sshd_config _original_basename=sshd_config_block.j2 recurse=False state=file path=/etc/ssh/sshd_config force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 09:15:55 np0005464214 python3.9[125762]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 09:15:55 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:15:55 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v339: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:15:56 np0005464214 python3.9[125914]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 09:15:56 np0005464214 python3.9[125992]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/var/lib/edpm-config/firewall/sshd-networks.yaml _original_basename=firewall.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/sshd-networks.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 09:15:56 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] _maybe_adjust
Oct  1 09:15:56 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:15:56 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  1 09:15:56 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:15:56 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 09:15:56 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:15:56 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 09:15:56 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:15:56 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 09:15:56 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:15:56 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 09:15:56 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:15:56 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct  1 09:15:56 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:15:56 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 09:15:56 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:15:56 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  1 09:15:56 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:15:56 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  1 09:15:56 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:15:56 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 09:15:56 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:15:56 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  1 09:15:57 np0005464214 python3.9[126144]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Oct  1 09:15:57 np0005464214 systemd[1]: Starting Time & Date Service...
Oct  1 09:15:57 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v340: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:15:57 np0005464214 systemd[1]: Started Time & Date Service.
Oct  1 09:15:58 np0005464214 python3.9[126300]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 09:15:59 np0005464214 python3.9[126452]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 09:15:59 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v341: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:15:59 np0005464214 python3.9[126530]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 09:16:00 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:16:00 np0005464214 python3.9[126682]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 09:16:01 np0005464214 python3.9[126760]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.5ddyw9zy recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 09:16:01 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v342: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:16:02 np0005464214 python3.9[126912]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 09:16:02 np0005464214 python3.9[126990]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 09:16:03 np0005464214 python3.9[127142]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  1 09:16:03 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v343: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:16:04 np0005464214 python3[127295]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Oct  1 09:16:05 np0005464214 python3.9[127447]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 09:16:05 np0005464214 python3.9[127525]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 09:16:05 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:16:05 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v344: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:16:06 np0005464214 python3.9[127679]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 09:16:06 np0005464214 python3.9[127757]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 09:16:07 np0005464214 python3.9[127909]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 09:16:07 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v345: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:16:08 np0005464214 python3.9[127987]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 09:16:08 np0005464214 python3.9[128139]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 09:16:09 np0005464214 python3.9[128217]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 09:16:09 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v346: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:16:10 np0005464214 python3.9[128369]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 09:16:10 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:16:10 np0005464214 python3.9[128447]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-rules.nft _original_basename=ruleset.j2 recurse=False state=file path=/etc/nftables/edpm-rules.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 09:16:11 np0005464214 python3.9[128599]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  1 09:16:11 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v347: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:16:12 np0005464214 python3.9[128754]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 09:16:13 np0005464214 python3.9[128906]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 09:16:13 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v348: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:16:14 np0005464214 python3.9[129058]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 09:16:14 np0005464214 python3.9[129212]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Oct  1 09:16:15 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:16:15 np0005464214 python3.9[129364]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Oct  1 09:16:15 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v349: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:16:16 np0005464214 systemd[1]: session-40.scope: Deactivated successfully.
Oct  1 09:16:16 np0005464214 systemd[1]: session-40.scope: Consumed 32.686s CPU time.
Oct  1 09:16:16 np0005464214 systemd-logind[818]: Session 40 logged out. Waiting for processes to exit.
Oct  1 09:16:16 np0005464214 systemd-logind[818]: Removed session 40.
Oct  1 09:16:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 09:16:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 09:16:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 09:16:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 09:16:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 09:16:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 09:16:17 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v350: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:16:19 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v351: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:16:20 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:16:21 np0005464214 systemd-logind[818]: New session 41 of user zuul.
Oct  1 09:16:21 np0005464214 systemd[1]: Started Session 41 of User zuul.
Oct  1 09:16:21 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v352: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:16:22 np0005464214 python3.9[129545]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Oct  1 09:16:23 np0005464214 python3.9[129697]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  1 09:16:23 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v353: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:16:24 np0005464214 python3.9[129851]: ansible-ansible.builtin.slurp Invoked with src=/etc/ssh/ssh_known_hosts
Oct  1 09:16:25 np0005464214 python3.9[130005]: ansible-ansible.legacy.stat Invoked with path=/tmp/ansible.g7ci6xsn follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 09:16:25 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:16:25 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v354: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:16:25 np0005464214 python3.9[130130]: ansible-ansible.legacy.copy Invoked with dest=/tmp/ansible.g7ci6xsn mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759324584.4916277-44-227339708361216/.source.g7ci6xsn _original_basename=.8_w8m315 follow=False checksum=4cbd468ec54f05af8d39c16a8e0b3b79c637512f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 09:16:27 np0005464214 python3.9[130282]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  1 09:16:27 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v355: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:16:27 np0005464214 systemd[1]: systemd-timedated.service: Deactivated successfully.
Oct  1 09:16:28 np0005464214 python3.9[130434]: ansible-ansible.builtin.blockinfile Invoked with block=compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDQuc3bhfyzL595OFOLV247IpwwrNv1jbuEyuIMlhGVL9o/JSyWTFuOVfeOlp2bgaV1HmT029a0g6F2wKmJyCLyTmUlSHjvFu+5OYahUrcWRA5wdTNonHdPtV7OxmGUyid1pIpbNVNRW3jpvnxoiRnI9We0KEWETWj0KsbyuQEnHthqnNEbvu9ZDWHKO3WwnNiEt4TvlIrnPpVac+Q9mG4Iqcsl1qDYx9ZKPuVLtYXvEtxENwTCfYUN7Nt9v/5SUlGTGxFlLR/tBKFw98HNvii7zAkpst6QHrOpcFmWYO6LMkxVjz0aIZvNUsbfKtfnSgjUBuC6Oy/QuzhKisWbFqPENpGofP9VCenS2zfCHewrnjhYCM6/NX7PzTVH0vkxCO2C5+xXm6HIvDZPnYfSL50+z5xfZXpuB7I8mKze82lkWdpFMkvmglXmjoEQgmrbl5kPRhq0yteRkbyyR6B/0X02dml1bPXU3azBrbTQNImgJeKRX8yZGL3Bbsfl5VMT+r8=#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGgRSLYQNGHBrZk4XBkcn+kfWXhVXnPjRWsejgHIwyOG#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBMQp4ff+5X+OCwYApPStN8XgACWS/2O/jZ6Xj4flPyrz/owAZoGD9kAYm/48KAYQYbXLvyoq8TZyZOgBYKe6Lcs=#012 create=True mode=0644 path=/tmp/ansible.g7ci6xsn state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 09:16:29 np0005464214 python3.9[130588]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.g7ci6xsn' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  1 09:16:29 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v356: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:16:30 np0005464214 python3.9[130742]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.g7ci6xsn state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 09:16:30 np0005464214 systemd[1]: session-41.scope: Deactivated successfully.
Oct  1 09:16:30 np0005464214 systemd[1]: session-41.scope: Consumed 6.245s CPU time.
Oct  1 09:16:30 np0005464214 systemd-logind[818]: Session 41 logged out. Waiting for processes to exit.
Oct  1 09:16:30 np0005464214 systemd-logind[818]: Removed session 41.
Oct  1 09:16:30 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:16:31 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v357: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:16:33 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v358: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:16:35 np0005464214 systemd-logind[818]: New session 42 of user zuul.
Oct  1 09:16:35 np0005464214 systemd[1]: Started Session 42 of User zuul.
Oct  1 09:16:35 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:16:35 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v359: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:16:36 np0005464214 python3.9[130923]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  1 09:16:37 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v360: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:16:38 np0005464214 python3.9[131079]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Oct  1 09:16:39 np0005464214 python3.9[131233]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct  1 09:16:39 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v361: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:16:40 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:16:40 np0005464214 python3.9[131386]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  1 09:16:41 np0005464214 python3.9[131539]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  1 09:16:41 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v362: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:16:42 np0005464214 python3.9[131691]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 09:16:43 np0005464214 systemd[1]: session-42.scope: Deactivated successfully.
Oct  1 09:16:43 np0005464214 systemd[1]: session-42.scope: Consumed 4.207s CPU time.
Oct  1 09:16:43 np0005464214 systemd-logind[818]: Session 42 logged out. Waiting for processes to exit.
Oct  1 09:16:43 np0005464214 systemd-logind[818]: Removed session 42.
Oct  1 09:16:43 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v363: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:16:45 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:16:45 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v364: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:16:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] Optimize plan auto_2025-10-01_13:16:47
Oct  1 09:16:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  1 09:16:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] do_upmap
Oct  1 09:16:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] pools ['.rgw.root', 'volumes', 'cephfs.cephfs.data', 'vms', 'images', '.mgr', 'default.rgw.log', 'default.rgw.control', 'default.rgw.meta', 'cephfs.cephfs.meta', 'backups']
Oct  1 09:16:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] prepared 0/10 changes
Oct  1 09:16:47 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  1 09:16:47 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  1 09:16:47 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  1 09:16:47 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  1 09:16:47 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  1 09:16:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 09:16:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 09:16:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 09:16:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 09:16:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 09:16:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 09:16:47 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  1 09:16:47 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  1 09:16:47 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  1 09:16:47 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  1 09:16:47 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  1 09:16:47 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  1 09:16:47 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  1 09:16:47 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  1 09:16:47 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  1 09:16:47 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  1 09:16:47 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v365: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:16:48 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:16:48 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev 4add7e12-f0a8-4e9b-a2bf-c6b545e1a6bb does not exist
Oct  1 09:16:48 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev a167fa7d-9350-4ce4-9661-a3d19f97a952 does not exist
Oct  1 09:16:48 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev 07c6cde4-e5a8-41ee-b5e9-a74cb50d6b3e does not exist
Oct  1 09:16:48 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  1 09:16:48 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  1 09:16:48 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  1 09:16:48 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  1 09:16:48 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  1 09:16:48 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  1 09:16:48 np0005464214 systemd-logind[818]: New session 43 of user zuul.
Oct  1 09:16:48 np0005464214 systemd[1]: Started Session 43 of User zuul.
Oct  1 09:16:48 np0005464214 podman[131990]: 2025-10-01 13:16:48.695116415 +0000 UTC m=+0.029153010 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 09:16:49 np0005464214 podman[131990]: 2025-10-01 13:16:49.038613127 +0000 UTC m=+0.372649682 container create 5b2714e329f2b375c481ef674421669bdf97e893d5c480b35d97baef9da94479 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_benz, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 09:16:49 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  1 09:16:49 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:16:49 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  1 09:16:49 np0005464214 systemd[1]: Started libpod-conmon-5b2714e329f2b375c481ef674421669bdf97e893d5c480b35d97baef9da94479.scope.
Oct  1 09:16:49 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:16:49 np0005464214 podman[131990]: 2025-10-01 13:16:49.204910187 +0000 UTC m=+0.538946772 container init 5b2714e329f2b375c481ef674421669bdf97e893d5c480b35d97baef9da94479 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_benz, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct  1 09:16:49 np0005464214 podman[131990]: 2025-10-01 13:16:49.215100084 +0000 UTC m=+0.549136629 container start 5b2714e329f2b375c481ef674421669bdf97e893d5c480b35d97baef9da94479 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_benz, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 09:16:49 np0005464214 admiring_benz[132059]: 167 167
Oct  1 09:16:49 np0005464214 systemd[1]: libpod-5b2714e329f2b375c481ef674421669bdf97e893d5c480b35d97baef9da94479.scope: Deactivated successfully.
Oct  1 09:16:49 np0005464214 podman[131990]: 2025-10-01 13:16:49.245396783 +0000 UTC m=+0.579433308 container attach 5b2714e329f2b375c481ef674421669bdf97e893d5c480b35d97baef9da94479 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_benz, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Oct  1 09:16:49 np0005464214 podman[131990]: 2025-10-01 13:16:49.246632166 +0000 UTC m=+0.580668701 container died 5b2714e329f2b375c481ef674421669bdf97e893d5c480b35d97baef9da94479 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_benz, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct  1 09:16:49 np0005464214 systemd[1]: var-lib-containers-storage-overlay-aec32489dbe62dd4ee16e10aed14ca9c6ad851e6cabcac2b1bac44b0db6488a0-merged.mount: Deactivated successfully.
Oct  1 09:16:49 np0005464214 podman[131990]: 2025-10-01 13:16:49.474049602 +0000 UTC m=+0.808086107 container remove 5b2714e329f2b375c481ef674421669bdf97e893d5c480b35d97baef9da94479 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_benz, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct  1 09:16:49 np0005464214 systemd[1]: libpod-conmon-5b2714e329f2b375c481ef674421669bdf97e893d5c480b35d97baef9da94479.scope: Deactivated successfully.
Oct  1 09:16:49 np0005464214 podman[132182]: 2025-10-01 13:16:49.769984004 +0000 UTC m=+0.108048388 container create 696fd7e57cdfdb118c6db9ab6583005dce6bf4db23e89ad0349f94fd4cf4be6d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_beaver, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default)
Oct  1 09:16:49 np0005464214 podman[132182]: 2025-10-01 13:16:49.709065684 +0000 UTC m=+0.047130088 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 09:16:49 np0005464214 systemd[1]: Started libpod-conmon-696fd7e57cdfdb118c6db9ab6583005dce6bf4db23e89ad0349f94fd4cf4be6d.scope.
Oct  1 09:16:49 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v366: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:16:49 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:16:49 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2284e4528ab7ea536c09db61b743c54f40b3e799d6829408e155318e0686935a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 09:16:49 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2284e4528ab7ea536c09db61b743c54f40b3e799d6829408e155318e0686935a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 09:16:49 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2284e4528ab7ea536c09db61b743c54f40b3e799d6829408e155318e0686935a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 09:16:49 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2284e4528ab7ea536c09db61b743c54f40b3e799d6829408e155318e0686935a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 09:16:49 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2284e4528ab7ea536c09db61b743c54f40b3e799d6829408e155318e0686935a/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  1 09:16:49 np0005464214 python3.9[132176]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  1 09:16:49 np0005464214 podman[132182]: 2025-10-01 13:16:49.93387521 +0000 UTC m=+0.271939624 container init 696fd7e57cdfdb118c6db9ab6583005dce6bf4db23e89ad0349f94fd4cf4be6d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_beaver, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 09:16:49 np0005464214 podman[132182]: 2025-10-01 13:16:49.946253053 +0000 UTC m=+0.284317457 container start 696fd7e57cdfdb118c6db9ab6583005dce6bf4db23e89ad0349f94fd4cf4be6d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_beaver, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default)
Oct  1 09:16:49 np0005464214 podman[132182]: 2025-10-01 13:16:49.971582558 +0000 UTC m=+0.309646992 container attach 696fd7e57cdfdb118c6db9ab6583005dce6bf4db23e89ad0349f94fd4cf4be6d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_beaver, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct  1 09:16:50 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:16:50 np0005464214 pensive_beaver[132199]: --> passed data devices: 0 physical, 3 LVM
Oct  1 09:16:50 np0005464214 pensive_beaver[132199]: --> relative data size: 1.0
Oct  1 09:16:50 np0005464214 pensive_beaver[132199]: --> All data devices are unavailable
Oct  1 09:16:50 np0005464214 systemd[1]: libpod-696fd7e57cdfdb118c6db9ab6583005dce6bf4db23e89ad0349f94fd4cf4be6d.scope: Deactivated successfully.
Oct  1 09:16:50 np0005464214 podman[132182]: 2025-10-01 13:16:50.941920464 +0000 UTC m=+1.279984848 container died 696fd7e57cdfdb118c6db9ab6583005dce6bf4db23e89ad0349f94fd4cf4be6d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_beaver, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct  1 09:16:51 np0005464214 systemd[1]: var-lib-containers-storage-overlay-2284e4528ab7ea536c09db61b743c54f40b3e799d6829408e155318e0686935a-merged.mount: Deactivated successfully.
Oct  1 09:16:51 np0005464214 python3.9[132373]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct  1 09:16:51 np0005464214 podman[132182]: 2025-10-01 13:16:51.153183816 +0000 UTC m=+1.491248200 container remove 696fd7e57cdfdb118c6db9ab6583005dce6bf4db23e89ad0349f94fd4cf4be6d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_beaver, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True)
Oct  1 09:16:51 np0005464214 systemd[1]: libpod-conmon-696fd7e57cdfdb118c6db9ab6583005dce6bf4db23e89ad0349f94fd4cf4be6d.scope: Deactivated successfully.
Oct  1 09:16:51 np0005464214 systemd[1]: session-18.scope: Deactivated successfully.
Oct  1 09:16:51 np0005464214 systemd[1]: session-18.scope: Consumed 1min 26.263s CPU time.
Oct  1 09:16:51 np0005464214 systemd-logind[818]: Session 18 logged out. Waiting for processes to exit.
Oct  1 09:16:51 np0005464214 systemd-logind[818]: Removed session 18.
Oct  1 09:16:51 np0005464214 podman[132620]: 2025-10-01 13:16:51.795336114 +0000 UTC m=+0.059805700 container create 3bd42d0a5aebee315cfd3d719ea295ac476b417a39dfbbdf269a6d1388f2ed72 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_babbage, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct  1 09:16:51 np0005464214 systemd[1]: Started libpod-conmon-3bd42d0a5aebee315cfd3d719ea295ac476b417a39dfbbdf269a6d1388f2ed72.scope.
Oct  1 09:16:51 np0005464214 podman[132620]: 2025-10-01 13:16:51.760365473 +0000 UTC m=+0.024835079 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 09:16:51 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:16:51 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v367: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:16:51 np0005464214 podman[132620]: 2025-10-01 13:16:51.920106705 +0000 UTC m=+0.184576321 container init 3bd42d0a5aebee315cfd3d719ea295ac476b417a39dfbbdf269a6d1388f2ed72 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_babbage, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct  1 09:16:51 np0005464214 podman[132620]: 2025-10-01 13:16:51.92798968 +0000 UTC m=+0.192459266 container start 3bd42d0a5aebee315cfd3d719ea295ac476b417a39dfbbdf269a6d1388f2ed72 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_babbage, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 09:16:51 np0005464214 objective_babbage[132636]: 167 167
Oct  1 09:16:51 np0005464214 systemd[1]: libpod-3bd42d0a5aebee315cfd3d719ea295ac476b417a39dfbbdf269a6d1388f2ed72.scope: Deactivated successfully.
Oct  1 09:16:51 np0005464214 podman[132620]: 2025-10-01 13:16:51.962940281 +0000 UTC m=+0.227409907 container attach 3bd42d0a5aebee315cfd3d719ea295ac476b417a39dfbbdf269a6d1388f2ed72 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_babbage, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 09:16:51 np0005464214 podman[132620]: 2025-10-01 13:16:51.964543267 +0000 UTC m=+0.229012893 container died 3bd42d0a5aebee315cfd3d719ea295ac476b417a39dfbbdf269a6d1388f2ed72 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_babbage, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 09:16:51 np0005464214 python3.9[132619]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Oct  1 09:16:52 np0005464214 systemd[1]: var-lib-containers-storage-overlay-b134e942197b56abf6cc07ec97a2d790aeaeeca3ed5ab3f7fa2754eaa8d5b9b7-merged.mount: Deactivated successfully.
Oct  1 09:16:52 np0005464214 podman[132620]: 2025-10-01 13:16:52.20045127 +0000 UTC m=+0.464920886 container remove 3bd42d0a5aebee315cfd3d719ea295ac476b417a39dfbbdf269a6d1388f2ed72 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_babbage, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 09:16:52 np0005464214 systemd[1]: libpod-conmon-3bd42d0a5aebee315cfd3d719ea295ac476b417a39dfbbdf269a6d1388f2ed72.scope: Deactivated successfully.
Oct  1 09:16:52 np0005464214 podman[132661]: 2025-10-01 13:16:52.437061318 +0000 UTC m=+0.089155606 container create 2a6a54808efa6eac36966617dcab589a9a8128aeeafcd4e501db2f3f2e751a1d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_yonath, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct  1 09:16:52 np0005464214 podman[132661]: 2025-10-01 13:16:52.39188575 +0000 UTC m=+0.043980068 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 09:16:52 np0005464214 systemd[1]: Started libpod-conmon-2a6a54808efa6eac36966617dcab589a9a8128aeeafcd4e501db2f3f2e751a1d.scope.
Oct  1 09:16:52 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:16:52 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a024fe2757e6628c310d10a4f01cbe7c00ebf9344489c1c1952a43a15fe912ee/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 09:16:52 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a024fe2757e6628c310d10a4f01cbe7c00ebf9344489c1c1952a43a15fe912ee/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 09:16:52 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a024fe2757e6628c310d10a4f01cbe7c00ebf9344489c1c1952a43a15fe912ee/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 09:16:52 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a024fe2757e6628c310d10a4f01cbe7c00ebf9344489c1c1952a43a15fe912ee/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 09:16:52 np0005464214 podman[132661]: 2025-10-01 13:16:52.601077969 +0000 UTC m=+0.253172257 container init 2a6a54808efa6eac36966617dcab589a9a8128aeeafcd4e501db2f3f2e751a1d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_yonath, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 09:16:52 np0005464214 podman[132661]: 2025-10-01 13:16:52.613773063 +0000 UTC m=+0.265867341 container start 2a6a54808efa6eac36966617dcab589a9a8128aeeafcd4e501db2f3f2e751a1d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_yonath, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 09:16:52 np0005464214 podman[132661]: 2025-10-01 13:16:52.653809912 +0000 UTC m=+0.305904220 container attach 2a6a54808efa6eac36966617dcab589a9a8128aeeafcd4e501db2f3f2e751a1d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_yonath, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  1 09:16:53 np0005464214 infallible_yonath[132677]: {
Oct  1 09:16:53 np0005464214 infallible_yonath[132677]:    "0": [
Oct  1 09:16:53 np0005464214 infallible_yonath[132677]:        {
Oct  1 09:16:53 np0005464214 infallible_yonath[132677]:            "devices": [
Oct  1 09:16:53 np0005464214 infallible_yonath[132677]:                "/dev/loop3"
Oct  1 09:16:53 np0005464214 infallible_yonath[132677]:            ],
Oct  1 09:16:53 np0005464214 infallible_yonath[132677]:            "lv_name": "ceph_lv0",
Oct  1 09:16:53 np0005464214 infallible_yonath[132677]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  1 09:16:53 np0005464214 infallible_yonath[132677]:            "lv_size": "21470642176",
Oct  1 09:16:53 np0005464214 infallible_yonath[132677]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 09:16:53 np0005464214 infallible_yonath[132677]:            "lv_uuid": "wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS",
Oct  1 09:16:53 np0005464214 infallible_yonath[132677]:            "name": "ceph_lv0",
Oct  1 09:16:53 np0005464214 infallible_yonath[132677]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  1 09:16:53 np0005464214 infallible_yonath[132677]:            "tags": {
Oct  1 09:16:53 np0005464214 infallible_yonath[132677]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  1 09:16:53 np0005464214 infallible_yonath[132677]:                "ceph.block_uuid": "wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS",
Oct  1 09:16:53 np0005464214 infallible_yonath[132677]:                "ceph.cephx_lockbox_secret": "",
Oct  1 09:16:53 np0005464214 infallible_yonath[132677]:                "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 09:16:53 np0005464214 infallible_yonath[132677]:                "ceph.cluster_name": "ceph",
Oct  1 09:16:53 np0005464214 infallible_yonath[132677]:                "ceph.crush_device_class": "",
Oct  1 09:16:53 np0005464214 infallible_yonath[132677]:                "ceph.encrypted": "0",
Oct  1 09:16:53 np0005464214 infallible_yonath[132677]:                "ceph.osd_fsid": "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982",
Oct  1 09:16:53 np0005464214 infallible_yonath[132677]:                "ceph.osd_id": "0",
Oct  1 09:16:53 np0005464214 infallible_yonath[132677]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 09:16:53 np0005464214 infallible_yonath[132677]:                "ceph.type": "block",
Oct  1 09:16:53 np0005464214 infallible_yonath[132677]:                "ceph.vdo": "0"
Oct  1 09:16:53 np0005464214 infallible_yonath[132677]:            },
Oct  1 09:16:53 np0005464214 infallible_yonath[132677]:            "type": "block",
Oct  1 09:16:53 np0005464214 infallible_yonath[132677]:            "vg_name": "ceph_vg0"
Oct  1 09:16:53 np0005464214 infallible_yonath[132677]:        }
Oct  1 09:16:53 np0005464214 infallible_yonath[132677]:    ],
Oct  1 09:16:53 np0005464214 infallible_yonath[132677]:    "1": [
Oct  1 09:16:53 np0005464214 infallible_yonath[132677]:        {
Oct  1 09:16:53 np0005464214 infallible_yonath[132677]:            "devices": [
Oct  1 09:16:53 np0005464214 infallible_yonath[132677]:                "/dev/loop4"
Oct  1 09:16:53 np0005464214 infallible_yonath[132677]:            ],
Oct  1 09:16:53 np0005464214 infallible_yonath[132677]:            "lv_name": "ceph_lv1",
Oct  1 09:16:53 np0005464214 infallible_yonath[132677]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  1 09:16:53 np0005464214 infallible_yonath[132677]:            "lv_size": "21470642176",
Oct  1 09:16:53 np0005464214 infallible_yonath[132677]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f5852bc7-e830-489a-b8a9-42dfbbe71426,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 09:16:53 np0005464214 infallible_yonath[132677]:            "lv_uuid": "BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY",
Oct  1 09:16:53 np0005464214 infallible_yonath[132677]:            "name": "ceph_lv1",
Oct  1 09:16:53 np0005464214 infallible_yonath[132677]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  1 09:16:53 np0005464214 infallible_yonath[132677]:            "tags": {
Oct  1 09:16:53 np0005464214 infallible_yonath[132677]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  1 09:16:53 np0005464214 infallible_yonath[132677]:                "ceph.block_uuid": "BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY",
Oct  1 09:16:53 np0005464214 infallible_yonath[132677]:                "ceph.cephx_lockbox_secret": "",
Oct  1 09:16:53 np0005464214 infallible_yonath[132677]:                "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 09:16:53 np0005464214 infallible_yonath[132677]:                "ceph.cluster_name": "ceph",
Oct  1 09:16:53 np0005464214 infallible_yonath[132677]:                "ceph.crush_device_class": "",
Oct  1 09:16:53 np0005464214 infallible_yonath[132677]:                "ceph.encrypted": "0",
Oct  1 09:16:53 np0005464214 infallible_yonath[132677]:                "ceph.osd_fsid": "f5852bc7-e830-489a-b8a9-42dfbbe71426",
Oct  1 09:16:53 np0005464214 infallible_yonath[132677]:                "ceph.osd_id": "1",
Oct  1 09:16:53 np0005464214 infallible_yonath[132677]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 09:16:53 np0005464214 infallible_yonath[132677]:                "ceph.type": "block",
Oct  1 09:16:53 np0005464214 infallible_yonath[132677]:                "ceph.vdo": "0"
Oct  1 09:16:53 np0005464214 infallible_yonath[132677]:            },
Oct  1 09:16:53 np0005464214 infallible_yonath[132677]:            "type": "block",
Oct  1 09:16:53 np0005464214 infallible_yonath[132677]:            "vg_name": "ceph_vg1"
Oct  1 09:16:53 np0005464214 infallible_yonath[132677]:        }
Oct  1 09:16:53 np0005464214 infallible_yonath[132677]:    ],
Oct  1 09:16:53 np0005464214 infallible_yonath[132677]:    "2": [
Oct  1 09:16:53 np0005464214 infallible_yonath[132677]:        {
Oct  1 09:16:53 np0005464214 infallible_yonath[132677]:            "devices": [
Oct  1 09:16:53 np0005464214 infallible_yonath[132677]:                "/dev/loop5"
Oct  1 09:16:53 np0005464214 infallible_yonath[132677]:            ],
Oct  1 09:16:53 np0005464214 infallible_yonath[132677]:            "lv_name": "ceph_lv2",
Oct  1 09:16:53 np0005464214 infallible_yonath[132677]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  1 09:16:53 np0005464214 infallible_yonath[132677]:            "lv_size": "21470642176",
Oct  1 09:16:53 np0005464214 infallible_yonath[132677]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c4c937e2-a8a8-47c3-af37-fdedb6fff1f9,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 09:16:53 np0005464214 infallible_yonath[132677]:            "lv_uuid": "mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK",
Oct  1 09:16:53 np0005464214 infallible_yonath[132677]:            "name": "ceph_lv2",
Oct  1 09:16:53 np0005464214 infallible_yonath[132677]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  1 09:16:53 np0005464214 infallible_yonath[132677]:            "tags": {
Oct  1 09:16:53 np0005464214 infallible_yonath[132677]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  1 09:16:53 np0005464214 infallible_yonath[132677]:                "ceph.block_uuid": "mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK",
Oct  1 09:16:53 np0005464214 infallible_yonath[132677]:                "ceph.cephx_lockbox_secret": "",
Oct  1 09:16:53 np0005464214 infallible_yonath[132677]:                "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 09:16:53 np0005464214 infallible_yonath[132677]:                "ceph.cluster_name": "ceph",
Oct  1 09:16:53 np0005464214 infallible_yonath[132677]:                "ceph.crush_device_class": "",
Oct  1 09:16:53 np0005464214 infallible_yonath[132677]:                "ceph.encrypted": "0",
Oct  1 09:16:53 np0005464214 infallible_yonath[132677]:                "ceph.osd_fsid": "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9",
Oct  1 09:16:53 np0005464214 infallible_yonath[132677]:                "ceph.osd_id": "2",
Oct  1 09:16:53 np0005464214 infallible_yonath[132677]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 09:16:53 np0005464214 infallible_yonath[132677]:                "ceph.type": "block",
Oct  1 09:16:53 np0005464214 infallible_yonath[132677]:                "ceph.vdo": "0"
Oct  1 09:16:53 np0005464214 infallible_yonath[132677]:            },
Oct  1 09:16:53 np0005464214 infallible_yonath[132677]:            "type": "block",
Oct  1 09:16:53 np0005464214 infallible_yonath[132677]:            "vg_name": "ceph_vg2"
Oct  1 09:16:53 np0005464214 infallible_yonath[132677]:        }
Oct  1 09:16:53 np0005464214 infallible_yonath[132677]:    ]
Oct  1 09:16:53 np0005464214 infallible_yonath[132677]: }
Oct  1 09:16:53 np0005464214 systemd[1]: libpod-2a6a54808efa6eac36966617dcab589a9a8128aeeafcd4e501db2f3f2e751a1d.scope: Deactivated successfully.
Oct  1 09:16:53 np0005464214 podman[132661]: 2025-10-01 13:16:53.354305689 +0000 UTC m=+1.006399977 container died 2a6a54808efa6eac36966617dcab589a9a8128aeeafcd4e501db2f3f2e751a1d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_yonath, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 09:16:53 np0005464214 systemd[1]: var-lib-containers-storage-overlay-a024fe2757e6628c310d10a4f01cbe7c00ebf9344489c1c1952a43a15fe912ee-merged.mount: Deactivated successfully.
Oct  1 09:16:53 np0005464214 podman[132661]: 2025-10-01 13:16:53.623521776 +0000 UTC m=+1.275616054 container remove 2a6a54808efa6eac36966617dcab589a9a8128aeeafcd4e501db2f3f2e751a1d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_yonath, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Oct  1 09:16:53 np0005464214 systemd[1]: libpod-conmon-2a6a54808efa6eac36966617dcab589a9a8128aeeafcd4e501db2f3f2e751a1d.scope: Deactivated successfully.
Oct  1 09:16:53 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v368: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:16:54 np0005464214 python3.9[132923]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  1 09:16:54 np0005464214 podman[132993]: 2025-10-01 13:16:54.405407148 +0000 UTC m=+0.070782365 container create 98279e056d2157714c95baaf9241d917cab1e12b3c7178d8c2a513f932513209 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_moser, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Oct  1 09:16:54 np0005464214 podman[132993]: 2025-10-01 13:16:54.355522895 +0000 UTC m=+0.020898132 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 09:16:54 np0005464214 systemd[1]: Started libpod-conmon-98279e056d2157714c95baaf9241d917cab1e12b3c7178d8c2a513f932513209.scope.
Oct  1 09:16:54 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:16:54 np0005464214 podman[132993]: 2025-10-01 13:16:54.581684738 +0000 UTC m=+0.247060045 container init 98279e056d2157714c95baaf9241d917cab1e12b3c7178d8c2a513f932513209 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_moser, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 09:16:54 np0005464214 podman[132993]: 2025-10-01 13:16:54.589541742 +0000 UTC m=+0.254916929 container start 98279e056d2157714c95baaf9241d917cab1e12b3c7178d8c2a513f932513209 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_moser, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 09:16:54 np0005464214 modest_moser[133009]: 167 167
Oct  1 09:16:54 np0005464214 systemd[1]: libpod-98279e056d2157714c95baaf9241d917cab1e12b3c7178d8c2a513f932513209.scope: Deactivated successfully.
Oct  1 09:16:54 np0005464214 podman[132993]: 2025-10-01 13:16:54.627629362 +0000 UTC m=+0.293004649 container attach 98279e056d2157714c95baaf9241d917cab1e12b3c7178d8c2a513f932513209 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_moser, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef)
Oct  1 09:16:54 np0005464214 podman[132993]: 2025-10-01 13:16:54.630007246 +0000 UTC m=+0.295382463 container died 98279e056d2157714c95baaf9241d917cab1e12b3c7178d8c2a513f932513209 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_moser, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 09:16:54 np0005464214 systemd[1]: var-lib-containers-storage-overlay-886b0a14d1afc40cb8fa2e7e05e5f859a761387bc823ae29d2aec03caff53c96-merged.mount: Deactivated successfully.
Oct  1 09:16:54 np0005464214 podman[132993]: 2025-10-01 13:16:54.812872806 +0000 UTC m=+0.478248023 container remove 98279e056d2157714c95baaf9241d917cab1e12b3c7178d8c2a513f932513209 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_moser, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Oct  1 09:16:54 np0005464214 systemd[1]: libpod-conmon-98279e056d2157714c95baaf9241d917cab1e12b3c7178d8c2a513f932513209.scope: Deactivated successfully.
Oct  1 09:16:55 np0005464214 podman[133108]: 2025-10-01 13:16:55.072391815 +0000 UTC m=+0.080567558 container create 3a97e6dd16413e7402e4861607421b073cd385488f6146dfdbcd5fd6f3ea123d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_agnesi, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct  1 09:16:55 np0005464214 podman[133108]: 2025-10-01 13:16:55.023850748 +0000 UTC m=+0.032026551 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 09:16:55 np0005464214 systemd[1]: Started libpod-conmon-3a97e6dd16413e7402e4861607421b073cd385488f6146dfdbcd5fd6f3ea123d.scope.
Oct  1 09:16:55 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:16:55 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/006821c121138a927416f0096ff5694f43dbc8c7decb1544bab76a91e36ef9aa/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 09:16:55 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/006821c121138a927416f0096ff5694f43dbc8c7decb1544bab76a91e36ef9aa/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 09:16:55 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/006821c121138a927416f0096ff5694f43dbc8c7decb1544bab76a91e36ef9aa/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 09:16:55 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/006821c121138a927416f0096ff5694f43dbc8c7decb1544bab76a91e36ef9aa/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 09:16:55 np0005464214 podman[133108]: 2025-10-01 13:16:55.24656599 +0000 UTC m=+0.254741793 container init 3a97e6dd16413e7402e4861607421b073cd385488f6146dfdbcd5fd6f3ea123d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_agnesi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 09:16:55 np0005464214 podman[133108]: 2025-10-01 13:16:55.259102458 +0000 UTC m=+0.267278201 container start 3a97e6dd16413e7402e4861607421b073cd385488f6146dfdbcd5fd6f3ea123d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_agnesi, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 09:16:55 np0005464214 podman[133108]: 2025-10-01 13:16:55.326320387 +0000 UTC m=+0.334496130 container attach 3a97e6dd16413e7402e4861607421b073cd385488f6146dfdbcd5fd6f3ea123d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_agnesi, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 09:16:55 np0005464214 python3.9[133203]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Oct  1 09:16:55 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:16:55 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v369: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:16:56 np0005464214 intelligent_agnesi[133137]: {
Oct  1 09:16:56 np0005464214 intelligent_agnesi[133137]:    "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982": {
Oct  1 09:16:56 np0005464214 intelligent_agnesi[133137]:        "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 09:16:56 np0005464214 intelligent_agnesi[133137]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  1 09:16:56 np0005464214 intelligent_agnesi[133137]:        "osd_id": 0,
Oct  1 09:16:56 np0005464214 intelligent_agnesi[133137]:        "osd_uuid": "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982",
Oct  1 09:16:56 np0005464214 intelligent_agnesi[133137]:        "type": "bluestore"
Oct  1 09:16:56 np0005464214 intelligent_agnesi[133137]:    },
Oct  1 09:16:56 np0005464214 intelligent_agnesi[133137]:    "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9": {
Oct  1 09:16:56 np0005464214 intelligent_agnesi[133137]:        "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 09:16:56 np0005464214 intelligent_agnesi[133137]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  1 09:16:56 np0005464214 intelligent_agnesi[133137]:        "osd_id": 2,
Oct  1 09:16:56 np0005464214 intelligent_agnesi[133137]:        "osd_uuid": "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9",
Oct  1 09:16:56 np0005464214 intelligent_agnesi[133137]:        "type": "bluestore"
Oct  1 09:16:56 np0005464214 intelligent_agnesi[133137]:    },
Oct  1 09:16:56 np0005464214 intelligent_agnesi[133137]:    "f5852bc7-e830-489a-b8a9-42dfbbe71426": {
Oct  1 09:16:56 np0005464214 intelligent_agnesi[133137]:        "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 09:16:56 np0005464214 intelligent_agnesi[133137]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  1 09:16:56 np0005464214 intelligent_agnesi[133137]:        "osd_id": 1,
Oct  1 09:16:56 np0005464214 intelligent_agnesi[133137]:        "osd_uuid": "f5852bc7-e830-489a-b8a9-42dfbbe71426",
Oct  1 09:16:56 np0005464214 intelligent_agnesi[133137]:        "type": "bluestore"
Oct  1 09:16:56 np0005464214 intelligent_agnesi[133137]:    }
Oct  1 09:16:56 np0005464214 intelligent_agnesi[133137]: }
Oct  1 09:16:56 np0005464214 systemd[1]: libpod-3a97e6dd16413e7402e4861607421b073cd385488f6146dfdbcd5fd6f3ea123d.scope: Deactivated successfully.
Oct  1 09:16:56 np0005464214 podman[133108]: 2025-10-01 13:16:56.311701899 +0000 UTC m=+1.319877642 container died 3a97e6dd16413e7402e4861607421b073cd385488f6146dfdbcd5fd6f3ea123d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_agnesi, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Oct  1 09:16:56 np0005464214 systemd[1]: libpod-3a97e6dd16413e7402e4861607421b073cd385488f6146dfdbcd5fd6f3ea123d.scope: Consumed 1.056s CPU time.
Oct  1 09:16:56 np0005464214 systemd[1]: var-lib-containers-storage-overlay-006821c121138a927416f0096ff5694f43dbc8c7decb1544bab76a91e36ef9aa-merged.mount: Deactivated successfully.
Oct  1 09:16:56 np0005464214 podman[133108]: 2025-10-01 13:16:56.395925232 +0000 UTC m=+1.404100955 container remove 3a97e6dd16413e7402e4861607421b073cd385488f6146dfdbcd5fd6f3ea123d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_agnesi, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 09:16:56 np0005464214 systemd[1]: libpod-conmon-3a97e6dd16413e7402e4861607421b073cd385488f6146dfdbcd5fd6f3ea123d.scope: Deactivated successfully.
Oct  1 09:16:56 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  1 09:16:56 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:16:56 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  1 09:16:56 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:16:56 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev 70fc0b91-1c27-479e-b17c-d914cdd029b6 does not exist
Oct  1 09:16:56 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev aaa8d604-9c05-42ac-ae50-d9e6ac5344a5 does not exist
Oct  1 09:16:56 np0005464214 python3.9[133379]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  1 09:16:56 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] _maybe_adjust
Oct  1 09:16:56 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:16:56 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  1 09:16:56 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:16:56 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 09:16:56 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:16:56 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 09:16:56 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:16:56 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 09:16:56 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:16:56 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 09:16:56 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:16:56 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct  1 09:16:56 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:16:56 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 09:16:56 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:16:56 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  1 09:16:56 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:16:56 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  1 09:16:56 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:16:56 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 09:16:56 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:16:56 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  1 09:16:57 np0005464214 python3.9[133595]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/config follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  1 09:16:57 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:16:57 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:16:57 np0005464214 systemd[1]: session-43.scope: Deactivated successfully.
Oct  1 09:16:57 np0005464214 systemd[1]: session-43.scope: Consumed 6.259s CPU time.
Oct  1 09:16:57 np0005464214 systemd-logind[818]: Session 43 logged out. Waiting for processes to exit.
Oct  1 09:16:57 np0005464214 systemd-logind[818]: Removed session 43.
Oct  1 09:16:57 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v370: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:16:59 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v371: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:17:00 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:17:01 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v372: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:17:02 np0005464214 systemd-logind[818]: New session 44 of user zuul.
Oct  1 09:17:02 np0005464214 systemd[1]: Started Session 44 of User zuul.
Oct  1 09:17:03 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v373: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:17:03 np0005464214 python3.9[133774]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  1 09:17:05 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:17:05 np0005464214 python3.9[133930]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  1 09:17:05 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v374: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:17:06 np0005464214 python3.9[134082]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  1 09:17:07 np0005464214 python3.9[134235]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 09:17:07 np0005464214 python3.9[134358]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759324626.4382172-65-224804694277140/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=b9fa2a794cb9bb11a680f9f94d271635d0bb57f3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 09:17:07 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v375: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:17:08 np0005464214 python3.9[134510]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 09:17:08 np0005464214 python3.9[134633]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759324627.8701072-65-64294120392274/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=ad8d658a88600c09a5e73bc2aedff1b9c3ca8413 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 09:17:09 np0005464214 python3.9[134785]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 09:17:09 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v376: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:17:10 np0005464214 python3.9[134908]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759324629.0596507-65-29320201257641/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=a9c9520c160593e5fde00102171f94da1aff2a8f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 09:17:10 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:17:10 np0005464214 python3.9[135060]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  1 09:17:11 np0005464214 python3.9[135212]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  1 09:17:11 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v377: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:17:12 np0005464214 python3.9[135364]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 09:17:12 np0005464214 python3.9[135487]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759324631.622709-124-43326251190952/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=b8ff7b3142d7df68d546af11a3e168a78877cc9d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 09:17:13 np0005464214 python3.9[135639]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 09:17:13 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v378: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:17:14 np0005464214 python3.9[135762]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759324632.8767245-124-195726650198005/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=ef3deb6a220b9ac95487eeab2c91b47cd4f38015 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 09:17:14 np0005464214 python3.9[135914]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 09:17:15 np0005464214 python3.9[136037]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759324634.2179275-124-1610352530114/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=2ea0a31ae214ecaf6d723593ef235e827d28ae61 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 09:17:15 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:17:15 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v379: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:17:16 np0005464214 python3.9[136191]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  1 09:17:16 np0005464214 python3.9[136343]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  1 09:17:17 np0005464214 python3.9[136497]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 09:17:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 09:17:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 09:17:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 09:17:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 09:17:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 09:17:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 09:17:17 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v380: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:17:18 np0005464214 python3.9[136620]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759324637.087863-183-52005592253955/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=80e17c4f9d1023d4423514bc3bb574c53d852795 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 09:17:19 np0005464214 python3.9[136772]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 09:17:19 np0005464214 python3.9[136895]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759324638.6106455-183-187100655429771/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=ef3deb6a220b9ac95487eeab2c91b47cd4f38015 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 09:17:19 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v381: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:17:20 np0005464214 python3.9[137047]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 09:17:20 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:17:20 np0005464214 python3.9[137170]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759324639.8477578-183-181681424725932/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=089a67abf9d2f0871caa69cb06eab193e51fbefe backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 09:17:21 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v382: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:17:22 np0005464214 python3.9[137322]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  1 09:17:22 np0005464214 python3.9[137474]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 09:17:23 np0005464214 python3.9[137597]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759324642.2245195-251-78918481527437/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=9976f3964d5bacb9b657222aaa8308ffa5d61acc backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 09:17:23 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v383: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:17:24 np0005464214 python3.9[137749]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  1 09:17:24 np0005464214 python3.9[137901]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 09:17:25 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:17:25 np0005464214 python3.9[138024]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759324644.3884854-275-159752533263008/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=9976f3964d5bacb9b657222aaa8308ffa5d61acc backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 09:17:25 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v384: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:17:26 np0005464214 python3.9[138176]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/neutron-metadata setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  1 09:17:27 np0005464214 python3.9[138328]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 09:17:27 np0005464214 python3.9[138451]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759324646.5371292-299-76069280003779/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=9976f3964d5bacb9b657222aaa8308ffa5d61acc backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 09:17:27 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v385: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:17:28 np0005464214 python3.9[138603]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/bootstrap setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  1 09:17:29 np0005464214 python3.9[138755]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 09:17:29 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v386: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:17:30 np0005464214 python3.9[138878]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759324648.8199792-323-94025332381097/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=9976f3964d5bacb9b657222aaa8308ffa5d61acc backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 09:17:30 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:17:30 np0005464214 python3.9[139030]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/repo-setup setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  1 09:17:31 np0005464214 python3.9[139182]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 09:17:31 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v387: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:17:32 np0005464214 python3.9[139305]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759324651.0733912-347-198621363234078/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=9976f3964d5bacb9b657222aaa8308ffa5d61acc backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 09:17:32 np0005464214 python3.9[139457]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  1 09:17:33 np0005464214 python3.9[139609]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 09:17:33 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v388: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:17:34 np0005464214 python3.9[139732]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759324653.0915656-371-32166083566747/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=9976f3964d5bacb9b657222aaa8308ffa5d61acc backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 09:17:34 np0005464214 systemd-logind[818]: Session 44 logged out. Waiting for processes to exit.
Oct  1 09:17:34 np0005464214 systemd[1]: session-44.scope: Deactivated successfully.
Oct  1 09:17:34 np0005464214 systemd[1]: session-44.scope: Consumed 24.436s CPU time.
Oct  1 09:17:34 np0005464214 systemd-logind[818]: Removed session 44.
Oct  1 09:17:35 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:17:35 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v389: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:17:37 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v390: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:17:39 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v391: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:17:40 np0005464214 systemd-logind[818]: New session 45 of user zuul.
Oct  1 09:17:40 np0005464214 systemd[1]: Started Session 45 of User zuul.
Oct  1 09:17:40 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:17:40 np0005464214 ceph-mon[74802]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #21. Immutable memtables: 0.
Oct  1 09:17:40 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:17:40.647439) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  1 09:17:40 np0005464214 ceph-mon[74802]: rocksdb: [db/flush_job.cc:856] [default] [JOB 5] Flushing memtable with next log file: 21
Oct  1 09:17:40 np0005464214 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759324660647573, "job": 5, "event": "flush_started", "num_memtables": 1, "num_entries": 1617, "num_deletes": 252, "total_data_size": 2373344, "memory_usage": 2409592, "flush_reason": "Manual Compaction"}
Oct  1 09:17:40 np0005464214 ceph-mon[74802]: rocksdb: [db/flush_job.cc:885] [default] [JOB 5] Level-0 flush table #22: started
Oct  1 09:17:40 np0005464214 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759324660793907, "cf_name": "default", "job": 5, "event": "table_file_creation", "file_number": 22, "file_size": 1379194, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 7443, "largest_seqno": 9059, "table_properties": {"data_size": 1373902, "index_size": 2368, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1861, "raw_key_size": 15135, "raw_average_key_size": 20, "raw_value_size": 1361467, "raw_average_value_size": 1847, "num_data_blocks": 112, "num_entries": 737, "num_filter_entries": 737, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759324502, "oldest_key_time": 1759324502, "file_creation_time": 1759324660, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e1fb0bd7-c6bf-40f2-8bfb-ffd039289f42", "db_session_id": "NJZTWL88H5HSB4Q4NEC9", "orig_file_number": 22, "seqno_to_time_mapping": "N/A"}}
Oct  1 09:17:40 np0005464214 ceph-mon[74802]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 5] Flush lasted 146510 microseconds, and 8124 cpu microseconds.
Oct  1 09:17:40 np0005464214 ceph-mon[74802]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  1 09:17:40 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:17:40.793983) [db/flush_job.cc:967] [default] [JOB 5] Level-0 flush table #22: 1379194 bytes OK
Oct  1 09:17:40 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:17:40.794018) [db/memtable_list.cc:519] [default] Level-0 commit table #22 started
Oct  1 09:17:40 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:17:40.824961) [db/memtable_list.cc:722] [default] Level-0 commit table #22: memtable #1 done
Oct  1 09:17:40 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:17:40.824996) EVENT_LOG_v1 {"time_micros": 1759324660824984, "job": 5, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  1 09:17:40 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:17:40.825032) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  1 09:17:40 np0005464214 ceph-mon[74802]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 5] Try to delete WAL files size 2366165, prev total WAL file size 2366165, number of live WAL files 2.
Oct  1 09:17:40 np0005464214 ceph-mon[74802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000018.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  1 09:17:40 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:17:40.826419) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740030' seq:72057594037927935, type:22 .. '6D67727374617400323533' seq:0, type:0; will stop at (end)
Oct  1 09:17:40 np0005464214 ceph-mon[74802]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 6] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  1 09:17:40 np0005464214 ceph-mon[74802]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 5 Base level 0, inputs: [22(1346KB)], [20(7390KB)]
Oct  1 09:17:40 np0005464214 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759324660826516, "job": 6, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [22], "files_L6": [20], "score": -1, "input_data_size": 8947053, "oldest_snapshot_seqno": -1}
Oct  1 09:17:40 np0005464214 ceph-mon[74802]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 6] Generated table #23: 3405 keys, 7085856 bytes, temperature: kUnknown
Oct  1 09:17:40 np0005464214 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759324660952012, "cf_name": "default", "job": 6, "event": "table_file_creation", "file_number": 23, "file_size": 7085856, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7059435, "index_size": 16775, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 8581, "raw_key_size": 81512, "raw_average_key_size": 23, "raw_value_size": 6994311, "raw_average_value_size": 2054, "num_data_blocks": 741, "num_entries": 3405, "num_filter_entries": 3405, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759324077, "oldest_key_time": 0, "file_creation_time": 1759324660, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e1fb0bd7-c6bf-40f2-8bfb-ffd039289f42", "db_session_id": "NJZTWL88H5HSB4Q4NEC9", "orig_file_number": 23, "seqno_to_time_mapping": "N/A"}}
Oct  1 09:17:40 np0005464214 ceph-mon[74802]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  1 09:17:40 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:17:40.952396) [db/compaction/compaction_job.cc:1663] [default] [JOB 6] Compacted 1@0 + 1@6 files to L6 => 7085856 bytes
Oct  1 09:17:40 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:17:40.953804) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 71.2 rd, 56.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.3, 7.2 +0.0 blob) out(6.8 +0.0 blob), read-write-amplify(11.6) write-amplify(5.1) OK, records in: 3848, records dropped: 443 output_compression: NoCompression
Oct  1 09:17:40 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:17:40.953823) EVENT_LOG_v1 {"time_micros": 1759324660953814, "job": 6, "event": "compaction_finished", "compaction_time_micros": 125718, "compaction_time_cpu_micros": 30546, "output_level": 6, "num_output_files": 1, "total_output_size": 7085856, "num_input_records": 3848, "num_output_records": 3405, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  1 09:17:40 np0005464214 ceph-mon[74802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000022.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  1 09:17:40 np0005464214 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759324660954625, "job": 6, "event": "table_file_deletion", "file_number": 22}
Oct  1 09:17:40 np0005464214 ceph-mon[74802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000020.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  1 09:17:40 np0005464214 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759324660956131, "job": 6, "event": "table_file_deletion", "file_number": 20}
Oct  1 09:17:40 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:17:40.826272) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 09:17:40 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:17:40.956360) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 09:17:40 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:17:40.956371) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 09:17:40 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:17:40.956375) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 09:17:40 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:17:40.956377) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 09:17:40 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:17:40.956380) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 09:17:41 np0005464214 python3.9[139916]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 09:17:41 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v392: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:17:41 np0005464214 python3.9[140070]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/ceph/ceph.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 09:17:42 np0005464214 python3.9[140193]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/ceph/ceph.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759324661.344504-34-28235488929367/.source.conf _original_basename=ceph.conf follow=False checksum=86adabd2b76c58b2ebe51f5b2fa78db6f8424e89 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 09:17:43 np0005464214 python3.9[140345]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/ceph/ceph.client.openstack.keyring follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 09:17:43 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v393: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:17:43 np0005464214 python3.9[140468]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/ceph/ceph.client.openstack.keyring mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1759324662.908848-34-171146042036872/.source.keyring _original_basename=ceph.client.openstack.keyring follow=False checksum=cb7a726d0a2db4bead6fc30d6d9fab3edee0b4fe backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 09:17:44 np0005464214 systemd[1]: session-45.scope: Deactivated successfully.
Oct  1 09:17:44 np0005464214 systemd[1]: session-45.scope: Consumed 2.897s CPU time.
Oct  1 09:17:44 np0005464214 systemd-logind[818]: Session 45 logged out. Waiting for processes to exit.
Oct  1 09:17:44 np0005464214 systemd-logind[818]: Removed session 45.
Oct  1 09:17:45 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:17:45 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v394: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:17:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] Optimize plan auto_2025-10-01_13:17:47
Oct  1 09:17:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  1 09:17:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] do_upmap
Oct  1 09:17:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] pools ['vms', 'cephfs.cephfs.data', 'default.rgw.log', '.mgr', 'default.rgw.meta', 'volumes', 'backups', 'default.rgw.control', '.rgw.root', 'cephfs.cephfs.meta', 'images']
Oct  1 09:17:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] prepared 0/10 changes
Oct  1 09:17:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 09:17:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 09:17:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 09:17:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 09:17:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 09:17:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 09:17:47 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  1 09:17:47 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  1 09:17:47 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  1 09:17:47 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  1 09:17:47 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  1 09:17:47 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  1 09:17:47 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  1 09:17:47 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  1 09:17:47 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  1 09:17:47 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  1 09:17:47 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v395: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:17:49 np0005464214 systemd-logind[818]: New session 46 of user zuul.
Oct  1 09:17:49 np0005464214 systemd[1]: Started Session 46 of User zuul.
Oct  1 09:17:49 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v396: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:17:50 np0005464214 python3.9[140646]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  1 09:17:50 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:17:51 np0005464214 python3.9[140802]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  1 09:17:51 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v397: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:17:52 np0005464214 python3.9[140954]: ansible-ansible.builtin.file Invoked with group=openvswitch owner=openvswitch path=/var/lib/openvswitch/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct  1 09:17:53 np0005464214 python3.9[141104]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  1 09:17:53 np0005464214 python3.9[141256]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Oct  1 09:17:53 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v398: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:17:55 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:17:55 np0005464214 dbus-broker-launch[786]: avc:  op=load_policy lsm=selinux seqno=11 res=1
Oct  1 09:17:55 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v399: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:17:56 np0005464214 python3.9[141413]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct  1 09:17:56 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] _maybe_adjust
Oct  1 09:17:56 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:17:56 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  1 09:17:56 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:17:56 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 09:17:56 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:17:56 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 09:17:56 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:17:56 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 09:17:56 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:17:56 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 09:17:56 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:17:56 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct  1 09:17:56 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:17:56 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 09:17:56 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:17:56 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  1 09:17:56 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:17:56 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  1 09:17:56 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:17:56 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 09:17:56 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:17:56 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  1 09:17:57 np0005464214 python3.9[141576]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct  1 09:17:57 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  1 09:17:57 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  1 09:17:57 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  1 09:17:57 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  1 09:17:57 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  1 09:17:57 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:17:57 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev fce599b7-8cfd-4618-a8c6-1036b12a7292 does not exist
Oct  1 09:17:57 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev 5502e1d0-5473-4cba-855f-88c888b08091 does not exist
Oct  1 09:17:57 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev de12ed83-7c77-4ca2-a97f-fcb3c553a2e0 does not exist
Oct  1 09:17:57 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  1 09:17:57 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  1 09:17:57 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  1 09:17:57 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  1 09:17:57 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  1 09:17:57 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  1 09:17:57 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v400: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:17:57 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  1 09:17:57 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:17:57 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  1 09:17:58 np0005464214 podman[141771]: 2025-10-01 13:17:58.010285669 +0000 UTC m=+0.051936635 container create ba0dc824fdccc67befa79db5ccdd9b0bcd09ee87e268965c1307e2a0a194a1fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_diffie, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Oct  1 09:17:58 np0005464214 systemd[1]: Started libpod-conmon-ba0dc824fdccc67befa79db5ccdd9b0bcd09ee87e268965c1307e2a0a194a1fd.scope.
Oct  1 09:17:58 np0005464214 podman[141771]: 2025-10-01 13:17:57.995344951 +0000 UTC m=+0.036995917 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 09:17:58 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:17:58 np0005464214 podman[141771]: 2025-10-01 13:17:58.118567475 +0000 UTC m=+0.160218481 container init ba0dc824fdccc67befa79db5ccdd9b0bcd09ee87e268965c1307e2a0a194a1fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_diffie, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 09:17:58 np0005464214 podman[141771]: 2025-10-01 13:17:58.12495998 +0000 UTC m=+0.166610976 container start ba0dc824fdccc67befa79db5ccdd9b0bcd09ee87e268965c1307e2a0a194a1fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_diffie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct  1 09:17:58 np0005464214 naughty_diffie[141789]: 167 167
Oct  1 09:17:58 np0005464214 podman[141771]: 2025-10-01 13:17:58.129316595 +0000 UTC m=+0.170967581 container attach ba0dc824fdccc67befa79db5ccdd9b0bcd09ee87e268965c1307e2a0a194a1fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_diffie, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Oct  1 09:17:58 np0005464214 systemd[1]: libpod-ba0dc824fdccc67befa79db5ccdd9b0bcd09ee87e268965c1307e2a0a194a1fd.scope: Deactivated successfully.
Oct  1 09:17:58 np0005464214 podman[141771]: 2025-10-01 13:17:58.129831981 +0000 UTC m=+0.171482967 container died ba0dc824fdccc67befa79db5ccdd9b0bcd09ee87e268965c1307e2a0a194a1fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_diffie, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Oct  1 09:17:58 np0005464214 systemd[1]: var-lib-containers-storage-overlay-d2b6841a925c8eb6751ad4292e9408bc2873951fb7874aebbf1f69f4e916da8f-merged.mount: Deactivated successfully.
Oct  1 09:17:58 np0005464214 podman[141771]: 2025-10-01 13:17:58.261562705 +0000 UTC m=+0.303213701 container remove ba0dc824fdccc67befa79db5ccdd9b0bcd09ee87e268965c1307e2a0a194a1fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_diffie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct  1 09:17:58 np0005464214 systemd[1]: libpod-conmon-ba0dc824fdccc67befa79db5ccdd9b0bcd09ee87e268965c1307e2a0a194a1fd.scope: Deactivated successfully.
Oct  1 09:17:58 np0005464214 podman[141837]: 2025-10-01 13:17:58.493493197 +0000 UTC m=+0.046332294 container create 5f1fab7d89d045feb689ccf7c3311ed14821957c2f804d0ee24a1db7fc3dd2c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_bassi, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 09:17:58 np0005464214 systemd[1]: Started libpod-conmon-5f1fab7d89d045feb689ccf7c3311ed14821957c2f804d0ee24a1db7fc3dd2c8.scope.
Oct  1 09:17:58 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:17:58 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/29333549eb120fda0a0417bc66698ecc1a126f2deda321a80af8af48a6078792/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 09:17:58 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/29333549eb120fda0a0417bc66698ecc1a126f2deda321a80af8af48a6078792/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 09:17:58 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/29333549eb120fda0a0417bc66698ecc1a126f2deda321a80af8af48a6078792/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 09:17:58 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/29333549eb120fda0a0417bc66698ecc1a126f2deda321a80af8af48a6078792/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 09:17:58 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/29333549eb120fda0a0417bc66698ecc1a126f2deda321a80af8af48a6078792/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  1 09:17:58 np0005464214 podman[141837]: 2025-10-01 13:17:58.474803242 +0000 UTC m=+0.027642359 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 09:17:58 np0005464214 podman[141837]: 2025-10-01 13:17:58.580186449 +0000 UTC m=+0.133025546 container init 5f1fab7d89d045feb689ccf7c3311ed14821957c2f804d0ee24a1db7fc3dd2c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_bassi, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 09:17:58 np0005464214 podman[141837]: 2025-10-01 13:17:58.587963317 +0000 UTC m=+0.140802414 container start 5f1fab7d89d045feb689ccf7c3311ed14821957c2f804d0ee24a1db7fc3dd2c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_bassi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default)
Oct  1 09:17:58 np0005464214 podman[141837]: 2025-10-01 13:17:58.592102095 +0000 UTC m=+0.144941182 container attach 5f1fab7d89d045feb689ccf7c3311ed14821957c2f804d0ee24a1db7fc3dd2c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_bassi, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 09:17:59 np0005464214 python3.9[141991]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct  1 09:17:59 np0005464214 quizzical_bassi[141876]: --> passed data devices: 0 physical, 3 LVM
Oct  1 09:17:59 np0005464214 quizzical_bassi[141876]: --> relative data size: 1.0
Oct  1 09:17:59 np0005464214 quizzical_bassi[141876]: --> All data devices are unavailable
Oct  1 09:17:59 np0005464214 systemd[1]: libpod-5f1fab7d89d045feb689ccf7c3311ed14821957c2f804d0ee24a1db7fc3dd2c8.scope: Deactivated successfully.
Oct  1 09:17:59 np0005464214 systemd[1]: libpod-5f1fab7d89d045feb689ccf7c3311ed14821957c2f804d0ee24a1db7fc3dd2c8.scope: Consumed 1.028s CPU time.
Oct  1 09:17:59 np0005464214 podman[141837]: 2025-10-01 13:17:59.675328896 +0000 UTC m=+1.228167993 container died 5f1fab7d89d045feb689ccf7c3311ed14821957c2f804d0ee24a1db7fc3dd2c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_bassi, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 09:17:59 np0005464214 systemd[1]: var-lib-containers-storage-overlay-29333549eb120fda0a0417bc66698ecc1a126f2deda321a80af8af48a6078792-merged.mount: Deactivated successfully.
Oct  1 09:17:59 np0005464214 podman[141837]: 2025-10-01 13:17:59.745660914 +0000 UTC m=+1.298500001 container remove 5f1fab7d89d045feb689ccf7c3311ed14821957c2f804d0ee24a1db7fc3dd2c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_bassi, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 09:17:59 np0005464214 systemd[1]: libpod-conmon-5f1fab7d89d045feb689ccf7c3311ed14821957c2f804d0ee24a1db7fc3dd2c8.scope: Deactivated successfully.
Oct  1 09:17:59 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v401: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:18:00 np0005464214 ceph-mon[74802]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  1 09:18:00 np0005464214 ceph-mon[74802]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 600.0 total, 600.0 interval
Cumulative writes: 2035 writes, 9080 keys, 2035 commit groups, 1.0 writes per commit group, ingest: 0.01 GB, 0.02 MB/s
Cumulative WAL: 2035 writes, 2035 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 2035 writes, 9080 keys, 2035 commit groups, 1.0 writes per commit group, ingest: 11.41 MB, 0.02 MB/s
Interval WAL: 2035 writes, 2035 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     40.2      0.21              0.03         3    0.071       0      0       0.0       0.0
  L6      1/0    6.76 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.6     83.8     74.1      0.19              0.05         2    0.094    7249    733       0.0       0.0
 Sum      1/0    6.76 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   2.6     39.3     56.1      0.40              0.08         5    0.080    7249    733       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   2.6     40.5     57.6      0.39              0.08         4    0.098    7249    733       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Low      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0     83.8     74.1      0.19              0.05         2    0.094    7249    733       0.0       0.0
High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     42.2      0.20              0.03         2    0.101       0      0       0.0       0.0
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      4.6      0.01              0.00         1    0.011       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 600.0 total, 600.0 interval
Flush(GB): cumulative 0.008, interval 0.008
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.02 GB write, 0.04 MB/s write, 0.02 GB read, 0.03 MB/s read, 0.4 seconds
Interval compaction: 0.02 GB write, 0.04 MB/s write, 0.02 GB read, 0.03 MB/s read, 0.4 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x55daa55431f0#2 capacity: 308.00 MB usage: 553.91 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 0 last_secs: 5.5e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(37,462.19 KB,0.146544%) FilterBlock(6,28.55 KB,0.00905124%) IndexBlock(6,63.17 KB,0.0200296%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **
Oct  1 09:18:00 np0005464214 podman[142269]: 2025-10-01 13:18:00.376494715 +0000 UTC m=+0.038562075 container create b947d6cb0eee804bec39a2f10ee395cb4ab3adff46845dc03c32ad0d15edf6ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_einstein, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct  1 09:18:00 np0005464214 systemd[1]: Started libpod-conmon-b947d6cb0eee804bec39a2f10ee395cb4ab3adff46845dc03c32ad0d15edf6ce.scope.
Oct  1 09:18:00 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:18:00 np0005464214 podman[142269]: 2025-10-01 13:18:00.454010885 +0000 UTC m=+0.116078265 container init b947d6cb0eee804bec39a2f10ee395cb4ab3adff46845dc03c32ad0d15edf6ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_einstein, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 09:18:00 np0005464214 podman[142269]: 2025-10-01 13:18:00.360201115 +0000 UTC m=+0.022268505 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 09:18:00 np0005464214 podman[142269]: 2025-10-01 13:18:00.466021624 +0000 UTC m=+0.128089004 container start b947d6cb0eee804bec39a2f10ee395cb4ab3adff46845dc03c32ad0d15edf6ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_einstein, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct  1 09:18:00 np0005464214 friendly_einstein[142329]: 167 167
Oct  1 09:18:00 np0005464214 systemd[1]: libpod-b947d6cb0eee804bec39a2f10ee395cb4ab3adff46845dc03c32ad0d15edf6ce.scope: Deactivated successfully.
Oct  1 09:18:00 np0005464214 podman[142269]: 2025-10-01 13:18:00.472279096 +0000 UTC m=+0.134346466 container attach b947d6cb0eee804bec39a2f10ee395cb4ab3adff46845dc03c32ad0d15edf6ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_einstein, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct  1 09:18:00 np0005464214 podman[142269]: 2025-10-01 13:18:00.472539304 +0000 UTC m=+0.134606674 container died b947d6cb0eee804bec39a2f10ee395cb4ab3adff46845dc03c32ad0d15edf6ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_einstein, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct  1 09:18:00 np0005464214 systemd[1]: var-lib-containers-storage-overlay-7ba8ee00d296a275af24909431f83e3be2061fbb28d069326a251761686e6938-merged.mount: Deactivated successfully.
Oct  1 09:18:00 np0005464214 podman[142269]: 2025-10-01 13:18:00.519499816 +0000 UTC m=+0.181567186 container remove b947d6cb0eee804bec39a2f10ee395cb4ab3adff46845dc03c32ad0d15edf6ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_einstein, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct  1 09:18:00 np0005464214 systemd[1]: libpod-conmon-b947d6cb0eee804bec39a2f10ee395cb4ab3adff46845dc03c32ad0d15edf6ce.scope: Deactivated successfully.
Oct  1 09:18:00 np0005464214 python3[142334]: ansible-osp.edpm.edpm_nftables_snippet Invoked with content=
- rule_name: 118 neutron vxlan networks
  rule:
    proto: udp
    dport: 4789
- rule_name: 119 neutron geneve networks
  rule:
    proto: udp
    dport: 6081
    state: ["UNTRACKED"]
- rule_name: 120 neutron geneve networks no conntrack
  rule:
    proto: udp
    dport: 6081
    table: raw
    chain: OUTPUT
    jump: NOTRACK
    action: append
    state: []
- rule_name: 121 neutron geneve networks no conntrack
  rule:
    proto: udp
    dport: 6081
    table: raw
    chain: PREROUTING
    jump: NOTRACK
    action: append
    state: []
dest=/var/lib/edpm-config/firewall/ovn.yaml state=present
Oct  1 09:18:00 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:18:00 np0005464214 podman[142356]: 2025-10-01 13:18:00.70297383 +0000 UTC m=+0.041861317 container create c2aff9b2afbdeb66bcb2db1064dbd6a7b3d7ff8b2aeaebea5e6d819106a8df61 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_burnell, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct  1 09:18:00 np0005464214 systemd[1]: Started libpod-conmon-c2aff9b2afbdeb66bcb2db1064dbd6a7b3d7ff8b2aeaebea5e6d819106a8df61.scope.
Oct  1 09:18:00 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:18:00 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/feb338979ac36f6a3a66b3d41d7cdb5f0ae0087880bc399b511e8820b21f08b6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 09:18:00 np0005464214 podman[142356]: 2025-10-01 13:18:00.683562603 +0000 UTC m=+0.022450110 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 09:18:00 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/feb338979ac36f6a3a66b3d41d7cdb5f0ae0087880bc399b511e8820b21f08b6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 09:18:00 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/feb338979ac36f6a3a66b3d41d7cdb5f0ae0087880bc399b511e8820b21f08b6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 09:18:00 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/feb338979ac36f6a3a66b3d41d7cdb5f0ae0087880bc399b511e8820b21f08b6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 09:18:00 np0005464214 podman[142356]: 2025-10-01 13:18:00.807261602 +0000 UTC m=+0.146149099 container init c2aff9b2afbdeb66bcb2db1064dbd6a7b3d7ff8b2aeaebea5e6d819106a8df61 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_burnell, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 09:18:00 np0005464214 podman[142356]: 2025-10-01 13:18:00.815568097 +0000 UTC m=+0.154455564 container start c2aff9b2afbdeb66bcb2db1064dbd6a7b3d7ff8b2aeaebea5e6d819106a8df61 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_burnell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct  1 09:18:00 np0005464214 podman[142356]: 2025-10-01 13:18:00.820090996 +0000 UTC m=+0.158978483 container attach c2aff9b2afbdeb66bcb2db1064dbd6a7b3d7ff8b2aeaebea5e6d819106a8df61 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_burnell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct  1 09:18:01 np0005464214 python3.9[142529]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 09:18:01 np0005464214 boring_burnell[142397]: {
Oct  1 09:18:01 np0005464214 boring_burnell[142397]:    "0": [
Oct  1 09:18:01 np0005464214 boring_burnell[142397]:        {
Oct  1 09:18:01 np0005464214 boring_burnell[142397]:            "devices": [
Oct  1 09:18:01 np0005464214 boring_burnell[142397]:                "/dev/loop3"
Oct  1 09:18:01 np0005464214 boring_burnell[142397]:            ],
Oct  1 09:18:01 np0005464214 boring_burnell[142397]:            "lv_name": "ceph_lv0",
Oct  1 09:18:01 np0005464214 boring_burnell[142397]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  1 09:18:01 np0005464214 boring_burnell[142397]:            "lv_size": "21470642176",
Oct  1 09:18:01 np0005464214 boring_burnell[142397]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 09:18:01 np0005464214 boring_burnell[142397]:            "lv_uuid": "wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS",
Oct  1 09:18:01 np0005464214 boring_burnell[142397]:            "name": "ceph_lv0",
Oct  1 09:18:01 np0005464214 boring_burnell[142397]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  1 09:18:01 np0005464214 boring_burnell[142397]:            "tags": {
Oct  1 09:18:01 np0005464214 boring_burnell[142397]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  1 09:18:01 np0005464214 boring_burnell[142397]:                "ceph.block_uuid": "wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS",
Oct  1 09:18:01 np0005464214 boring_burnell[142397]:                "ceph.cephx_lockbox_secret": "",
Oct  1 09:18:01 np0005464214 boring_burnell[142397]:                "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 09:18:01 np0005464214 boring_burnell[142397]:                "ceph.cluster_name": "ceph",
Oct  1 09:18:01 np0005464214 boring_burnell[142397]:                "ceph.crush_device_class": "",
Oct  1 09:18:01 np0005464214 boring_burnell[142397]:                "ceph.encrypted": "0",
Oct  1 09:18:01 np0005464214 boring_burnell[142397]:                "ceph.osd_fsid": "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982",
Oct  1 09:18:01 np0005464214 boring_burnell[142397]:                "ceph.osd_id": "0",
Oct  1 09:18:01 np0005464214 boring_burnell[142397]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 09:18:01 np0005464214 boring_burnell[142397]:                "ceph.type": "block",
Oct  1 09:18:01 np0005464214 boring_burnell[142397]:                "ceph.vdo": "0"
Oct  1 09:18:01 np0005464214 boring_burnell[142397]:            },
Oct  1 09:18:01 np0005464214 boring_burnell[142397]:            "type": "block",
Oct  1 09:18:01 np0005464214 boring_burnell[142397]:            "vg_name": "ceph_vg0"
Oct  1 09:18:01 np0005464214 boring_burnell[142397]:        }
Oct  1 09:18:01 np0005464214 boring_burnell[142397]:    ],
Oct  1 09:18:01 np0005464214 boring_burnell[142397]:    "1": [
Oct  1 09:18:01 np0005464214 boring_burnell[142397]:        {
Oct  1 09:18:01 np0005464214 boring_burnell[142397]:            "devices": [
Oct  1 09:18:01 np0005464214 boring_burnell[142397]:                "/dev/loop4"
Oct  1 09:18:01 np0005464214 boring_burnell[142397]:            ],
Oct  1 09:18:01 np0005464214 boring_burnell[142397]:            "lv_name": "ceph_lv1",
Oct  1 09:18:01 np0005464214 boring_burnell[142397]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  1 09:18:01 np0005464214 boring_burnell[142397]:            "lv_size": "21470642176",
Oct  1 09:18:01 np0005464214 boring_burnell[142397]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f5852bc7-e830-489a-b8a9-42dfbbe71426,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 09:18:01 np0005464214 boring_burnell[142397]:            "lv_uuid": "BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY",
Oct  1 09:18:01 np0005464214 boring_burnell[142397]:            "name": "ceph_lv1",
Oct  1 09:18:01 np0005464214 boring_burnell[142397]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  1 09:18:01 np0005464214 boring_burnell[142397]:            "tags": {
Oct  1 09:18:01 np0005464214 boring_burnell[142397]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  1 09:18:01 np0005464214 boring_burnell[142397]:                "ceph.block_uuid": "BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY",
Oct  1 09:18:01 np0005464214 boring_burnell[142397]:                "ceph.cephx_lockbox_secret": "",
Oct  1 09:18:01 np0005464214 boring_burnell[142397]:                "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 09:18:01 np0005464214 boring_burnell[142397]:                "ceph.cluster_name": "ceph",
Oct  1 09:18:01 np0005464214 boring_burnell[142397]:                "ceph.crush_device_class": "",
Oct  1 09:18:01 np0005464214 boring_burnell[142397]:                "ceph.encrypted": "0",
Oct  1 09:18:01 np0005464214 boring_burnell[142397]:                "ceph.osd_fsid": "f5852bc7-e830-489a-b8a9-42dfbbe71426",
Oct  1 09:18:01 np0005464214 boring_burnell[142397]:                "ceph.osd_id": "1",
Oct  1 09:18:01 np0005464214 boring_burnell[142397]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 09:18:01 np0005464214 boring_burnell[142397]:                "ceph.type": "block",
Oct  1 09:18:01 np0005464214 boring_burnell[142397]:                "ceph.vdo": "0"
Oct  1 09:18:01 np0005464214 boring_burnell[142397]:            },
Oct  1 09:18:01 np0005464214 boring_burnell[142397]:            "type": "block",
Oct  1 09:18:01 np0005464214 boring_burnell[142397]:            "vg_name": "ceph_vg1"
Oct  1 09:18:01 np0005464214 boring_burnell[142397]:        }
Oct  1 09:18:01 np0005464214 boring_burnell[142397]:    ],
Oct  1 09:18:01 np0005464214 boring_burnell[142397]:    "2": [
Oct  1 09:18:01 np0005464214 boring_burnell[142397]:        {
Oct  1 09:18:01 np0005464214 boring_burnell[142397]:            "devices": [
Oct  1 09:18:01 np0005464214 boring_burnell[142397]:                "/dev/loop5"
Oct  1 09:18:01 np0005464214 boring_burnell[142397]:            ],
Oct  1 09:18:01 np0005464214 boring_burnell[142397]:            "lv_name": "ceph_lv2",
Oct  1 09:18:01 np0005464214 boring_burnell[142397]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  1 09:18:01 np0005464214 boring_burnell[142397]:            "lv_size": "21470642176",
Oct  1 09:18:01 np0005464214 boring_burnell[142397]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c4c937e2-a8a8-47c3-af37-fdedb6fff1f9,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 09:18:01 np0005464214 boring_burnell[142397]:            "lv_uuid": "mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK",
Oct  1 09:18:01 np0005464214 boring_burnell[142397]:            "name": "ceph_lv2",
Oct  1 09:18:01 np0005464214 boring_burnell[142397]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  1 09:18:01 np0005464214 boring_burnell[142397]:            "tags": {
Oct  1 09:18:01 np0005464214 boring_burnell[142397]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  1 09:18:01 np0005464214 boring_burnell[142397]:                "ceph.block_uuid": "mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK",
Oct  1 09:18:01 np0005464214 boring_burnell[142397]:                "ceph.cephx_lockbox_secret": "",
Oct  1 09:18:01 np0005464214 boring_burnell[142397]:                "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 09:18:01 np0005464214 boring_burnell[142397]:                "ceph.cluster_name": "ceph",
Oct  1 09:18:01 np0005464214 boring_burnell[142397]:                "ceph.crush_device_class": "",
Oct  1 09:18:01 np0005464214 boring_burnell[142397]:                "ceph.encrypted": "0",
Oct  1 09:18:01 np0005464214 boring_burnell[142397]:                "ceph.osd_fsid": "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9",
Oct  1 09:18:01 np0005464214 boring_burnell[142397]:                "ceph.osd_id": "2",
Oct  1 09:18:01 np0005464214 boring_burnell[142397]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 09:18:01 np0005464214 boring_burnell[142397]:                "ceph.type": "block",
Oct  1 09:18:01 np0005464214 boring_burnell[142397]:                "ceph.vdo": "0"
Oct  1 09:18:01 np0005464214 boring_burnell[142397]:            },
Oct  1 09:18:01 np0005464214 boring_burnell[142397]:            "type": "block",
Oct  1 09:18:01 np0005464214 boring_burnell[142397]:            "vg_name": "ceph_vg2"
Oct  1 09:18:01 np0005464214 boring_burnell[142397]:        }
Oct  1 09:18:01 np0005464214 boring_burnell[142397]:    ]
Oct  1 09:18:01 np0005464214 boring_burnell[142397]: }
Oct  1 09:18:01 np0005464214 systemd[1]: libpod-c2aff9b2afbdeb66bcb2db1064dbd6a7b3d7ff8b2aeaebea5e6d819106a8df61.scope: Deactivated successfully.
Oct  1 09:18:01 np0005464214 podman[142356]: 2025-10-01 13:18:01.61486697 +0000 UTC m=+0.953754477 container died c2aff9b2afbdeb66bcb2db1064dbd6a7b3d7ff8b2aeaebea5e6d819106a8df61 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_burnell, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Oct  1 09:18:01 np0005464214 systemd[1]: var-lib-containers-storage-overlay-feb338979ac36f6a3a66b3d41d7cdb5f0ae0087880bc399b511e8820b21f08b6-merged.mount: Deactivated successfully.
Oct  1 09:18:01 np0005464214 podman[142356]: 2025-10-01 13:18:01.700514289 +0000 UTC m=+1.039401776 container remove c2aff9b2afbdeb66bcb2db1064dbd6a7b3d7ff8b2aeaebea5e6d819106a8df61 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_burnell, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 09:18:01 np0005464214 systemd[1]: libpod-conmon-c2aff9b2afbdeb66bcb2db1064dbd6a7b3d7ff8b2aeaebea5e6d819106a8df61.scope: Deactivated successfully.
Oct  1 09:18:01 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v402: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:18:02 np0005464214 python3.9[142808]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 09:18:02 np0005464214 podman[142837]: 2025-10-01 13:18:02.440916454 +0000 UTC m=+0.050252694 container create 847554598ff962c1ddeab02ee664d0184dca03594a2e2e7bfd4b15bc94dbff85 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_nightingale, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 09:18:02 np0005464214 systemd[1]: Started libpod-conmon-847554598ff962c1ddeab02ee664d0184dca03594a2e2e7bfd4b15bc94dbff85.scope.
Oct  1 09:18:02 np0005464214 podman[142837]: 2025-10-01 13:18:02.417518806 +0000 UTC m=+0.026855106 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 09:18:02 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:18:02 np0005464214 podman[142837]: 2025-10-01 13:18:02.530190485 +0000 UTC m=+0.139526745 container init 847554598ff962c1ddeab02ee664d0184dca03594a2e2e7bfd4b15bc94dbff85 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_nightingale, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct  1 09:18:02 np0005464214 podman[142837]: 2025-10-01 13:18:02.536629543 +0000 UTC m=+0.145965783 container start 847554598ff962c1ddeab02ee664d0184dca03594a2e2e7bfd4b15bc94dbff85 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_nightingale, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 09:18:02 np0005464214 podman[142837]: 2025-10-01 13:18:02.541844833 +0000 UTC m=+0.151181113 container attach 847554598ff962c1ddeab02ee664d0184dca03594a2e2e7bfd4b15bc94dbff85 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_nightingale, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 09:18:02 np0005464214 jovial_nightingale[142855]: 167 167
Oct  1 09:18:02 np0005464214 systemd[1]: libpod-847554598ff962c1ddeab02ee664d0184dca03594a2e2e7bfd4b15bc94dbff85.scope: Deactivated successfully.
Oct  1 09:18:02 np0005464214 podman[142837]: 2025-10-01 13:18:02.545748473 +0000 UTC m=+0.155084713 container died 847554598ff962c1ddeab02ee664d0184dca03594a2e2e7bfd4b15bc94dbff85 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_nightingale, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct  1 09:18:02 np0005464214 systemd[1]: var-lib-containers-storage-overlay-baadd02aa82d2c5e2f03e5f094fcea1f561743305bad6941fb6d7c089ba504e2-merged.mount: Deactivated successfully.
Oct  1 09:18:02 np0005464214 podman[142837]: 2025-10-01 13:18:02.588231097 +0000 UTC m=+0.197567327 container remove 847554598ff962c1ddeab02ee664d0184dca03594a2e2e7bfd4b15bc94dbff85 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_nightingale, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Oct  1 09:18:02 np0005464214 systemd[1]: libpod-conmon-847554598ff962c1ddeab02ee664d0184dca03594a2e2e7bfd4b15bc94dbff85.scope: Deactivated successfully.
Oct  1 09:18:02 np0005464214 podman[142939]: 2025-10-01 13:18:02.792409417 +0000 UTC m=+0.049928544 container create ed30406e0030e27129b5992061a12be09f095663b8a068d1f8dd8989032f3631 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_elgamal, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Oct  1 09:18:02 np0005464214 systemd[1]: Started libpod-conmon-ed30406e0030e27129b5992061a12be09f095663b8a068d1f8dd8989032f3631.scope.
Oct  1 09:18:02 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:18:02 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a8a461b6538ec98ef715f940c62ce70858ff400a25696feb1e186ede535dd43/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 09:18:02 np0005464214 podman[142939]: 2025-10-01 13:18:02.774658672 +0000 UTC m=+0.032177829 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 09:18:02 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a8a461b6538ec98ef715f940c62ce70858ff400a25696feb1e186ede535dd43/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 09:18:02 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a8a461b6538ec98ef715f940c62ce70858ff400a25696feb1e186ede535dd43/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 09:18:02 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a8a461b6538ec98ef715f940c62ce70858ff400a25696feb1e186ede535dd43/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 09:18:02 np0005464214 podman[142939]: 2025-10-01 13:18:02.88010511 +0000 UTC m=+0.137624277 container init ed30406e0030e27129b5992061a12be09f095663b8a068d1f8dd8989032f3631 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_elgamal, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct  1 09:18:02 np0005464214 podman[142939]: 2025-10-01 13:18:02.886429914 +0000 UTC m=+0.143949051 container start ed30406e0030e27129b5992061a12be09f095663b8a068d1f8dd8989032f3631 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_elgamal, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Oct  1 09:18:02 np0005464214 podman[142939]: 2025-10-01 13:18:02.890095716 +0000 UTC m=+0.147614873 container attach ed30406e0030e27129b5992061a12be09f095663b8a068d1f8dd8989032f3631 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_elgamal, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2)
Oct  1 09:18:02 np0005464214 python3.9[142959]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 09:18:03 np0005464214 python3.9[143133]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 09:18:03 np0005464214 pensive_elgamal[142970]: {
Oct  1 09:18:03 np0005464214 pensive_elgamal[142970]:    "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982": {
Oct  1 09:18:03 np0005464214 pensive_elgamal[142970]:        "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 09:18:03 np0005464214 pensive_elgamal[142970]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  1 09:18:03 np0005464214 pensive_elgamal[142970]:        "osd_id": 0,
Oct  1 09:18:03 np0005464214 pensive_elgamal[142970]:        "osd_uuid": "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982",
Oct  1 09:18:03 np0005464214 pensive_elgamal[142970]:        "type": "bluestore"
Oct  1 09:18:03 np0005464214 pensive_elgamal[142970]:    },
Oct  1 09:18:03 np0005464214 pensive_elgamal[142970]:    "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9": {
Oct  1 09:18:03 np0005464214 pensive_elgamal[142970]:        "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 09:18:03 np0005464214 pensive_elgamal[142970]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  1 09:18:03 np0005464214 pensive_elgamal[142970]:        "osd_id": 2,
Oct  1 09:18:03 np0005464214 pensive_elgamal[142970]:        "osd_uuid": "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9",
Oct  1 09:18:03 np0005464214 pensive_elgamal[142970]:        "type": "bluestore"
Oct  1 09:18:03 np0005464214 pensive_elgamal[142970]:    },
Oct  1 09:18:03 np0005464214 pensive_elgamal[142970]:    "f5852bc7-e830-489a-b8a9-42dfbbe71426": {
Oct  1 09:18:03 np0005464214 pensive_elgamal[142970]:        "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 09:18:03 np0005464214 pensive_elgamal[142970]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  1 09:18:03 np0005464214 pensive_elgamal[142970]:        "osd_id": 1,
Oct  1 09:18:03 np0005464214 pensive_elgamal[142970]:        "osd_uuid": "f5852bc7-e830-489a-b8a9-42dfbbe71426",
Oct  1 09:18:03 np0005464214 pensive_elgamal[142970]:        "type": "bluestore"
Oct  1 09:18:03 np0005464214 pensive_elgamal[142970]:    }
Oct  1 09:18:03 np0005464214 pensive_elgamal[142970]: }
Oct  1 09:18:03 np0005464214 systemd[1]: libpod-ed30406e0030e27129b5992061a12be09f095663b8a068d1f8dd8989032f3631.scope: Deactivated successfully.
Oct  1 09:18:03 np0005464214 podman[142939]: 2025-10-01 13:18:03.908500067 +0000 UTC m=+1.166019204 container died ed30406e0030e27129b5992061a12be09f095663b8a068d1f8dd8989032f3631 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_elgamal, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct  1 09:18:03 np0005464214 systemd[1]: libpod-ed30406e0030e27129b5992061a12be09f095663b8a068d1f8dd8989032f3631.scope: Consumed 1.030s CPU time.
Oct  1 09:18:03 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v403: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:18:03 np0005464214 systemd[1]: var-lib-containers-storage-overlay-6a8a461b6538ec98ef715f940c62ce70858ff400a25696feb1e186ede535dd43-merged.mount: Deactivated successfully.
Oct  1 09:18:03 np0005464214 podman[142939]: 2025-10-01 13:18:03.983650175 +0000 UTC m=+1.241169302 container remove ed30406e0030e27129b5992061a12be09f095663b8a068d1f8dd8989032f3631 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_elgamal, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 09:18:03 np0005464214 systemd[1]: libpod-conmon-ed30406e0030e27129b5992061a12be09f095663b8a068d1f8dd8989032f3631.scope: Deactivated successfully.
Oct  1 09:18:04 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  1 09:18:04 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:18:04 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  1 09:18:04 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:18:04 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev 4fa6f51c-faf3-4bc5-ae67-9a0e3d0ab705 does not exist
Oct  1 09:18:04 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev 3bca51a4-ab30-4b88-9dc2-0cb238e8ae60 does not exist
Oct  1 09:18:04 np0005464214 python3.9[143266]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.xkgi35zp recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 09:18:04 np0005464214 python3.9[143447]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 09:18:05 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:18:05 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:18:05 np0005464214 python3.9[143525]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 09:18:05 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:18:05 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v404: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:18:06 np0005464214 python3.9[143677]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  1 09:18:07 np0005464214 python3[143830]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Oct  1 09:18:07 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v405: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:18:08 np0005464214 python3.9[143982]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 09:18:09 np0005464214 python3.9[144107]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759324687.704618-157-104862870495958/.source.nft follow=False _original_basename=jump-chain.j2 checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 09:18:09 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v406: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:18:10 np0005464214 python3.9[144259]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 09:18:10 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:18:10 np0005464214 python3.9[144384]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759324689.547799-172-172170631842449/.source.nft follow=False _original_basename=jump-chain.j2 checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 09:18:11 np0005464214 python3.9[144536]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 09:18:11 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v407: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:18:12 np0005464214 python3.9[144661]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759324691.0129192-187-95134133507345/.source.nft follow=False _original_basename=flush-chain.j2 checksum=4d3ffec49c8eb1a9b80d2f1e8cd64070063a87b4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 09:18:13 np0005464214 python3.9[144813]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 09:18:13 np0005464214 python3.9[144938]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759324692.5405037-202-249147039233566/.source.nft follow=False _original_basename=chains.j2 checksum=298ada419730ec15df17ded0cc50c97a4014a591 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 09:18:13 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v408: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:18:14 np0005464214 python3.9[145090]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 09:18:15 np0005464214 python3.9[145215]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759324693.9516916-217-209569732680396/.source.nft follow=False _original_basename=ruleset.j2 checksum=bdba38546f86123f1927359d89789bd211aba99d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 09:18:15 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:18:15 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v409: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:18:16 np0005464214 python3.9[145367]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 09:18:17 np0005464214 python3.9[145519]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  1 09:18:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 09:18:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 09:18:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 09:18:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 09:18:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 09:18:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 09:18:17 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v410: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:18:18 np0005464214 python3.9[145674]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 09:18:19 np0005464214 python3.9[145826]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  1 09:18:19 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v411: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:18:19 np0005464214 python3.9[145979]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  1 09:18:20 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:18:20 np0005464214 python3.9[146133]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  1 09:18:21 np0005464214 python3.9[146288]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 09:18:21 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v412: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:18:22 np0005464214 python3.9[146438]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'machine'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  1 09:18:23 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v413: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:18:23 np0005464214 python3.9[146593]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl set open . external_ids:hostname=compute-0.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings="datacentre:3e:0a:74:f6:ca:ec" external_ids:ovn-encap-ip=172.19.0.100 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch #012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  1 09:18:23 np0005464214 ovs-vsctl[146594]: ovs|00001|vsctl|INFO|Called as ovs-vsctl set open . external_ids:hostname=compute-0.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings=datacentre:3e:0a:74:f6:ca:ec external_ids:ovn-encap-ip=172.19.0.100 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch
Oct  1 09:18:24 np0005464214 python3.9[146746]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail#012ovs-vsctl show | grep -q "Manager"#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  1 09:18:25 np0005464214 python3.9[146901]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl --timeout=5 --id=@manager -- create Manager target=\"ptcp:********@manager#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  1 09:18:25 np0005464214 ovs-vsctl[146902]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --timeout=5 --id=@manager -- create Manager "target=\"ptcp:6640:127.0.0.1\"" -- add Open_vSwitch . manager_options @manager
Oct  1 09:18:25 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:18:25 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v414: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:18:26 np0005464214 python3.9[147052]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  1 09:18:27 np0005464214 python3.9[147206]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  1 09:18:27 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v415: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:18:27 np0005464214 python3.9[147360]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 09:18:28 np0005464214 python3.9[147438]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  1 09:18:29 np0005464214 python3.9[147590]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 09:18:29 np0005464214 python3.9[147668]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  1 09:18:29 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v416: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:18:30 np0005464214 python3.9[147820]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 09:18:30 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:18:31 np0005464214 python3.9[147973]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 09:18:31 np0005464214 python3.9[148052]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 09:18:31 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v417: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:18:32 np0005464214 python3.9[148204]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 09:18:33 np0005464214 python3.9[148282]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 09:18:33 np0005464214 python3.9[148434]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  1 09:18:33 np0005464214 systemd[1]: Reloading.
Oct  1 09:18:33 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v418: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:18:33 np0005464214 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  1 09:18:33 np0005464214 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  1 09:18:34 np0005464214 python3.9[148623]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 09:18:35 np0005464214 python3.9[148701]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 09:18:35 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:18:35 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v419: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:18:36 np0005464214 python3.9[148853]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 09:18:36 np0005464214 python3.9[148931]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 09:18:37 np0005464214 python3.9[149083]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  1 09:18:37 np0005464214 systemd[1]: Reloading.
Oct  1 09:18:37 np0005464214 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  1 09:18:37 np0005464214 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  1 09:18:37 np0005464214 systemd[1]: Starting Create netns directory...
Oct  1 09:18:37 np0005464214 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Oct  1 09:18:37 np0005464214 systemd[1]: netns-placeholder.service: Deactivated successfully.
Oct  1 09:18:37 np0005464214 systemd[1]: Finished Create netns directory.
Oct  1 09:18:37 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v420: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:18:38 np0005464214 python3.9[149277]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  1 09:18:39 np0005464214 python3.9[149429]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_controller/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 09:18:39 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v421: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:18:40 np0005464214 python3.9[149552]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_controller/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759324718.9331353-468-219059036755941/.source _original_basename=healthcheck follow=False checksum=4098dd010265fabdf5c26b97d169fc4e575ff457 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct  1 09:18:40 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:18:40 np0005464214 python3.9[149704]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  1 09:18:41 np0005464214 python3.9[149858]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_controller.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 09:18:41 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v422: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:18:42 np0005464214 python3.9[149981]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_controller.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1759324721.2204993-493-121768827405071/.source.json _original_basename=.gxqdhbbp follow=False checksum=2328fc98619beeb08ee32b01f15bb43094c10b61 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 09:18:42 np0005464214 python3.9[150133]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_controller state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 09:18:43 np0005464214 ceph-mon[74802]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #24. Immutable memtables: 0.
Oct  1 09:18:43 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:18:43.117522) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  1 09:18:43 np0005464214 ceph-mon[74802]: rocksdb: [db/flush_job.cc:856] [default] [JOB 7] Flushing memtable with next log file: 24
Oct  1 09:18:43 np0005464214 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759324723117600, "job": 7, "event": "flush_started", "num_memtables": 1, "num_entries": 734, "num_deletes": 251, "total_data_size": 927993, "memory_usage": 941960, "flush_reason": "Manual Compaction"}
Oct  1 09:18:43 np0005464214 ceph-mon[74802]: rocksdb: [db/flush_job.cc:885] [default] [JOB 7] Level-0 flush table #25: started
Oct  1 09:18:43 np0005464214 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759324723126701, "cf_name": "default", "job": 7, "event": "table_file_creation", "file_number": 25, "file_size": 919656, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 9060, "largest_seqno": 9793, "table_properties": {"data_size": 915846, "index_size": 1590, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1157, "raw_key_size": 8143, "raw_average_key_size": 18, "raw_value_size": 908285, "raw_average_value_size": 2068, "num_data_blocks": 74, "num_entries": 439, "num_filter_entries": 439, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759324661, "oldest_key_time": 1759324661, "file_creation_time": 1759324723, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e1fb0bd7-c6bf-40f2-8bfb-ffd039289f42", "db_session_id": "NJZTWL88H5HSB4Q4NEC9", "orig_file_number": 25, "seqno_to_time_mapping": "N/A"}}
Oct  1 09:18:43 np0005464214 ceph-mon[74802]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 7] Flush lasted 9297 microseconds, and 5464 cpu microseconds.
Oct  1 09:18:43 np0005464214 ceph-mon[74802]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  1 09:18:43 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:18:43.126796) [db/flush_job.cc:967] [default] [JOB 7] Level-0 flush table #25: 919656 bytes OK
Oct  1 09:18:43 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:18:43.126851) [db/memtable_list.cc:519] [default] Level-0 commit table #25 started
Oct  1 09:18:43 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:18:43.128199) [db/memtable_list.cc:722] [default] Level-0 commit table #25: memtable #1 done
Oct  1 09:18:43 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:18:43.128223) EVENT_LOG_v1 {"time_micros": 1759324723128216, "job": 7, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  1 09:18:43 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:18:43.128246) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  1 09:18:43 np0005464214 ceph-mon[74802]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 7] Try to delete WAL files size 924236, prev total WAL file size 924236, number of live WAL files 2.
Oct  1 09:18:43 np0005464214 ceph-mon[74802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000021.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  1 09:18:43 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:18:43.129002) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F7300323531' seq:72057594037927935, type:22 .. '7061786F7300353033' seq:0, type:0; will stop at (end)
Oct  1 09:18:43 np0005464214 ceph-mon[74802]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 8] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  1 09:18:43 np0005464214 ceph-mon[74802]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 7 Base level 0, inputs: [25(898KB)], [23(6919KB)]
Oct  1 09:18:43 np0005464214 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759324723129046, "job": 8, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [25], "files_L6": [23], "score": -1, "input_data_size": 8005512, "oldest_snapshot_seqno": -1}
Oct  1 09:18:43 np0005464214 ceph-mon[74802]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 8] Generated table #26: 3330 keys, 6306410 bytes, temperature: kUnknown
Oct  1 09:18:43 np0005464214 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759324723177421, "cf_name": "default", "job": 8, "event": "table_file_creation", "file_number": 26, "file_size": 6306410, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 6281720, "index_size": 15237, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 8389, "raw_key_size": 80730, "raw_average_key_size": 24, "raw_value_size": 6219110, "raw_average_value_size": 1867, "num_data_blocks": 663, "num_entries": 3330, "num_filter_entries": 3330, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759324077, "oldest_key_time": 0, "file_creation_time": 1759324723, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e1fb0bd7-c6bf-40f2-8bfb-ffd039289f42", "db_session_id": "NJZTWL88H5HSB4Q4NEC9", "orig_file_number": 26, "seqno_to_time_mapping": "N/A"}}
Oct  1 09:18:43 np0005464214 ceph-mon[74802]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  1 09:18:43 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:18:43.177784) [db/compaction/compaction_job.cc:1663] [default] [JOB 8] Compacted 1@0 + 1@6 files to L6 => 6306410 bytes
Oct  1 09:18:43 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:18:43.179430) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 165.2 rd, 130.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.9, 6.8 +0.0 blob) out(6.0 +0.0 blob), read-write-amplify(15.6) write-amplify(6.9) OK, records in: 3844, records dropped: 514 output_compression: NoCompression
Oct  1 09:18:43 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:18:43.179464) EVENT_LOG_v1 {"time_micros": 1759324723179448, "job": 8, "event": "compaction_finished", "compaction_time_micros": 48470, "compaction_time_cpu_micros": 28158, "output_level": 6, "num_output_files": 1, "total_output_size": 6306410, "num_input_records": 3844, "num_output_records": 3330, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  1 09:18:43 np0005464214 ceph-mon[74802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000025.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  1 09:18:43 np0005464214 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759324723179995, "job": 8, "event": "table_file_deletion", "file_number": 25}
Oct  1 09:18:43 np0005464214 ceph-mon[74802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000023.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  1 09:18:43 np0005464214 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759324723183004, "job": 8, "event": "table_file_deletion", "file_number": 23}
Oct  1 09:18:43 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:18:43.128901) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 09:18:43 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:18:43.183154) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 09:18:43 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:18:43.183169) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 09:18:43 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:18:43.183174) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 09:18:43 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:18:43.183177) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 09:18:43 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:18:43.183181) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 09:18:43 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v423: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:18:45 np0005464214 python3.9[150562]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_controller config_pattern=*.json debug=False
Oct  1 09:18:45 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:18:45 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v424: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:18:46 np0005464214 python3.9[150714]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Oct  1 09:18:47 np0005464214 python3.9[150866]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Oct  1 09:18:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] Optimize plan auto_2025-10-01_13:18:47
Oct  1 09:18:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  1 09:18:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] do_upmap
Oct  1 09:18:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] pools ['.mgr', 'backups', 'default.rgw.meta', 'cephfs.cephfs.meta', 'images', '.rgw.root', 'cephfs.cephfs.data', 'volumes', 'default.rgw.log', 'default.rgw.control', 'vms']
Oct  1 09:18:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] prepared 0/10 changes
Oct  1 09:18:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 09:18:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 09:18:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 09:18:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 09:18:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 09:18:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 09:18:47 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  1 09:18:47 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  1 09:18:47 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  1 09:18:47 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  1 09:18:47 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  1 09:18:47 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  1 09:18:47 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  1 09:18:47 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  1 09:18:47 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  1 09:18:47 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  1 09:18:47 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v425: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:18:49 np0005464214 python3[151044]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_controller config_id=ovn_controller config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Oct  1 09:18:49 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v426: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:18:50 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:18:51 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v427: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:18:53 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v428: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:18:54 np0005464214 podman[151059]: 2025-10-01 13:18:54.932719318 +0000 UTC m=+5.679547091 image pull 7ffac6b06b247caf26cf673b775a5f070f2fa1a6008cf0b0964af7e905ba86a5 quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd
Oct  1 09:18:55 np0005464214 podman[151182]: 2025-10-01 13:18:55.166351173 +0000 UTC m=+0.077020012 container create 583cde33fa111e3f81ff9a7adf51b7bee43cd123d54d60383b2d474b3b5066ad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd, name=ovn_controller, org.label-schema.build-date=20250923, tcib_managed=true, org.label-schema.schema-version=1.0, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=36bccb96575468ec919301205d8daa2c, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct  1 09:18:55 np0005464214 podman[151182]: 2025-10-01 13:18:55.128391874 +0000 UTC m=+0.039060764 image pull 7ffac6b06b247caf26cf673b775a5f070f2fa1a6008cf0b0964af7e905ba86a5 quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd
Oct  1 09:18:55 np0005464214 python3[151044]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_controller --conmon-pidfile /run/ovn_controller.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --healthcheck-command /openstack/healthcheck --label config_id=ovn_controller --label container_name=ovn_controller --label managed_by=edpm_ansible --label config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --user root --volume /lib/modules:/lib/modules:ro --volume /run:/run --volume /var/lib/openvswitch/ovn:/run/ovn:shared,z --volume /var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd
Oct  1 09:18:55 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:18:55 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v429: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:18:56 np0005464214 python3.9[151372]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  1 09:18:56 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] _maybe_adjust
Oct  1 09:18:56 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:18:56 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  1 09:18:56 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:18:56 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 09:18:56 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:18:56 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 09:18:56 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:18:56 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 09:18:56 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:18:56 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 09:18:56 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:18:56 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct  1 09:18:56 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:18:56 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 09:18:56 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:18:56 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  1 09:18:56 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:18:56 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  1 09:18:56 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:18:56 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 09:18:56 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:18:56 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  1 09:18:57 np0005464214 python3.9[151526]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_controller.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 09:18:57 np0005464214 python3.9[151602]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_controller_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  1 09:18:57 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v430: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:18:58 np0005464214 python3.9[151753]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759324737.5084484-581-130711057280542/source dest=/etc/systemd/system/edpm_ovn_controller.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 09:18:58 np0005464214 python3.9[151829]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct  1 09:18:58 np0005464214 systemd[1]: Reloading.
Oct  1 09:18:58 np0005464214 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  1 09:18:58 np0005464214 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  1 09:18:59 np0005464214 python3.9[151940]: ansible-systemd Invoked with state=restarted name=edpm_ovn_controller.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  1 09:18:59 np0005464214 systemd[1]: Reloading.
Oct  1 09:18:59 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v431: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:18:59 np0005464214 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  1 09:18:59 np0005464214 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  1 09:19:00 np0005464214 systemd[1]: Starting ovn_controller container...
Oct  1 09:19:00 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:19:00 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/987770384d734843490fc415fb2ee473e75f002af4dc1b07e5543afd997383f6/merged/run/ovn supports timestamps until 2038 (0x7fffffff)
Oct  1 09:19:00 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:19:01 np0005464214 systemd[1]: Started /usr/bin/podman healthcheck run 583cde33fa111e3f81ff9a7adf51b7bee43cd123d54d60383b2d474b3b5066ad.
Oct  1 09:19:01 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v432: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:19:02 np0005464214 podman[151980]: 2025-10-01 13:19:02.492292584 +0000 UTC m=+2.260379636 container init 583cde33fa111e3f81ff9a7adf51b7bee43cd123d54d60383b2d474b3b5066ad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd, name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20250923, org.label-schema.license=GPLv2)
Oct  1 09:19:02 np0005464214 ovn_controller[151996]: + sudo -E kolla_set_configs
Oct  1 09:19:02 np0005464214 podman[151980]: 2025-10-01 13:19:02.529628252 +0000 UTC m=+2.297715214 container start 583cde33fa111e3f81ff9a7adf51b7bee43cd123d54d60383b2d474b3b5066ad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd, name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20250923, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Oct  1 09:19:02 np0005464214 systemd[1]: Created slice User Slice of UID 0.
Oct  1 09:19:02 np0005464214 systemd[1]: Starting User Runtime Directory /run/user/0...
Oct  1 09:19:02 np0005464214 systemd[1]: Finished User Runtime Directory /run/user/0.
Oct  1 09:19:02 np0005464214 systemd[1]: Starting User Manager for UID 0...
Oct  1 09:19:02 np0005464214 edpm-start-podman-container[151980]: ovn_controller
Oct  1 09:19:02 np0005464214 systemd[152019]: Queued start job for default target Main User Target.
Oct  1 09:19:02 np0005464214 systemd[152019]: Created slice User Application Slice.
Oct  1 09:19:02 np0005464214 systemd[152019]: Mark boot as successful after the user session has run 2 minutes was skipped because of an unmet condition check (ConditionUser=!@system).
Oct  1 09:19:02 np0005464214 systemd[152019]: Started Daily Cleanup of User's Temporary Directories.
Oct  1 09:19:02 np0005464214 systemd[152019]: Reached target Paths.
Oct  1 09:19:02 np0005464214 systemd[152019]: Reached target Timers.
Oct  1 09:19:02 np0005464214 systemd[152019]: Starting D-Bus User Message Bus Socket...
Oct  1 09:19:02 np0005464214 systemd[152019]: Starting Create User's Volatile Files and Directories...
Oct  1 09:19:02 np0005464214 systemd[152019]: Finished Create User's Volatile Files and Directories.
Oct  1 09:19:02 np0005464214 systemd[152019]: Listening on D-Bus User Message Bus Socket.
Oct  1 09:19:02 np0005464214 systemd[152019]: Reached target Sockets.
Oct  1 09:19:02 np0005464214 systemd[152019]: Reached target Basic System.
Oct  1 09:19:02 np0005464214 systemd[152019]: Reached target Main User Target.
Oct  1 09:19:02 np0005464214 systemd[152019]: Startup finished in 194ms.
Oct  1 09:19:02 np0005464214 systemd[1]: Started User Manager for UID 0.
Oct  1 09:19:02 np0005464214 systemd[1]: Started Session c1 of User root.
Oct  1 09:19:02 np0005464214 podman[152006]: 2025-10-01 13:19:02.881676245 +0000 UTC m=+0.334369050 container health_status 583cde33fa111e3f81ff9a7adf51b7bee43cd123d54d60383b2d474b3b5066ad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd, name=ovn_controller, health_status=starting, health_failing_streak=1, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20250923, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=36bccb96575468ec919301205d8daa2c)
Oct  1 09:19:02 np0005464214 systemd[1]: 583cde33fa111e3f81ff9a7adf51b7bee43cd123d54d60383b2d474b3b5066ad-6a7a58905eeb289d.service: Main process exited, code=exited, status=1/FAILURE
Oct  1 09:19:02 np0005464214 systemd[1]: 583cde33fa111e3f81ff9a7adf51b7bee43cd123d54d60383b2d474b3b5066ad-6a7a58905eeb289d.service: Failed with result 'exit-code'.
Oct  1 09:19:02 np0005464214 edpm-start-podman-container[151979]: Creating additional drop-in dependency for "ovn_controller" (583cde33fa111e3f81ff9a7adf51b7bee43cd123d54d60383b2d474b3b5066ad)
Oct  1 09:19:02 np0005464214 systemd[1]: Reloading.
Oct  1 09:19:02 np0005464214 ovn_controller[151996]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Oct  1 09:19:02 np0005464214 ovn_controller[151996]: INFO:__main__:Validating config file
Oct  1 09:19:02 np0005464214 ovn_controller[151996]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Oct  1 09:19:02 np0005464214 ovn_controller[151996]: INFO:__main__:Writing out command to execute
Oct  1 09:19:02 np0005464214 ovn_controller[151996]: ++ cat /run_command
Oct  1 09:19:02 np0005464214 ovn_controller[151996]: + CMD='/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Oct  1 09:19:02 np0005464214 ovn_controller[151996]: + ARGS=
Oct  1 09:19:02 np0005464214 ovn_controller[151996]: + sudo kolla_copy_cacerts
Oct  1 09:19:03 np0005464214 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  1 09:19:03 np0005464214 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  1 09:19:03 np0005464214 systemd[1]: session-c1.scope: Deactivated successfully.
Oct  1 09:19:03 np0005464214 systemd[1]: Started ovn_controller container.
Oct  1 09:19:03 np0005464214 systemd[1]: Started Session c2 of User root.
Oct  1 09:19:03 np0005464214 systemd[1]: session-c2.scope: Deactivated successfully.
Oct  1 09:19:03 np0005464214 ovn_controller[151996]: + [[ ! -n '' ]]
Oct  1 09:19:03 np0005464214 ovn_controller[151996]: + . kolla_extend_start
Oct  1 09:19:03 np0005464214 ovn_controller[151996]: Running command: '/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Oct  1 09:19:03 np0005464214 ovn_controller[151996]: + echo 'Running command: '\''/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '\'''
Oct  1 09:19:03 np0005464214 ovn_controller[151996]: + umask 0022
Oct  1 09:19:03 np0005464214 ovn_controller[151996]: + exec /usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt
Oct  1 09:19:03 np0005464214 ovn_controller[151996]: 2025-10-01T13:19:03Z|00001|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Oct  1 09:19:03 np0005464214 ovn_controller[151996]: 2025-10-01T13:19:03Z|00002|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Oct  1 09:19:03 np0005464214 ovn_controller[151996]: 2025-10-01T13:19:03Z|00003|main|INFO|OVN internal version is : [24.03.7-20.33.0-76.8]
Oct  1 09:19:03 np0005464214 ovn_controller[151996]: 2025-10-01T13:19:03Z|00004|main|INFO|OVS IDL reconnected, force recompute.
Oct  1 09:19:03 np0005464214 ovn_controller[151996]: 2025-10-01T13:19:03Z|00005|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Oct  1 09:19:03 np0005464214 ovn_controller[151996]: 2025-10-01T13:19:03Z|00006|main|INFO|OVNSB IDL reconnected, force recompute.
Oct  1 09:19:03 np0005464214 NetworkManager[45411]: <info>  [1759324743.4524] manager: (br-int): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/16)
Oct  1 09:19:03 np0005464214 NetworkManager[45411]: <info>  [1759324743.4531] device (br-int)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct  1 09:19:03 np0005464214 NetworkManager[45411]: <info>  [1759324743.4542] manager: (br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/17)
Oct  1 09:19:03 np0005464214 NetworkManager[45411]: <info>  [1759324743.4547] manager: (br-int): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/18)
Oct  1 09:19:03 np0005464214 NetworkManager[45411]: <info>  [1759324743.4550] device (br-int)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Oct  1 09:19:03 np0005464214 kernel: br-int: entered promiscuous mode
Oct  1 09:19:03 np0005464214 ovn_controller[151996]: 2025-10-01T13:19:03Z|00007|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connected
Oct  1 09:19:03 np0005464214 ovn_controller[151996]: 2025-10-01T13:19:03Z|00008|features|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Oct  1 09:19:03 np0005464214 ovn_controller[151996]: 2025-10-01T13:19:03Z|00009|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Oct  1 09:19:03 np0005464214 ovn_controller[151996]: 2025-10-01T13:19:03Z|00010|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Oct  1 09:19:03 np0005464214 ovn_controller[151996]: 2025-10-01T13:19:03Z|00011|ofctrl|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Oct  1 09:19:03 np0005464214 ovn_controller[151996]: 2025-10-01T13:19:03Z|00012|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Oct  1 09:19:03 np0005464214 ovn_controller[151996]: 2025-10-01T13:19:03Z|00013|features|INFO|OVS Feature: ct_zero_snat, state: supported
Oct  1 09:19:03 np0005464214 ovn_controller[151996]: 2025-10-01T13:19:03Z|00014|features|INFO|OVS Feature: ct_flush, state: supported
Oct  1 09:19:03 np0005464214 ovn_controller[151996]: 2025-10-01T13:19:03Z|00015|features|INFO|OVS Feature: dp_hash_l4_sym_support, state: supported
Oct  1 09:19:03 np0005464214 ovn_controller[151996]: 2025-10-01T13:19:03Z|00016|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Oct  1 09:19:03 np0005464214 ovn_controller[151996]: 2025-10-01T13:19:03Z|00017|main|INFO|OVS feature set changed, force recompute.
Oct  1 09:19:03 np0005464214 ovn_controller[151996]: 2025-10-01T13:19:03Z|00018|features|INFO|OVS DB schema supports 4 flow table prefixes, our IDL supports: 4
Oct  1 09:19:03 np0005464214 ovn_controller[151996]: 2025-10-01T13:19:03Z|00019|main|INFO|Setting flow table prefixes: ip_src, ip_dst, ipv6_src, ipv6_dst.
Oct  1 09:19:03 np0005464214 ovn_controller[151996]: 2025-10-01T13:19:03Z|00020|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Oct  1 09:19:03 np0005464214 ovn_controller[151996]: 2025-10-01T13:19:03Z|00021|ofctrl|INFO|ofctrl-wait-before-clear is now 8000 ms (was 0 ms)
Oct  1 09:19:03 np0005464214 ovn_controller[151996]: 2025-10-01T13:19:03Z|00022|main|INFO|OVS OpenFlow connection reconnected,force recompute.
Oct  1 09:19:03 np0005464214 ovn_controller[151996]: 2025-10-01T13:19:03Z|00023|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Oct  1 09:19:03 np0005464214 ovn_controller[151996]: 2025-10-01T13:19:03Z|00024|main|INFO|OVS feature set changed, force recompute.
Oct  1 09:19:03 np0005464214 ovn_controller[151996]: 2025-10-01T13:19:03Z|00001|statctrl(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Oct  1 09:19:03 np0005464214 ovn_controller[151996]: 2025-10-01T13:19:03Z|00001|pinctrl(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Oct  1 09:19:03 np0005464214 NetworkManager[45411]: <info>  [1759324743.4832] manager: (ovn-35ad8f-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/19)
Oct  1 09:19:03 np0005464214 ovn_controller[151996]: 2025-10-01T13:19:03Z|00002|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Oct  1 09:19:03 np0005464214 ovn_controller[151996]: 2025-10-01T13:19:03Z|00002|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Oct  1 09:19:03 np0005464214 systemd-udevd[152135]: Network interface NamePolicy= disabled on kernel command line.
Oct  1 09:19:03 np0005464214 kernel: genev_sys_6081: entered promiscuous mode
Oct  1 09:19:03 np0005464214 systemd-udevd[152137]: Network interface NamePolicy= disabled on kernel command line.
Oct  1 09:19:03 np0005464214 NetworkManager[45411]: <info>  [1759324743.5096] device (genev_sys_6081): carrier: link connected
Oct  1 09:19:03 np0005464214 NetworkManager[45411]: <info>  [1759324743.5099] manager: (genev_sys_6081): new Generic device (/org/freedesktop/NetworkManager/Devices/20)
Oct  1 09:19:03 np0005464214 ovn_controller[151996]: 2025-10-01T13:19:03Z|00003|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Oct  1 09:19:03 np0005464214 ovn_controller[151996]: 2025-10-01T13:19:03Z|00003|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Oct  1 09:19:03 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v433: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:19:04 np0005464214 python3.9[152268]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove open . other_config hw-offload#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  1 09:19:04 np0005464214 ovs-vsctl[152269]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove open . other_config hw-offload
Oct  1 09:19:04 np0005464214 python3.9[152535]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl get Open_vSwitch . external_ids:ovn-cms-options | sed 's/\"//g'#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  1 09:19:04 np0005464214 ovs-vsctl[152542]: ovs|00001|db_ctl_base|ERR|no key "ovn-cms-options" in Open_vSwitch record "." column external_ids
Oct  1 09:19:05 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  1 09:19:05 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  1 09:19:05 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  1 09:19:05 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  1 09:19:05 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  1 09:19:05 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:19:05 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev e3abbc6c-6ef7-4a4f-bb38-92e73a6a4964 does not exist
Oct  1 09:19:05 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev 1e59db72-3cae-45dc-939c-48ce1f44f12d does not exist
Oct  1 09:19:05 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev 2e2a1ae8-85de-4eab-b1ce-96f87ecc05db does not exist
Oct  1 09:19:05 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  1 09:19:05 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  1 09:19:05 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  1 09:19:05 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  1 09:19:05 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  1 09:19:05 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  1 09:19:05 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:19:05 np0005464214 podman[152847]: 2025-10-01 13:19:05.909951253 +0000 UTC m=+0.063285483 container create af0f4b193f2b88c325b31ffcceec798ba036fe24ef6976e4b881de0a9fa0ab7f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_meninsky, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Oct  1 09:19:05 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v434: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:19:05 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  1 09:19:05 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:19:05 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  1 09:19:05 np0005464214 systemd[1]: Started libpod-conmon-af0f4b193f2b88c325b31ffcceec798ba036fe24ef6976e4b881de0a9fa0ab7f.scope.
Oct  1 09:19:05 np0005464214 podman[152847]: 2025-10-01 13:19:05.881590715 +0000 UTC m=+0.034925025 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 09:19:05 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:19:06 np0005464214 podman[152847]: 2025-10-01 13:19:06.021486985 +0000 UTC m=+0.174821295 container init af0f4b193f2b88c325b31ffcceec798ba036fe24ef6976e4b881de0a9fa0ab7f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_meninsky, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct  1 09:19:06 np0005464214 podman[152847]: 2025-10-01 13:19:06.033239653 +0000 UTC m=+0.186573903 container start af0f4b193f2b88c325b31ffcceec798ba036fe24ef6976e4b881de0a9fa0ab7f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_meninsky, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 09:19:06 np0005464214 podman[152847]: 2025-10-01 13:19:06.039606531 +0000 UTC m=+0.192940841 container attach af0f4b193f2b88c325b31ffcceec798ba036fe24ef6976e4b881de0a9fa0ab7f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_meninsky, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Oct  1 09:19:06 np0005464214 wizardly_meninsky[152865]: 167 167
Oct  1 09:19:06 np0005464214 systemd[1]: libpod-af0f4b193f2b88c325b31ffcceec798ba036fe24ef6976e4b881de0a9fa0ab7f.scope: Deactivated successfully.
Oct  1 09:19:06 np0005464214 podman[152847]: 2025-10-01 13:19:06.044152064 +0000 UTC m=+0.197486324 container died af0f4b193f2b88c325b31ffcceec798ba036fe24ef6976e4b881de0a9fa0ab7f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_meninsky, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Oct  1 09:19:06 np0005464214 python3.9[152849]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  1 09:19:06 np0005464214 ovs-vsctl[152871]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options
Oct  1 09:19:06 np0005464214 systemd[1]: var-lib-containers-storage-overlay-a57294c35eb46c72b698fe12b21fb4d1f884da6ef82bd9293ba1d1b9cf0d2636-merged.mount: Deactivated successfully.
Oct  1 09:19:06 np0005464214 podman[152847]: 2025-10-01 13:19:06.108882411 +0000 UTC m=+0.262216641 container remove af0f4b193f2b88c325b31ffcceec798ba036fe24ef6976e4b881de0a9fa0ab7f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_meninsky, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 09:19:06 np0005464214 systemd[1]: libpod-conmon-af0f4b193f2b88c325b31ffcceec798ba036fe24ef6976e4b881de0a9fa0ab7f.scope: Deactivated successfully.
Oct  1 09:19:06 np0005464214 podman[152914]: 2025-10-01 13:19:06.299794859 +0000 UTC m=+0.041855642 container create e0340a64f71998f776b2c5eed94ec193e70765e74f606c4431f04f9e0a28476d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_haslett, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 09:19:06 np0005464214 systemd[1]: Started libpod-conmon-e0340a64f71998f776b2c5eed94ec193e70765e74f606c4431f04f9e0a28476d.scope.
Oct  1 09:19:06 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:19:06 np0005464214 podman[152914]: 2025-10-01 13:19:06.280835965 +0000 UTC m=+0.022896748 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 09:19:06 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/637626288da7f111e82e5e3074c542f174cdb5c7aee2eeecbd19912c4b202953/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 09:19:06 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/637626288da7f111e82e5e3074c542f174cdb5c7aee2eeecbd19912c4b202953/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 09:19:06 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/637626288da7f111e82e5e3074c542f174cdb5c7aee2eeecbd19912c4b202953/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 09:19:06 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/637626288da7f111e82e5e3074c542f174cdb5c7aee2eeecbd19912c4b202953/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 09:19:06 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/637626288da7f111e82e5e3074c542f174cdb5c7aee2eeecbd19912c4b202953/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  1 09:19:06 np0005464214 podman[152914]: 2025-10-01 13:19:06.395518655 +0000 UTC m=+0.137579428 container init e0340a64f71998f776b2c5eed94ec193e70765e74f606c4431f04f9e0a28476d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_haslett, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Oct  1 09:19:06 np0005464214 podman[152914]: 2025-10-01 13:19:06.411211487 +0000 UTC m=+0.153272260 container start e0340a64f71998f776b2c5eed94ec193e70765e74f606c4431f04f9e0a28476d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_haslett, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Oct  1 09:19:06 np0005464214 podman[152914]: 2025-10-01 13:19:06.415277684 +0000 UTC m=+0.157338447 container attach e0340a64f71998f776b2c5eed94ec193e70765e74f606c4431f04f9e0a28476d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_haslett, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 09:19:06 np0005464214 systemd[1]: session-46.scope: Deactivated successfully.
Oct  1 09:19:06 np0005464214 systemd[1]: session-46.scope: Consumed 1min 2.620s CPU time.
Oct  1 09:19:06 np0005464214 systemd-logind[818]: Session 46 logged out. Waiting for processes to exit.
Oct  1 09:19:06 np0005464214 systemd-logind[818]: Removed session 46.
Oct  1 09:19:07 np0005464214 sharp_haslett[152930]: --> passed data devices: 0 physical, 3 LVM
Oct  1 09:19:07 np0005464214 sharp_haslett[152930]: --> relative data size: 1.0
Oct  1 09:19:07 np0005464214 sharp_haslett[152930]: --> All data devices are unavailable
Oct  1 09:19:07 np0005464214 systemd[1]: libpod-e0340a64f71998f776b2c5eed94ec193e70765e74f606c4431f04f9e0a28476d.scope: Deactivated successfully.
Oct  1 09:19:07 np0005464214 systemd[1]: libpod-e0340a64f71998f776b2c5eed94ec193e70765e74f606c4431f04f9e0a28476d.scope: Consumed 1.099s CPU time.
Oct  1 09:19:07 np0005464214 podman[152959]: 2025-10-01 13:19:07.600321239 +0000 UTC m=+0.032539430 container died e0340a64f71998f776b2c5eed94ec193e70765e74f606c4431f04f9e0a28476d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_haslett, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Oct  1 09:19:07 np0005464214 systemd[1]: var-lib-containers-storage-overlay-637626288da7f111e82e5e3074c542f174cdb5c7aee2eeecbd19912c4b202953-merged.mount: Deactivated successfully.
Oct  1 09:19:07 np0005464214 podman[152959]: 2025-10-01 13:19:07.660635317 +0000 UTC m=+0.092853498 container remove e0340a64f71998f776b2c5eed94ec193e70765e74f606c4431f04f9e0a28476d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_haslett, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 09:19:07 np0005464214 systemd[1]: libpod-conmon-e0340a64f71998f776b2c5eed94ec193e70765e74f606c4431f04f9e0a28476d.scope: Deactivated successfully.
Oct  1 09:19:07 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v435: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:19:08 np0005464214 podman[153116]: 2025-10-01 13:19:08.454717471 +0000 UTC m=+0.046748156 container create 006a9c6bb43e5840155b6118c32b0a16a01fba3ddcef1a71cb5bf919e6ae3e6b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_swirles, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 09:19:08 np0005464214 systemd[1]: Started libpod-conmon-006a9c6bb43e5840155b6118c32b0a16a01fba3ddcef1a71cb5bf919e6ae3e6b.scope.
Oct  1 09:19:08 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:19:08 np0005464214 podman[153116]: 2025-10-01 13:19:08.434361453 +0000 UTC m=+0.026392178 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 09:19:08 np0005464214 podman[153116]: 2025-10-01 13:19:08.529933906 +0000 UTC m=+0.121964591 container init 006a9c6bb43e5840155b6118c32b0a16a01fba3ddcef1a71cb5bf919e6ae3e6b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_swirles, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct  1 09:19:08 np0005464214 podman[153116]: 2025-10-01 13:19:08.537896025 +0000 UTC m=+0.129926740 container start 006a9c6bb43e5840155b6118c32b0a16a01fba3ddcef1a71cb5bf919e6ae3e6b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_swirles, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 09:19:08 np0005464214 podman[153116]: 2025-10-01 13:19:08.541971003 +0000 UTC m=+0.134001708 container attach 006a9c6bb43e5840155b6118c32b0a16a01fba3ddcef1a71cb5bf919e6ae3e6b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_swirles, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507)
Oct  1 09:19:08 np0005464214 wizardly_swirles[153132]: 167 167
Oct  1 09:19:08 np0005464214 systemd[1]: libpod-006a9c6bb43e5840155b6118c32b0a16a01fba3ddcef1a71cb5bf919e6ae3e6b.scope: Deactivated successfully.
Oct  1 09:19:08 np0005464214 podman[153116]: 2025-10-01 13:19:08.543925324 +0000 UTC m=+0.135956029 container died 006a9c6bb43e5840155b6118c32b0a16a01fba3ddcef1a71cb5bf919e6ae3e6b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_swirles, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Oct  1 09:19:08 np0005464214 systemd[1]: var-lib-containers-storage-overlay-f75ca87cf9737f1571aef45ca8eeb4043542f4a63a78b58a3c691e2f2e6b2c9c-merged.mount: Deactivated successfully.
Oct  1 09:19:08 np0005464214 podman[153116]: 2025-10-01 13:19:08.591642258 +0000 UTC m=+0.183672993 container remove 006a9c6bb43e5840155b6118c32b0a16a01fba3ddcef1a71cb5bf919e6ae3e6b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_swirles, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 09:19:08 np0005464214 systemd[1]: libpod-conmon-006a9c6bb43e5840155b6118c32b0a16a01fba3ddcef1a71cb5bf919e6ae3e6b.scope: Deactivated successfully.
Oct  1 09:19:08 np0005464214 podman[153156]: 2025-10-01 13:19:08.790112242 +0000 UTC m=+0.051337569 container create 6afcac6c04e56529ecf4f325121d818b27dfb8b334612cdc591d69c96c710e17 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_ramanujan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 09:19:08 np0005464214 systemd[1]: Started libpod-conmon-6afcac6c04e56529ecf4f325121d818b27dfb8b334612cdc591d69c96c710e17.scope.
Oct  1 09:19:08 np0005464214 podman[153156]: 2025-10-01 13:19:08.760389442 +0000 UTC m=+0.021614849 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 09:19:08 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:19:08 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c90ecb47e861a9a01b745fdbd1f57fc543977794728813b05afacac02f526718/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 09:19:08 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c90ecb47e861a9a01b745fdbd1f57fc543977794728813b05afacac02f526718/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 09:19:08 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c90ecb47e861a9a01b745fdbd1f57fc543977794728813b05afacac02f526718/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 09:19:08 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c90ecb47e861a9a01b745fdbd1f57fc543977794728813b05afacac02f526718/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 09:19:08 np0005464214 podman[153156]: 2025-10-01 13:19:08.882285458 +0000 UTC m=+0.143510875 container init 6afcac6c04e56529ecf4f325121d818b27dfb8b334612cdc591d69c96c710e17 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_ramanujan, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 09:19:08 np0005464214 podman[153156]: 2025-10-01 13:19:08.894677686 +0000 UTC m=+0.155903043 container start 6afcac6c04e56529ecf4f325121d818b27dfb8b334612cdc591d69c96c710e17 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_ramanujan, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 09:19:08 np0005464214 podman[153156]: 2025-10-01 13:19:08.899081024 +0000 UTC m=+0.160306441 container attach 6afcac6c04e56529ecf4f325121d818b27dfb8b334612cdc591d69c96c710e17 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_ramanujan, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 09:19:09 np0005464214 inspiring_ramanujan[153173]: {
Oct  1 09:19:09 np0005464214 inspiring_ramanujan[153173]:    "0": [
Oct  1 09:19:09 np0005464214 inspiring_ramanujan[153173]:        {
Oct  1 09:19:09 np0005464214 inspiring_ramanujan[153173]:            "devices": [
Oct  1 09:19:09 np0005464214 inspiring_ramanujan[153173]:                "/dev/loop3"
Oct  1 09:19:09 np0005464214 inspiring_ramanujan[153173]:            ],
Oct  1 09:19:09 np0005464214 inspiring_ramanujan[153173]:            "lv_name": "ceph_lv0",
Oct  1 09:19:09 np0005464214 inspiring_ramanujan[153173]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  1 09:19:09 np0005464214 inspiring_ramanujan[153173]:            "lv_size": "21470642176",
Oct  1 09:19:09 np0005464214 inspiring_ramanujan[153173]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 09:19:09 np0005464214 inspiring_ramanujan[153173]:            "lv_uuid": "wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS",
Oct  1 09:19:09 np0005464214 inspiring_ramanujan[153173]:            "name": "ceph_lv0",
Oct  1 09:19:09 np0005464214 inspiring_ramanujan[153173]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  1 09:19:09 np0005464214 inspiring_ramanujan[153173]:            "tags": {
Oct  1 09:19:09 np0005464214 inspiring_ramanujan[153173]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  1 09:19:09 np0005464214 inspiring_ramanujan[153173]:                "ceph.block_uuid": "wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS",
Oct  1 09:19:09 np0005464214 inspiring_ramanujan[153173]:                "ceph.cephx_lockbox_secret": "",
Oct  1 09:19:09 np0005464214 inspiring_ramanujan[153173]:                "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 09:19:09 np0005464214 inspiring_ramanujan[153173]:                "ceph.cluster_name": "ceph",
Oct  1 09:19:09 np0005464214 inspiring_ramanujan[153173]:                "ceph.crush_device_class": "",
Oct  1 09:19:09 np0005464214 inspiring_ramanujan[153173]:                "ceph.encrypted": "0",
Oct  1 09:19:09 np0005464214 inspiring_ramanujan[153173]:                "ceph.osd_fsid": "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982",
Oct  1 09:19:09 np0005464214 inspiring_ramanujan[153173]:                "ceph.osd_id": "0",
Oct  1 09:19:09 np0005464214 inspiring_ramanujan[153173]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 09:19:09 np0005464214 inspiring_ramanujan[153173]:                "ceph.type": "block",
Oct  1 09:19:09 np0005464214 inspiring_ramanujan[153173]:                "ceph.vdo": "0"
Oct  1 09:19:09 np0005464214 inspiring_ramanujan[153173]:            },
Oct  1 09:19:09 np0005464214 inspiring_ramanujan[153173]:            "type": "block",
Oct  1 09:19:09 np0005464214 inspiring_ramanujan[153173]:            "vg_name": "ceph_vg0"
Oct  1 09:19:09 np0005464214 inspiring_ramanujan[153173]:        }
Oct  1 09:19:09 np0005464214 inspiring_ramanujan[153173]:    ],
Oct  1 09:19:09 np0005464214 inspiring_ramanujan[153173]:    "1": [
Oct  1 09:19:09 np0005464214 inspiring_ramanujan[153173]:        {
Oct  1 09:19:09 np0005464214 inspiring_ramanujan[153173]:            "devices": [
Oct  1 09:19:09 np0005464214 inspiring_ramanujan[153173]:                "/dev/loop4"
Oct  1 09:19:09 np0005464214 inspiring_ramanujan[153173]:            ],
Oct  1 09:19:09 np0005464214 inspiring_ramanujan[153173]:            "lv_name": "ceph_lv1",
Oct  1 09:19:09 np0005464214 inspiring_ramanujan[153173]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  1 09:19:09 np0005464214 inspiring_ramanujan[153173]:            "lv_size": "21470642176",
Oct  1 09:19:09 np0005464214 inspiring_ramanujan[153173]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f5852bc7-e830-489a-b8a9-42dfbbe71426,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 09:19:09 np0005464214 inspiring_ramanujan[153173]:            "lv_uuid": "BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY",
Oct  1 09:19:09 np0005464214 inspiring_ramanujan[153173]:            "name": "ceph_lv1",
Oct  1 09:19:09 np0005464214 inspiring_ramanujan[153173]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  1 09:19:09 np0005464214 inspiring_ramanujan[153173]:            "tags": {
Oct  1 09:19:09 np0005464214 inspiring_ramanujan[153173]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  1 09:19:09 np0005464214 inspiring_ramanujan[153173]:                "ceph.block_uuid": "BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY",
Oct  1 09:19:09 np0005464214 inspiring_ramanujan[153173]:                "ceph.cephx_lockbox_secret": "",
Oct  1 09:19:09 np0005464214 inspiring_ramanujan[153173]:                "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 09:19:09 np0005464214 inspiring_ramanujan[153173]:                "ceph.cluster_name": "ceph",
Oct  1 09:19:09 np0005464214 inspiring_ramanujan[153173]:                "ceph.crush_device_class": "",
Oct  1 09:19:09 np0005464214 inspiring_ramanujan[153173]:                "ceph.encrypted": "0",
Oct  1 09:19:09 np0005464214 inspiring_ramanujan[153173]:                "ceph.osd_fsid": "f5852bc7-e830-489a-b8a9-42dfbbe71426",
Oct  1 09:19:09 np0005464214 inspiring_ramanujan[153173]:                "ceph.osd_id": "1",
Oct  1 09:19:09 np0005464214 inspiring_ramanujan[153173]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 09:19:09 np0005464214 inspiring_ramanujan[153173]:                "ceph.type": "block",
Oct  1 09:19:09 np0005464214 inspiring_ramanujan[153173]:                "ceph.vdo": "0"
Oct  1 09:19:09 np0005464214 inspiring_ramanujan[153173]:            },
Oct  1 09:19:09 np0005464214 inspiring_ramanujan[153173]:            "type": "block",
Oct  1 09:19:09 np0005464214 inspiring_ramanujan[153173]:            "vg_name": "ceph_vg1"
Oct  1 09:19:09 np0005464214 inspiring_ramanujan[153173]:        }
Oct  1 09:19:09 np0005464214 inspiring_ramanujan[153173]:    ],
Oct  1 09:19:09 np0005464214 inspiring_ramanujan[153173]:    "2": [
Oct  1 09:19:09 np0005464214 inspiring_ramanujan[153173]:        {
Oct  1 09:19:09 np0005464214 inspiring_ramanujan[153173]:            "devices": [
Oct  1 09:19:09 np0005464214 inspiring_ramanujan[153173]:                "/dev/loop5"
Oct  1 09:19:09 np0005464214 inspiring_ramanujan[153173]:            ],
Oct  1 09:19:09 np0005464214 inspiring_ramanujan[153173]:            "lv_name": "ceph_lv2",
Oct  1 09:19:09 np0005464214 inspiring_ramanujan[153173]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  1 09:19:09 np0005464214 inspiring_ramanujan[153173]:            "lv_size": "21470642176",
Oct  1 09:19:09 np0005464214 inspiring_ramanujan[153173]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c4c937e2-a8a8-47c3-af37-fdedb6fff1f9,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 09:19:09 np0005464214 inspiring_ramanujan[153173]:            "lv_uuid": "mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK",
Oct  1 09:19:09 np0005464214 inspiring_ramanujan[153173]:            "name": "ceph_lv2",
Oct  1 09:19:09 np0005464214 inspiring_ramanujan[153173]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  1 09:19:09 np0005464214 inspiring_ramanujan[153173]:            "tags": {
Oct  1 09:19:09 np0005464214 inspiring_ramanujan[153173]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  1 09:19:09 np0005464214 inspiring_ramanujan[153173]:                "ceph.block_uuid": "mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK",
Oct  1 09:19:09 np0005464214 inspiring_ramanujan[153173]:                "ceph.cephx_lockbox_secret": "",
Oct  1 09:19:09 np0005464214 inspiring_ramanujan[153173]:                "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 09:19:09 np0005464214 inspiring_ramanujan[153173]:                "ceph.cluster_name": "ceph",
Oct  1 09:19:09 np0005464214 inspiring_ramanujan[153173]:                "ceph.crush_device_class": "",
Oct  1 09:19:09 np0005464214 inspiring_ramanujan[153173]:                "ceph.encrypted": "0",
Oct  1 09:19:09 np0005464214 inspiring_ramanujan[153173]:                "ceph.osd_fsid": "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9",
Oct  1 09:19:09 np0005464214 inspiring_ramanujan[153173]:                "ceph.osd_id": "2",
Oct  1 09:19:09 np0005464214 inspiring_ramanujan[153173]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 09:19:09 np0005464214 inspiring_ramanujan[153173]:                "ceph.type": "block",
Oct  1 09:19:09 np0005464214 inspiring_ramanujan[153173]:                "ceph.vdo": "0"
Oct  1 09:19:09 np0005464214 inspiring_ramanujan[153173]:            },
Oct  1 09:19:09 np0005464214 inspiring_ramanujan[153173]:            "type": "block",
Oct  1 09:19:09 np0005464214 inspiring_ramanujan[153173]:            "vg_name": "ceph_vg2"
Oct  1 09:19:09 np0005464214 inspiring_ramanujan[153173]:        }
Oct  1 09:19:09 np0005464214 inspiring_ramanujan[153173]:    ]
Oct  1 09:19:09 np0005464214 inspiring_ramanujan[153173]: }
Oct  1 09:19:09 np0005464214 systemd[1]: libpod-6afcac6c04e56529ecf4f325121d818b27dfb8b334612cdc591d69c96c710e17.scope: Deactivated successfully.
Oct  1 09:19:09 np0005464214 podman[153156]: 2025-10-01 13:19:09.654355942 +0000 UTC m=+0.915581299 container died 6afcac6c04e56529ecf4f325121d818b27dfb8b334612cdc591d69c96c710e17 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_ramanujan, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct  1 09:19:09 np0005464214 systemd[1]: var-lib-containers-storage-overlay-c90ecb47e861a9a01b745fdbd1f57fc543977794728813b05afacac02f526718-merged.mount: Deactivated successfully.
Oct  1 09:19:09 np0005464214 podman[153156]: 2025-10-01 13:19:09.727149641 +0000 UTC m=+0.988374968 container remove 6afcac6c04e56529ecf4f325121d818b27dfb8b334612cdc591d69c96c710e17 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_ramanujan, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Oct  1 09:19:09 np0005464214 systemd[1]: libpod-conmon-6afcac6c04e56529ecf4f325121d818b27dfb8b334612cdc591d69c96c710e17.scope: Deactivated successfully.
Oct  1 09:19:09 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v436: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:19:10 np0005464214 podman[153336]: 2025-10-01 13:19:10.306352666 +0000 UTC m=+0.036495944 container create 456cf46014d6ef5bee5e09c2b42691874cc2e59e3f4565b01408e5b3e09987d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_stonebraker, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 09:19:10 np0005464214 systemd[1]: Started libpod-conmon-456cf46014d6ef5bee5e09c2b42691874cc2e59e3f4565b01408e5b3e09987d5.scope.
Oct  1 09:19:10 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:19:10 np0005464214 podman[153336]: 2025-10-01 13:19:10.370355051 +0000 UTC m=+0.100498339 container init 456cf46014d6ef5bee5e09c2b42691874cc2e59e3f4565b01408e5b3e09987d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_stonebraker, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef)
Oct  1 09:19:10 np0005464214 podman[153336]: 2025-10-01 13:19:10.381568162 +0000 UTC m=+0.111711440 container start 456cf46014d6ef5bee5e09c2b42691874cc2e59e3f4565b01408e5b3e09987d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_stonebraker, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 09:19:10 np0005464214 podman[153336]: 2025-10-01 13:19:10.384993498 +0000 UTC m=+0.115136786 container attach 456cf46014d6ef5bee5e09c2b42691874cc2e59e3f4565b01408e5b3e09987d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_stonebraker, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Oct  1 09:19:10 np0005464214 intelligent_stonebraker[153353]: 167 167
Oct  1 09:19:10 np0005464214 podman[153336]: 2025-10-01 13:19:10.290276333 +0000 UTC m=+0.020419621 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 09:19:10 np0005464214 systemd[1]: libpod-456cf46014d6ef5bee5e09c2b42691874cc2e59e3f4565b01408e5b3e09987d5.scope: Deactivated successfully.
Oct  1 09:19:10 np0005464214 podman[153336]: 2025-10-01 13:19:10.387562749 +0000 UTC m=+0.117706057 container died 456cf46014d6ef5bee5e09c2b42691874cc2e59e3f4565b01408e5b3e09987d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_stonebraker, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 09:19:10 np0005464214 systemd[1]: var-lib-containers-storage-overlay-a69f6547bb4e9fe1169109da329509b67591caf573217383f2ab1061c8bb5eeb-merged.mount: Deactivated successfully.
Oct  1 09:19:10 np0005464214 podman[153336]: 2025-10-01 13:19:10.426597271 +0000 UTC m=+0.156740559 container remove 456cf46014d6ef5bee5e09c2b42691874cc2e59e3f4565b01408e5b3e09987d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_stonebraker, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Oct  1 09:19:10 np0005464214 systemd[1]: libpod-conmon-456cf46014d6ef5bee5e09c2b42691874cc2e59e3f4565b01408e5b3e09987d5.scope: Deactivated successfully.
Oct  1 09:19:10 np0005464214 podman[153378]: 2025-10-01 13:19:10.594856629 +0000 UTC m=+0.040927022 container create b75c56a255b9e1dec561bc7a0496339fcf197884c0949ec780e0457eb124bce0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_jang, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct  1 09:19:10 np0005464214 systemd[1]: Started libpod-conmon-b75c56a255b9e1dec561bc7a0496339fcf197884c0949ec780e0457eb124bce0.scope.
Oct  1 09:19:10 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:19:10 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8ebe456a8af087a8672adbcb9ae5ace290aff1150d181b89cb20386e237493e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 09:19:10 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8ebe456a8af087a8672adbcb9ae5ace290aff1150d181b89cb20386e237493e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 09:19:10 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8ebe456a8af087a8672adbcb9ae5ace290aff1150d181b89cb20386e237493e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 09:19:10 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8ebe456a8af087a8672adbcb9ae5ace290aff1150d181b89cb20386e237493e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 09:19:10 np0005464214 podman[153378]: 2025-10-01 13:19:10.574788921 +0000 UTC m=+0.020859294 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 09:19:10 np0005464214 podman[153378]: 2025-10-01 13:19:10.681867284 +0000 UTC m=+0.127937717 container init b75c56a255b9e1dec561bc7a0496339fcf197884c0949ec780e0457eb124bce0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_jang, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Oct  1 09:19:10 np0005464214 podman[153378]: 2025-10-01 13:19:10.69930492 +0000 UTC m=+0.145375263 container start b75c56a255b9e1dec561bc7a0496339fcf197884c0949ec780e0457eb124bce0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_jang, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 09:19:10 np0005464214 podman[153378]: 2025-10-01 13:19:10.702755198 +0000 UTC m=+0.148825561 container attach b75c56a255b9e1dec561bc7a0496339fcf197884c0949ec780e0457eb124bce0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_jang, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 09:19:10 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:19:11 np0005464214 zealous_jang[153395]: {
Oct  1 09:19:11 np0005464214 zealous_jang[153395]:    "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982": {
Oct  1 09:19:11 np0005464214 zealous_jang[153395]:        "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 09:19:11 np0005464214 zealous_jang[153395]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  1 09:19:11 np0005464214 zealous_jang[153395]:        "osd_id": 0,
Oct  1 09:19:11 np0005464214 zealous_jang[153395]:        "osd_uuid": "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982",
Oct  1 09:19:11 np0005464214 zealous_jang[153395]:        "type": "bluestore"
Oct  1 09:19:11 np0005464214 zealous_jang[153395]:    },
Oct  1 09:19:11 np0005464214 zealous_jang[153395]:    "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9": {
Oct  1 09:19:11 np0005464214 zealous_jang[153395]:        "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 09:19:11 np0005464214 zealous_jang[153395]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  1 09:19:11 np0005464214 zealous_jang[153395]:        "osd_id": 2,
Oct  1 09:19:11 np0005464214 zealous_jang[153395]:        "osd_uuid": "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9",
Oct  1 09:19:11 np0005464214 zealous_jang[153395]:        "type": "bluestore"
Oct  1 09:19:11 np0005464214 zealous_jang[153395]:    },
Oct  1 09:19:11 np0005464214 zealous_jang[153395]:    "f5852bc7-e830-489a-b8a9-42dfbbe71426": {
Oct  1 09:19:11 np0005464214 zealous_jang[153395]:        "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 09:19:11 np0005464214 zealous_jang[153395]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  1 09:19:11 np0005464214 zealous_jang[153395]:        "osd_id": 1,
Oct  1 09:19:11 np0005464214 zealous_jang[153395]:        "osd_uuid": "f5852bc7-e830-489a-b8a9-42dfbbe71426",
Oct  1 09:19:11 np0005464214 zealous_jang[153395]:        "type": "bluestore"
Oct  1 09:19:11 np0005464214 zealous_jang[153395]:    }
Oct  1 09:19:11 np0005464214 zealous_jang[153395]: }
Oct  1 09:19:11 np0005464214 systemd[1]: libpod-b75c56a255b9e1dec561bc7a0496339fcf197884c0949ec780e0457eb124bce0.scope: Deactivated successfully.
Oct  1 09:19:11 np0005464214 systemd[1]: libpod-b75c56a255b9e1dec561bc7a0496339fcf197884c0949ec780e0457eb124bce0.scope: Consumed 1.081s CPU time.
Oct  1 09:19:11 np0005464214 podman[153378]: 2025-10-01 13:19:11.774200226 +0000 UTC m=+1.220270649 container died b75c56a255b9e1dec561bc7a0496339fcf197884c0949ec780e0457eb124bce0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_jang, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct  1 09:19:11 np0005464214 systemd[1]: var-lib-containers-storage-overlay-f8ebe456a8af087a8672adbcb9ae5ace290aff1150d181b89cb20386e237493e-merged.mount: Deactivated successfully.
Oct  1 09:19:11 np0005464214 podman[153378]: 2025-10-01 13:19:11.834342569 +0000 UTC m=+1.280412912 container remove b75c56a255b9e1dec561bc7a0496339fcf197884c0949ec780e0457eb124bce0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_jang, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct  1 09:19:11 np0005464214 systemd[1]: libpod-conmon-b75c56a255b9e1dec561bc7a0496339fcf197884c0949ec780e0457eb124bce0.scope: Deactivated successfully.
Oct  1 09:19:11 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  1 09:19:11 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:19:11 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  1 09:19:11 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:19:11 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev b2e848ff-da64-45d2-96fe-b6d5c03e9fea does not exist
Oct  1 09:19:11 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev 16fbf88a-310f-4c1f-9d06-9e2e5cbb8293 does not exist
Oct  1 09:19:11 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v437: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:19:12 np0005464214 systemd-logind[818]: New session 48 of user zuul.
Oct  1 09:19:12 np0005464214 systemd[1]: Started Session 48 of User zuul.
Oct  1 09:19:12 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:19:12 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:19:13 np0005464214 python3.9[153645]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  1 09:19:13 np0005464214 systemd[1]: Stopping User Manager for UID 0...
Oct  1 09:19:13 np0005464214 systemd[152019]: Activating special unit Exit the Session...
Oct  1 09:19:13 np0005464214 systemd[152019]: Stopped target Main User Target.
Oct  1 09:19:13 np0005464214 systemd[152019]: Stopped target Basic System.
Oct  1 09:19:13 np0005464214 systemd[152019]: Stopped target Paths.
Oct  1 09:19:13 np0005464214 systemd[152019]: Stopped target Sockets.
Oct  1 09:19:13 np0005464214 systemd[152019]: Stopped target Timers.
Oct  1 09:19:13 np0005464214 systemd[152019]: Stopped Daily Cleanup of User's Temporary Directories.
Oct  1 09:19:13 np0005464214 systemd[152019]: Closed D-Bus User Message Bus Socket.
Oct  1 09:19:13 np0005464214 systemd[152019]: Stopped Create User's Volatile Files and Directories.
Oct  1 09:19:13 np0005464214 systemd[152019]: Removed slice User Application Slice.
Oct  1 09:19:13 np0005464214 systemd[152019]: Reached target Shutdown.
Oct  1 09:19:13 np0005464214 systemd[152019]: Finished Exit the Session.
Oct  1 09:19:13 np0005464214 systemd[152019]: Reached target Exit the Session.
Oct  1 09:19:13 np0005464214 systemd[1]: user@0.service: Deactivated successfully.
Oct  1 09:19:13 np0005464214 systemd[1]: Stopped User Manager for UID 0.
Oct  1 09:19:13 np0005464214 systemd[1]: Stopping User Runtime Directory /run/user/0...
Oct  1 09:19:13 np0005464214 systemd[1]: run-user-0.mount: Deactivated successfully.
Oct  1 09:19:13 np0005464214 systemd[1]: user-runtime-dir@0.service: Deactivated successfully.
Oct  1 09:19:13 np0005464214 systemd[1]: Stopped User Runtime Directory /run/user/0.
Oct  1 09:19:13 np0005464214 systemd[1]: Removed slice User Slice of UID 0.
Oct  1 09:19:13 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v438: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:19:14 np0005464214 python3.9[153803]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct  1 09:19:15 np0005464214 python3.9[153955]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  1 09:19:15 np0005464214 python3.9[154107]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/kill_scripts setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  1 09:19:15 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:19:15 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v439: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:19:16 np0005464214 python3.9[154259]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/ovn-metadata-proxy setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  1 09:19:17 np0005464214 python3.9[154411]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/external/pids setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  1 09:19:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 09:19:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 09:19:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 09:19:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 09:19:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 09:19:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 09:19:17 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v440: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:19:18 np0005464214 python3.9[154561]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  1 09:19:19 np0005464214 python3.9[154713]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Oct  1 09:19:19 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v441: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:19:20 np0005464214 python3.9[154863]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/ovn_metadata_haproxy_wrapper follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 09:19:20 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:19:21 np0005464214 python3.9[154984]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/ovn_metadata_haproxy_wrapper mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759324759.9217083-86-212900004093909/.source follow=False _original_basename=haproxy.j2 checksum=3032b37a17ecbb7a27e901a243b96261ef70a559 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  1 09:19:21 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v442: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:19:22 np0005464214 python3.9[155135]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/kill_scripts/haproxy-kill follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 09:19:22 np0005464214 python3.9[155256]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/kill_scripts/haproxy-kill mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759324761.5796463-101-31830838153653/.source follow=False _original_basename=kill-script.j2 checksum=2dfb5489f491f61b95691c3bf95fa1fe48ff3700 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  1 09:19:23 np0005464214 python3.9[155408]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct  1 09:19:23 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v443: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:19:24 np0005464214 python3.9[155492]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct  1 09:19:25 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:19:25 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v444: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:19:26 np0005464214 python3.9[155645]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct  1 09:19:27 np0005464214 python3.9[155798]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-rootwrap.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 09:19:27 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v445: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:19:28 np0005464214 python3.9[155919]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-rootwrap.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759324767.0593774-138-92313042679548/.source.conf follow=False _original_basename=rootwrap.conf.j2 checksum=11f2cfb4b7d97b2cef3c2c2d88089e6999cffe22 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  1 09:19:28 np0005464214 python3.9[156069]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 09:19:29 np0005464214 python3.9[156192]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759324768.370188-138-268189513629070/.source.conf follow=False _original_basename=neutron-ovn-metadata-agent.conf.j2 checksum=8bc979abbe81c2cf3993a225517a7e2483e20443 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  1 09:19:29 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v446: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:19:30 np0005464214 python3.9[156342]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/10-neutron-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 09:19:30 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:19:31 np0005464214 python3.9[156463]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/10-neutron-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759324770.268938-182-106148326016263/.source.conf _original_basename=10-neutron-metadata.conf follow=False checksum=ca7d4d155f5b812fab1a3b70e34adb495d291b8d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  1 09:19:31 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v447: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:19:32 np0005464214 python3.9[156613]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/05-nova-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 09:19:32 np0005464214 python3.9[156734]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/05-nova-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759324771.6564763-182-139132528206125/.source.conf _original_basename=05-nova-metadata.conf follow=False checksum=a14d6b38898a379cd37fc0bf365d17f10859446f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  1 09:19:33 np0005464214 ovn_controller[151996]: 2025-10-01T13:19:33Z|00025|memory|INFO|16128 kB peak resident set size after 29.8 seconds
Oct  1 09:19:33 np0005464214 ovn_controller[151996]: 2025-10-01T13:19:33Z|00026|memory|INFO|idl-cells-OVN_Southbound:239 idl-cells-Open_vSwitch:528 ofctrl_desired_flow_usage-KB:5 ofctrl_installed_flow_usage-KB:4 ofctrl_sb_flow_ref_usage-KB:2
Oct  1 09:19:33 np0005464214 podman[156858]: 2025-10-01 13:19:33.245859375 +0000 UTC m=+0.136842225 container health_status 583cde33fa111e3f81ff9a7adf51b7bee43cd123d54d60383b2d474b3b5066ad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20250923, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Oct  1 09:19:33 np0005464214 python3.9[156900]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  1 09:19:33 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v448: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:19:34 np0005464214 python3.9[157065]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  1 09:19:34 np0005464214 python3.9[157217]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 09:19:35 np0005464214 python3.9[157297]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  1 09:19:35 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:19:35 np0005464214 python3.9[157449]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 09:19:35 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v449: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:19:36 np0005464214 python3.9[157527]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  1 09:19:37 np0005464214 python3.9[157679]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 09:19:37 np0005464214 python3.9[157831]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 09:19:37 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v450: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:19:38 np0005464214 python3.9[157909]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 09:19:39 np0005464214 python3.9[158061]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 09:19:39 np0005464214 python3.9[158139]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 09:19:39 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v451: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:19:40 np0005464214 python3.9[158291]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  1 09:19:40 np0005464214 systemd[1]: Reloading.
Oct  1 09:19:40 np0005464214 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  1 09:19:40 np0005464214 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  1 09:19:40 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:19:41 np0005464214 python3.9[158484]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 09:19:41 np0005464214 python3.9[158562]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 09:19:41 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v452: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:19:42 np0005464214 python3.9[158714]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 09:19:43 np0005464214 python3.9[158794]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 09:19:43 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v453: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:19:44 np0005464214 python3.9[158946]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  1 09:19:44 np0005464214 systemd[1]: Reloading.
Oct  1 09:19:44 np0005464214 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  1 09:19:44 np0005464214 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  1 09:19:44 np0005464214 systemd[1]: Starting Create netns directory...
Oct  1 09:19:44 np0005464214 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Oct  1 09:19:44 np0005464214 systemd[1]: netns-placeholder.service: Deactivated successfully.
Oct  1 09:19:44 np0005464214 systemd[1]: Finished Create netns directory.
Oct  1 09:19:45 np0005464214 python3.9[159139]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  1 09:19:45 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:19:45 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v454: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:19:46 np0005464214 python3.9[159291]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_metadata_agent/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 09:19:46 np0005464214 python3.9[159414]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_metadata_agent/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759324785.4708555-333-269869075503420/.source _original_basename=healthcheck follow=False checksum=898a5a1fcd473cf731177fc866e3bd7ebf20a131 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct  1 09:19:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] Optimize plan auto_2025-10-01_13:19:47
Oct  1 09:19:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  1 09:19:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] do_upmap
Oct  1 09:19:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] pools ['default.rgw.control', '.rgw.root', 'default.rgw.meta', 'volumes', 'default.rgw.log', 'images', '.mgr', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'backups', 'vms']
Oct  1 09:19:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] prepared 0/10 changes
Oct  1 09:19:47 np0005464214 python3.9[159566]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  1 09:19:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 09:19:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 09:19:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 09:19:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 09:19:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 09:19:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 09:19:47 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  1 09:19:47 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  1 09:19:47 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  1 09:19:47 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  1 09:19:47 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  1 09:19:47 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  1 09:19:47 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  1 09:19:47 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  1 09:19:47 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  1 09:19:47 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  1 09:19:47 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v455: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:19:48 np0005464214 python3.9[159718]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_metadata_agent.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 09:19:49 np0005464214 python3.9[159841]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_metadata_agent.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1759324787.9780657-358-80100632460010/.source.json _original_basename=.6hxcsszs follow=False checksum=a908ef151ded3a33ae6c9ac8be72a35e5e33b9dc backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 09:19:49 np0005464214 python3.9[159993]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 09:19:49 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v456: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:19:50 np0005464214 auditd[705]: Audit daemon rotating log files
Oct  1 09:19:50 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:19:51 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v457: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:19:52 np0005464214 python3.9[160420]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_pattern=*.json debug=False
Oct  1 09:19:53 np0005464214 python3.9[160572]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Oct  1 09:19:53 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v458: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:19:54 np0005464214 python3.9[160724]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Oct  1 09:19:55 np0005464214 python3[160903]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_id=ovn_metadata_agent config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Oct  1 09:19:55 np0005464214 ceph-osd[88455]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  1 09:19:55 np0005464214 ceph-osd[88455]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 600.0 total, 600.0 interval#012Cumulative writes: 5538 writes, 23K keys, 5538 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s#012Cumulative WAL: 5538 writes, 846 syncs, 6.55 writes per sync, written: 0.02 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 5538 writes, 23K keys, 5538 commit groups, 1.0 writes per commit group, ingest: 18.76 MB, 0.03 MB/s#012Interval WAL: 5538 writes, 846 syncs, 6.55 writes per sync, written: 0.02 GB, 0.03 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.00              0.00         1    0.005       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.00              0.00         1    0.005       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.00              0.00         1    0.005       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.0 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55b6550e31f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.7e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.0 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55b6550e31f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.7e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.0 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_s
Oct  1 09:19:55 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:19:55 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v459: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:19:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] _maybe_adjust
Oct  1 09:19:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:19:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  1 09:19:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:19:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 09:19:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:19:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 09:19:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:19:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 09:19:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:19:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 09:19:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:19:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct  1 09:19:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:19:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 09:19:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:19:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  1 09:19:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:19:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  1 09:19:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:19:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 09:19:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:19:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  1 09:19:57 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v460: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:19:59 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v461: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:20:00 np0005464214 ceph-osd[89484]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  1 09:20:00 np0005464214 ceph-osd[89484]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 600.1 total, 600.0 interval#012Cumulative writes: 6794 writes, 28K keys, 6794 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s#012Cumulative WAL: 6794 writes, 1230 syncs, 5.52 writes per sync, written: 0.02 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 6794 writes, 28K keys, 6794 commit groups, 1.0 writes per commit group, ingest: 19.73 MB, 0.03 MB/s#012Interval WAL: 6794 writes, 1230 syncs, 5.52 writes per sync, written: 0.02 GB, 0.03 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) 
Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55f3dbe0d1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.9e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) 
Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55f3dbe0d1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 
collections: 2 last_copies: 8 last_secs: 2.9e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 
seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable
Oct  1 09:20:00 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:20:01 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v462: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:20:03 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v463: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:20:04 np0005464214 podman[161004]: 2025-10-01 13:20:04.074485969 +0000 UTC m=+0.629465622 container health_status 583cde33fa111e3f81ff9a7adf51b7bee43cd123d54d60383b2d474b3b5066ad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250923, tcib_managed=true, config_id=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
Oct  1 09:20:04 np0005464214 podman[160917]: 2025-10-01 13:20:04.41666456 +0000 UTC m=+8.808708080 image pull aa21cc3d2531fe07b45a943d4ac1ba0268bfab26b0884a4a00fbad7695318ba9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab
Oct  1 09:20:04 np0005464214 podman[161070]: 2025-10-01 13:20:04.549599274 +0000 UTC m=+0.021697571 image pull aa21cc3d2531fe07b45a943d4ac1ba0268bfab26b0884a4a00fbad7695318ba9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab
Oct  1 09:20:05 np0005464214 podman[161070]: 2025-10-01 13:20:05.138150761 +0000 UTC m=+0.610249058 container create dfa509645741d50db256c392f2c7e50ef8a1653f296eb853aefbe3d2ba4746c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab, name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_id=ovn_metadata_agent, tcib_managed=true, org.label-schema.build-date=20250923, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Oct  1 09:20:05 np0005464214 python3[160903]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_metadata_agent --cgroupns=host --conmon-pidfile /run/ovn_metadata_agent.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env EDPM_CONFIG_HASH=0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d --healthcheck-command /openstack/healthcheck --label config_id=ovn_metadata_agent --label container_name=ovn_metadata_agent --label managed_by=edpm_ansible --label config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']} 
--log-driver journald --log-level info --network host --pid host --privileged=True --user root --volume /run/openvswitch:/run/openvswitch:z --volume /var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z --volume /run/netns:/run/netns:shared --volume /var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/neutron:/var/lib/neutron:shared,z --volume /var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro --volume /var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro --volume /var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab
Oct  1 09:20:05 np0005464214 ceph-osd[90500]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  1 09:20:05 np0005464214 ceph-osd[90500]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 600.1 total, 600.0 interval#012Cumulative writes: 5455 writes, 23K keys, 5455 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s#012Cumulative WAL: 5455 writes, 785 syncs, 6.95 writes per sync, written: 0.02 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 5455 writes, 23K keys, 5455 commit groups, 1.0 writes per commit group, ingest: 18.60 MB, 0.03 MB/s#012Interval WAL: 5455 writes, 785 syncs, 6.95 writes per sync, written: 0.02 GB, 0.03 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) 
Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55b1adb871f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.9e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) 
Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55b1adb871f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 
collections: 2 last_copies: 8 last_secs: 4.9e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 
seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_s
Oct  1 09:20:05 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:20:05 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v464: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:20:05 np0005464214 python3.9[161260]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  1 09:20:06 np0005464214 python3.9[161414]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 09:20:07 np0005464214 python3.9[161490]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  1 09:20:07 np0005464214 ceph-mgr[75103]: [devicehealth INFO root] Check health
Oct  1 09:20:07 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v465: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:20:07 np0005464214 python3.9[161641]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759324807.2111309-446-137959937942174/source dest=/etc/systemd/system/edpm_ovn_metadata_agent.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 09:20:08 np0005464214 python3.9[161717]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct  1 09:20:08 np0005464214 systemd[1]: Reloading.
Oct  1 09:20:08 np0005464214 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  1 09:20:08 np0005464214 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  1 09:20:09 np0005464214 python3.9[161828]: ansible-systemd Invoked with state=restarted name=edpm_ovn_metadata_agent.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  1 09:20:09 np0005464214 systemd[1]: Reloading.
Oct  1 09:20:09 np0005464214 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  1 09:20:09 np0005464214 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  1 09:20:09 np0005464214 systemd[1]: Starting ovn_metadata_agent container...
Oct  1 09:20:09 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:20:09 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1acb41f7114ab618d63698fd674156b117e354f2ee6c45c2ffe9ed7a83f99763/merged/etc/neutron.conf.d supports timestamps until 2038 (0x7fffffff)
Oct  1 09:20:09 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1acb41f7114ab618d63698fd674156b117e354f2ee6c45c2ffe9ed7a83f99763/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  1 09:20:09 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v466: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:20:10 np0005464214 systemd[1]: Started /usr/bin/podman healthcheck run dfa509645741d50db256c392f2c7e50ef8a1653f296eb853aefbe3d2ba4746c9.
Oct  1 09:20:10 np0005464214 podman[161869]: 2025-10-01 13:20:10.01255315 +0000 UTC m=+0.185837647 container init dfa509645741d50db256c392f2c7e50ef8a1653f296eb853aefbe3d2ba4746c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab, name=ovn_metadata_agent, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20250923, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Oct  1 09:20:10 np0005464214 ovn_metadata_agent[161885]: + sudo -E kolla_set_configs
Oct  1 09:20:10 np0005464214 podman[161869]: 2025-10-01 13:20:10.05304507 +0000 UTC m=+0.226329577 container start dfa509645741d50db256c392f2c7e50ef8a1653f296eb853aefbe3d2ba4746c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab, name=ovn_metadata_agent, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, 
tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20250923, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  1 09:20:10 np0005464214 edpm-start-podman-container[161869]: ovn_metadata_agent
Oct  1 09:20:10 np0005464214 podman[161892]: 2025-10-01 13:20:10.14959121 +0000 UTC m=+0.075693327 container health_status dfa509645741d50db256c392f2c7e50ef8a1653f296eb853aefbe3d2ba4746c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20250923, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true)
Oct  1 09:20:10 np0005464214 edpm-start-podman-container[161868]: Creating additional drop-in dependency for "ovn_metadata_agent" (dfa509645741d50db256c392f2c7e50ef8a1653f296eb853aefbe3d2ba4746c9)
Oct  1 09:20:10 np0005464214 ovn_metadata_agent[161885]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Oct  1 09:20:10 np0005464214 systemd[1]: Reloading.
Oct  1 09:20:10 np0005464214 ovn_metadata_agent[161885]: INFO:__main__:Validating config file
Oct  1 09:20:10 np0005464214 ovn_metadata_agent[161885]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Oct  1 09:20:10 np0005464214 ovn_metadata_agent[161885]: INFO:__main__:Copying service configuration files
Oct  1 09:20:10 np0005464214 ovn_metadata_agent[161885]: INFO:__main__:Deleting /etc/neutron/rootwrap.conf
Oct  1 09:20:10 np0005464214 ovn_metadata_agent[161885]: INFO:__main__:Copying /etc/neutron.conf.d/01-rootwrap.conf to /etc/neutron/rootwrap.conf
Oct  1 09:20:10 np0005464214 ovn_metadata_agent[161885]: INFO:__main__:Setting permission for /etc/neutron/rootwrap.conf
Oct  1 09:20:10 np0005464214 ovn_metadata_agent[161885]: INFO:__main__:Writing out command to execute
Oct  1 09:20:10 np0005464214 ovn_metadata_agent[161885]: INFO:__main__:Setting permission for /var/lib/neutron
Oct  1 09:20:10 np0005464214 ovn_metadata_agent[161885]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts
Oct  1 09:20:10 np0005464214 ovn_metadata_agent[161885]: INFO:__main__:Setting permission for /var/lib/neutron/ovn-metadata-proxy
Oct  1 09:20:10 np0005464214 ovn_metadata_agent[161885]: INFO:__main__:Setting permission for /var/lib/neutron/external
Oct  1 09:20:10 np0005464214 ovn_metadata_agent[161885]: INFO:__main__:Setting permission for /var/lib/neutron/ovn_metadata_haproxy_wrapper
Oct  1 09:20:10 np0005464214 ovn_metadata_agent[161885]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts/haproxy-kill
Oct  1 09:20:10 np0005464214 ovn_metadata_agent[161885]: INFO:__main__:Setting permission for /var/lib/neutron/external/pids
Oct  1 09:20:10 np0005464214 ovn_metadata_agent[161885]: ++ cat /run_command
Oct  1 09:20:10 np0005464214 ovn_metadata_agent[161885]: + CMD=neutron-ovn-metadata-agent
Oct  1 09:20:10 np0005464214 ovn_metadata_agent[161885]: + ARGS=
Oct  1 09:20:10 np0005464214 ovn_metadata_agent[161885]: + sudo kolla_copy_cacerts
Oct  1 09:20:10 np0005464214 ovn_metadata_agent[161885]: + [[ ! -n '' ]]
Oct  1 09:20:10 np0005464214 ovn_metadata_agent[161885]: + . kolla_extend_start
Oct  1 09:20:10 np0005464214 ovn_metadata_agent[161885]: + echo 'Running command: '\''neutron-ovn-metadata-agent'\'''
Oct  1 09:20:10 np0005464214 ovn_metadata_agent[161885]: Running command: 'neutron-ovn-metadata-agent'
Oct  1 09:20:10 np0005464214 ovn_metadata_agent[161885]: + umask 0022
Oct  1 09:20:10 np0005464214 ovn_metadata_agent[161885]: + exec neutron-ovn-metadata-agent
Oct  1 09:20:10 np0005464214 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  1 09:20:10 np0005464214 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  1 09:20:10 np0005464214 systemd[1]: Started ovn_metadata_agent container.
Oct  1 09:20:10 np0005464214 systemd[1]: session-48.scope: Deactivated successfully.
Oct  1 09:20:10 np0005464214 systemd[1]: session-48.scope: Consumed 56.529s CPU time.
Oct  1 09:20:10 np0005464214 systemd-logind[818]: Session 48 logged out. Waiting for processes to exit.
Oct  1 09:20:10 np0005464214 systemd-logind[818]: Removed session 48.
Oct  1 09:20:10 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:20:11 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v467: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.249 161890 INFO neutron.common.config [-] Logging enabled!#033[00m
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.249 161890 INFO neutron.common.config [-] /usr/bin/neutron-ovn-metadata-agent version 22.2.2.dev43#033[00m
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.249 161890 DEBUG neutron.common.config [-] command line: /usr/bin/neutron-ovn-metadata-agent setup_logging /usr/lib/python3.9/site-packages/neutron/common/config.py:123#033[00m
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.250 161890 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.250 161890 DEBUG neutron.agent.ovn.metadata_agent [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.250 161890 DEBUG neutron.agent.ovn.metadata_agent [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.250 161890 DEBUG neutron.agent.ovn.metadata_agent [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.250 161890 DEBUG neutron.agent.ovn.metadata_agent [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.250 161890 DEBUG neutron.agent.ovn.metadata_agent [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.251 161890 DEBUG neutron.agent.ovn.metadata_agent [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.251 161890 DEBUG neutron.agent.ovn.metadata_agent [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.251 161890 DEBUG neutron.agent.ovn.metadata_agent [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.251 161890 DEBUG neutron.agent.ovn.metadata_agent [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.251 161890 DEBUG neutron.agent.ovn.metadata_agent [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.251 161890 DEBUG neutron.agent.ovn.metadata_agent [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.251 161890 DEBUG neutron.agent.ovn.metadata_agent [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.251 161890 DEBUG neutron.agent.ovn.metadata_agent [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.252 161890 DEBUG neutron.agent.ovn.metadata_agent [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.252 161890 DEBUG neutron.agent.ovn.metadata_agent [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.252 161890 DEBUG neutron.agent.ovn.metadata_agent [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.252 161890 DEBUG neutron.agent.ovn.metadata_agent [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.252 161890 DEBUG neutron.agent.ovn.metadata_agent [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.252 161890 DEBUG neutron.agent.ovn.metadata_agent [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.252 161890 DEBUG neutron.agent.ovn.metadata_agent [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.252 161890 DEBUG neutron.agent.ovn.metadata_agent [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.252 161890 DEBUG neutron.agent.ovn.metadata_agent [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.252 161890 DEBUG neutron.agent.ovn.metadata_agent [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.253 161890 DEBUG neutron.agent.ovn.metadata_agent [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.253 161890 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.253 161890 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.253 161890 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.253 161890 DEBUG neutron.agent.ovn.metadata_agent [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.253 161890 DEBUG neutron.agent.ovn.metadata_agent [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.253 161890 DEBUG neutron.agent.ovn.metadata_agent [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.253 161890 DEBUG neutron.agent.ovn.metadata_agent [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.253 161890 DEBUG neutron.agent.ovn.metadata_agent [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.254 161890 DEBUG neutron.agent.ovn.metadata_agent [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.254 161890 DEBUG neutron.agent.ovn.metadata_agent [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.254 161890 DEBUG neutron.agent.ovn.metadata_agent [-] host                           = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.254 161890 DEBUG neutron.agent.ovn.metadata_agent [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.254 161890 DEBUG neutron.agent.ovn.metadata_agent [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.254 161890 DEBUG neutron.agent.ovn.metadata_agent [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.254 161890 DEBUG neutron.agent.ovn.metadata_agent [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.254 161890 DEBUG neutron.agent.ovn.metadata_agent [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.254 161890 DEBUG neutron.agent.ovn.metadata_agent [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.254 161890 DEBUG neutron.agent.ovn.metadata_agent [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.254 161890 DEBUG neutron.agent.ovn.metadata_agent [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.255 161890 DEBUG neutron.agent.ovn.metadata_agent [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.255 161890 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.255 161890 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.255 161890 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.255 161890 DEBUG neutron.agent.ovn.metadata_agent [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.255 161890 DEBUG neutron.agent.ovn.metadata_agent [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.255 161890 DEBUG neutron.agent.ovn.metadata_agent [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.255 161890 DEBUG neutron.agent.ovn.metadata_agent [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.255 161890 DEBUG neutron.agent.ovn.metadata_agent [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.256 161890 DEBUG neutron.agent.ovn.metadata_agent [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.256 161890 DEBUG neutron.agent.ovn.metadata_agent [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.256 161890 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.256 161890 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.256 161890 DEBUG neutron.agent.ovn.metadata_agent [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.256 161890 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.256 161890 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.256 161890 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.256 161890 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.256 161890 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.257 161890 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.257 161890 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.257 161890 DEBUG neutron.agent.ovn.metadata_agent [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.257 161890 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.257 161890 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.257 161890 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.257 161890 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.257 161890 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.257 161890 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.258 161890 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.258 161890 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.258 161890 DEBUG neutron.agent.ovn.metadata_agent [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.258 161890 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.258 161890 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.258 161890 DEBUG neutron.agent.ovn.metadata_agent [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.258 161890 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.258 161890 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.258 161890 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.258 161890 DEBUG neutron.agent.ovn.metadata_agent [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.259 161890 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.259 161890 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.259 161890 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.259 161890 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.259 161890 DEBUG neutron.agent.ovn.metadata_agent [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.259 161890 DEBUG neutron.agent.ovn.metadata_agent [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.259 161890 DEBUG neutron.agent.ovn.metadata_agent [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.259 161890 DEBUG neutron.agent.ovn.metadata_agent [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.259 161890 DEBUG neutron.agent.ovn.metadata_agent [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.259 161890 DEBUG neutron.agent.ovn.metadata_agent [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.260 161890 DEBUG neutron.agent.ovn.metadata_agent [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.260 161890 DEBUG neutron.agent.ovn.metadata_agent [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.260 161890 DEBUG neutron.agent.ovn.metadata_agent [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.260 161890 DEBUG neutron.agent.ovn.metadata_agent [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.260 161890 DEBUG neutron.agent.ovn.metadata_agent [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.260 161890 DEBUG neutron.agent.ovn.metadata_agent [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.260 161890 DEBUG neutron.agent.ovn.metadata_agent [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.260 161890 DEBUG neutron.agent.ovn.metadata_agent [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.260 161890 DEBUG neutron.agent.ovn.metadata_agent [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.260 161890 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.260 161890 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.261 161890 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.261 161890 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.261 161890 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.261 161890 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.261 161890 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.261 161890 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.261 161890 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.261 161890 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.261 161890 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.262 161890 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.262 161890 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.262 161890 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.262 161890 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.262 161890 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.262 161890 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.262 161890 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.262 161890 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.262 161890 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.263 161890 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.263 161890 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.263 161890 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.263 161890 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.263 161890 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.263 161890 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.263 161890 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.263 161890 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.264 161890 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.264 161890 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.264 161890 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.264 161890 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.264 161890 DEBUG neutron.agent.ovn.metadata_agent [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.264 161890 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.264 161890 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.264 161890 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.265 161890 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.265 161890 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.265 161890 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.265 161890 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.265 161890 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.265 161890 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.265 161890 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.265 161890 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.265 161890 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.265 161890 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.266 161890 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.266 161890 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.266 161890 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.266 161890 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.266 161890 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.266 161890 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.266 161890 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.267 161890 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.267 161890 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.267 161890 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.267 161890 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.267 161890 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.267 161890 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.267 161890 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.267 161890 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.268 161890 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.268 161890 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.268 161890 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.268 161890 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.268 161890 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.268 161890 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.268 161890 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.268 161890 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.269 161890 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.269 161890 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.269 161890 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.269 161890 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.269 161890 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.269 161890 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.269 161890 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.269 161890 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.269 161890 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.269 161890 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.270 161890 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.270 161890 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.270 161890 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.270 161890 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.270 161890 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.270 161890 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.270 161890 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.270 161890 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.271 161890 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.271 161890 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.271 161890 DEBUG neutron.agent.ovn.metadata_agent [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.271 161890 DEBUG neutron.agent.ovn.metadata_agent [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.271 161890 DEBUG neutron.agent.ovn.metadata_agent [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.271 161890 DEBUG neutron.agent.ovn.metadata_agent [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.271 161890 DEBUG neutron.agent.ovn.metadata_agent [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.271 161890 DEBUG neutron.agent.ovn.metadata_agent [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.272 161890 DEBUG neutron.agent.ovn.metadata_agent [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.272 161890 DEBUG neutron.agent.ovn.metadata_agent [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.272 161890 DEBUG neutron.agent.ovn.metadata_agent [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.272 161890 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.272 161890 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.272 161890 DEBUG neutron.agent.ovn.metadata_agent [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.272 161890 DEBUG neutron.agent.ovn.metadata_agent [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.272 161890 DEBUG neutron.agent.ovn.metadata_agent [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.272 161890 DEBUG neutron.agent.ovn.metadata_agent [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.272 161890 DEBUG neutron.agent.ovn.metadata_agent [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.273 161890 DEBUG neutron.agent.ovn.metadata_agent [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.273 161890 DEBUG neutron.agent.ovn.metadata_agent [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.273 161890 DEBUG neutron.agent.ovn.metadata_agent [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.273 161890 DEBUG neutron.agent.ovn.metadata_agent [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.273 161890 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.273 161890 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.273 161890 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.273 161890 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.273 161890 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.273 161890 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.274 161890 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.274 161890 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.274 161890 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.274 161890 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.274 161890 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.274 161890 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.274 161890 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.274 161890 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.274 161890 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.274 161890 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.275 161890 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.275 161890 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.275 161890 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.275 161890 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.275 161890 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.275 161890 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.275 161890 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.275 161890 DEBUG neutron.agent.ovn.metadata_agent [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.275 161890 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.276 161890 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.276 161890 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.276 161890 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.276 161890 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.276 161890 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.276 161890 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.276 161890 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.276 161890 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.276 161890 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.276 161890 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.277 161890 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.277 161890 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.277 161890 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.277 161890 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.277 161890 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.277 161890 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.277 161890 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.277 161890 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.277 161890 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.277 161890 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.278 161890 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.278 161890 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.278 161890 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.278 161890 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.278 161890 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.278 161890 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.278 161890 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.278 161890 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.278 161890 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.279 161890 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.279 161890 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.279 161890 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.279 161890 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.279 161890 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.279 161890 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.279 161890 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.279 161890 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.279 161890 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.279 161890 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.280 161890 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.280 161890 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.280 161890 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.280 161890 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.280 161890 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.280 161890 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.280 161890 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.280 161890 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.280 161890 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.281 161890 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.281 161890 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.281 161890 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.281 161890 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.281 161890 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.281 161890 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.281 161890 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.282 161890 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.282 161890 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.282 161890 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.282 161890 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.282 161890 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.282 161890 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.282 161890 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.282 161890 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.282 161890 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.282 161890 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.283 161890 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.291 161890 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.291 161890 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.291 161890 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.292 161890 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connecting...#033[00m
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.292 161890 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connected#033[00m
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.304 161890 DEBUG neutron.agent.ovn.metadata.agent [-] Loaded chassis name 7280030e-2ba6-406c-9fae-f8284a927c47 (UUID: 7280030e-2ba6-406c-9fae-f8284a927c47) and ovn bridge br-int. _load_config /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:309#033[00m
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.331 161890 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry#033[00m
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.331 161890 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87#033[00m
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.331 161890 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.331 161890 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Chassis_Private.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.344 161890 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...#033[00m
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.355 161890 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected#033[00m
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.398 161890 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched CREATE: ChassisPrivateCreateEvent(events=('create',), table='Chassis_Private', conditions=(('name', '=', '7280030e-2ba6-406c-9fae-f8284a927c47'),), old_conditions=None), priority=20 to row=Chassis_Private(chassis=[<ovs.db.idl.Row object at 0x7f240fd97850>], external_ids={}, name=7280030e-2ba6-406c-9fae-f8284a927c47, nb_cfg_timestamp=1759324751490, nb_cfg=1) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.399 161890 DEBUG neutron_lib.callbacks.manager [-] Subscribe: <bound method MetadataProxyHandler.post_fork_initialize of <neutron.agent.ovn.metadata.server.MetadataProxyHandler object at 0x7f240fd3f310>> process after_init 55550000, False subscribe /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:52#033[00m
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.400 161890 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.400 161890 DEBUG oslo_concurrency.lockutils [-] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.401 161890 DEBUG oslo_concurrency.lockutils [-] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.401 161890 INFO oslo_service.service [-] Starting 1 workers
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.405 161890 DEBUG oslo_service.service [-] Started child 162099 _start_child /usr/lib/python3.9/site-packages/oslo_service/service.py:575
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.408 161890 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.namespace_cmd', '--privsep_sock_path', '/tmp/tmpoikdb4t9/privsep.sock']
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.410 162099 DEBUG neutron_lib.callbacks.manager [-] Publish callbacks ['neutron.agent.ovn.metadata.server.MetadataProxyHandler.post_fork_initialize-166439'] for process (None), after_init _notify_loop /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:184
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.446 162099 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.447 162099 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.447 162099 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.452 162099 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.461 162099 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected
Oct  1 09:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.469 162099 INFO eventlet.wsgi.server [-] (162099) wsgi starting up on http:/var/lib/neutron/metadata_proxy
Oct  1 09:20:12 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  1 09:20:12 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:20:12 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  1 09:20:12 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:20:12 np0005464214 kernel: capability: warning: `privsep-helper' uses deprecated v2 capabilities in a way that may be insecure
Oct  1 09:20:13 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:13.092 161890 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Oct  1 09:20:13 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:13.093 161890 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpoikdb4t9/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Oct  1 09:20:13 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.974 162238 INFO oslo.privsep.daemon [-] privsep daemon starting
Oct  1 09:20:13 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.978 162238 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Oct  1 09:20:13 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.980 162238 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_SYS_ADMIN/CAP_SYS_ADMIN/none
Oct  1 09:20:13 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.981 162238 INFO oslo.privsep.daemon [-] privsep daemon running as pid 162238
Oct  1 09:20:13 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:13.096 162238 DEBUG oslo.privsep.daemon [-] privsep: reply[c26d54ba-75d6-4be4-bcf0-79595e75c21e]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  1 09:20:13 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Oct  1 09:20:13 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct  1 09:20:13 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  1 09:20:13 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  1 09:20:13 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  1 09:20:13 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  1 09:20:13 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  1 09:20:13 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:20:13 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev 3b98bf2d-d8d8-4704-91cc-6987f33f3114 does not exist
Oct  1 09:20:13 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev 32f51114-f71f-486d-8271-3656aa9ab662 does not exist
Oct  1 09:20:13 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev 8acad495-b47b-4425-8211-e252f93d1248 does not exist
Oct  1 09:20:13 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  1 09:20:13 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  1 09:20:13 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  1 09:20:13 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  1 09:20:13 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  1 09:20:13 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  1 09:20:13 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:13.550 162238 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  1 09:20:13 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:13.551 162238 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  1 09:20:13 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:13.551 162238 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  1 09:20:13 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:20:13 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:20:13 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct  1 09:20:13 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  1 09:20:13 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:20:13 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  1 09:20:13 np0005464214 podman[162403]: 2025-10-01 13:20:13.944293212 +0000 UTC m=+0.049414016 container create 7d600eeaf2430ed83c14c1198f8bcafd368e05eaf42ac771d367773815ca5a21 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_swirles, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 09:20:13 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v468: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:20:13 np0005464214 systemd[1]: Started libpod-conmon-7d600eeaf2430ed83c14c1198f8bcafd368e05eaf42ac771d367773815ca5a21.scope.
Oct  1 09:20:14 np0005464214 podman[162403]: 2025-10-01 13:20:13.917236267 +0000 UTC m=+0.022357111 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 09:20:14 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:20:14 np0005464214 podman[162403]: 2025-10-01 13:20:14.032944418 +0000 UTC m=+0.138065382 container init 7d600eeaf2430ed83c14c1198f8bcafd368e05eaf42ac771d367773815ca5a21 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_swirles, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef)
Oct  1 09:20:14 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.044 162238 DEBUG oslo.privsep.daemon [-] privsep: reply[d681bf7d-4f9d-43de-a8c8-0b9cfdd65350]: (4, []) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  1 09:20:14 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.047 161890 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbAddCommand(_result=None, table=Chassis_Private, record=7280030e-2ba6-406c-9fae-f8284a927c47, column=external_ids, values=({'neutron:ovn-metadata-id': 'dd134fee-c268-55e9-81d6-d964cb333c5f'},)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct  1 09:20:14 np0005464214 podman[162403]: 2025-10-01 13:20:14.047618611 +0000 UTC m=+0.152739385 container start 7d600eeaf2430ed83c14c1198f8bcafd368e05eaf42ac771d367773815ca5a21 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_swirles, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2)
Oct  1 09:20:14 np0005464214 podman[162403]: 2025-10-01 13:20:14.052139371 +0000 UTC m=+0.157260185 container attach 7d600eeaf2430ed83c14c1198f8bcafd368e05eaf42ac771d367773815ca5a21 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_swirles, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct  1 09:20:14 np0005464214 happy_swirles[162419]: 167 167
Oct  1 09:20:14 np0005464214 systemd[1]: libpod-7d600eeaf2430ed83c14c1198f8bcafd368e05eaf42ac771d367773815ca5a21.scope: Deactivated successfully.
Oct  1 09:20:14 np0005464214 podman[162403]: 2025-10-01 13:20:14.056026731 +0000 UTC m=+0.161147495 container died 7d600eeaf2430ed83c14c1198f8bcafd368e05eaf42ac771d367773815ca5a21 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_swirles, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 09:20:14 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.057 161890 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=7280030e-2ba6-406c-9fae-f8284a927c47, col_values=(('external_ids', {'neutron:ovn-bridge': 'br-int'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct  1 09:20:14 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.065 161890 DEBUG oslo_service.service [-] Full set of CONF: wait /usr/lib/python3.9/site-packages/oslo_service/service.py:649
Oct  1 09:20:14 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.065 161890 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Oct  1 09:20:14 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.065 161890 DEBUG oslo_service.service [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Oct  1 09:20:14 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.065 161890 DEBUG oslo_service.service [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Oct  1 09:20:14 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.065 161890 DEBUG oslo_service.service [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Oct  1 09:20:14 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.065 161890 DEBUG oslo_service.service [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Oct  1 09:20:14 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.065 161890 DEBUG oslo_service.service [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  1 09:20:14 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.065 161890 DEBUG oslo_service.service [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  1 09:20:14 np0005464214 rsyslogd[1009]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct  1 09:20:14 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.066 161890 DEBUG oslo_service.service [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  1 09:20:14 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.066 161890 DEBUG oslo_service.service [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  1 09:20:14 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.066 161890 DEBUG oslo_service.service [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  1 09:20:14 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.066 161890 DEBUG oslo_service.service [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  1 09:20:14 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.066 161890 DEBUG oslo_service.service [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  1 09:20:14 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.066 161890 DEBUG oslo_service.service [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  1 09:20:14 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.066 161890 DEBUG oslo_service.service [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  1 09:20:14 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.066 161890 DEBUG oslo_service.service [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  1 09:20:14 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.066 161890 DEBUG oslo_service.service [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  1 09:20:14 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.067 161890 DEBUG oslo_service.service [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  1 09:20:14 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.067 161890 DEBUG oslo_service.service [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  1 09:20:14 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.067 161890 DEBUG oslo_service.service [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  1 09:20:14 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.067 161890 DEBUG oslo_service.service [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  1 09:20:14 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.067 161890 DEBUG oslo_service.service [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  1 09:20:14 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.067 161890 DEBUG oslo_service.service [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  1 09:20:14 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.067 161890 DEBUG oslo_service.service [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  1 09:20:14 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.067 161890 DEBUG oslo_service.service [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  1 09:20:14 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.068 161890 DEBUG oslo_service.service [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  1 09:20:14 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.068 161890 DEBUG oslo_service.service [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  1 09:20:14 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.068 161890 DEBUG oslo_service.service [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  1 09:20:14 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.068 161890 DEBUG oslo_service.service [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  1 09:20:14 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.068 161890 DEBUG oslo_service.service [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  1 09:20:14 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.068 161890 DEBUG oslo_service.service [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  1 09:20:14 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.068 161890 DEBUG oslo_service.service [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  1 09:20:14 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.068 161890 DEBUG oslo_service.service [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  1 09:20:14 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.068 161890 DEBUG oslo_service.service [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  1 09:20:14 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.069 161890 DEBUG oslo_service.service [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  1 09:20:14 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.069 161890 DEBUG oslo_service.service [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  1 09:20:14 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.069 161890 DEBUG oslo_service.service [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  1 09:20:14 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.069 161890 DEBUG oslo_service.service [-] host                           = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  1 09:20:14 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.069 161890 DEBUG oslo_service.service [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  1 09:20:14 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.069 161890 DEBUG oslo_service.service [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  1 09:20:14 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.069 161890 DEBUG oslo_service.service [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  1 09:20:14 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.069 161890 DEBUG oslo_service.service [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  1 09:20:14 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.070 161890 DEBUG oslo_service.service [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  1 09:20:14 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.070 161890 DEBUG oslo_service.service [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  1 09:20:14 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.070 161890 DEBUG oslo_service.service [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  1 09:20:14 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.070 161890 DEBUG oslo_service.service [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  1 09:20:14 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.070 161890 DEBUG oslo_service.service [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  1 09:20:14 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.070 161890 DEBUG oslo_service.service [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  1 09:20:14 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.070 161890 DEBUG oslo_service.service [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  1 09:20:14 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.070 161890 DEBUG oslo_service.service [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  1 09:20:14 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.070 161890 DEBUG oslo_service.service [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  1 09:20:14 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.070 161890 DEBUG oslo_service.service [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  1 09:20:14 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.070 161890 DEBUG oslo_service.service [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  1 09:20:14 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.071 161890 DEBUG oslo_service.service [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  1 09:20:14 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.071 161890 DEBUG oslo_service.service [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  1 09:20:14 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.071 161890 DEBUG oslo_service.service [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  1 09:20:14 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.071 161890 DEBUG oslo_service.service [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  1 09:20:14 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.071 161890 DEBUG oslo_service.service [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  1 09:20:14 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.071 161890 DEBUG oslo_service.service [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  1 09:20:14 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.071 161890 DEBUG oslo_service.service [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  1 09:20:14 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.071 161890 DEBUG oslo_service.service [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  1 09:20:14 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.071 161890 DEBUG oslo_service.service [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  1 09:20:14 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.071 161890 DEBUG oslo_service.service [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  1 09:20:14 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.072 161890 DEBUG oslo_service.service [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  1 09:20:14 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.072 161890 DEBUG oslo_service.service [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  1 09:20:14 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.072 161890 DEBUG oslo_service.service [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  1 09:20:14 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.072 161890 DEBUG oslo_service.service [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  1 09:20:14 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.072 161890 DEBUG oslo_service.service [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  1 09:20:14 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.072 161890 DEBUG oslo_service.service [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  1 09:20:14 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.072 161890 DEBUG oslo_service.service [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  1 09:20:14 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.072 161890 DEBUG oslo_service.service [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  1 09:20:14 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.072 161890 DEBUG oslo_service.service [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  1 09:20:14 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.072 161890 DEBUG oslo_service.service [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  1 09:20:14 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.073 161890 DEBUG oslo_service.service [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  1 09:20:14 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.073 161890 DEBUG oslo_service.service [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  1 09:20:14 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.073 161890 DEBUG oslo_service.service [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 09:20:14 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.073 161890 DEBUG oslo_service.service [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 09:20:14 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.073 161890 DEBUG oslo_service.service [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 09:20:14 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.073 161890 DEBUG oslo_service.service [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 09:20:14 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.073 161890 DEBUG oslo_service.service [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 09:20:14 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.073 161890 DEBUG oslo_service.service [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 09:20:14 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.074 161890 DEBUG oslo_service.service [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 09:20:14 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.074 161890 DEBUG oslo_service.service [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 09:20:14 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.074 161890 DEBUG oslo_service.service [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 09:20:14 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.074 161890 DEBUG oslo_service.service [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 09:20:14 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.074 161890 DEBUG oslo_service.service [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 09:20:14 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.074 161890 DEBUG oslo_service.service [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 09:20:14 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.074 161890 DEBUG oslo_service.service [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 09:20:14 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.074 161890 DEBUG oslo_service.service [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 09:20:14 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.074 161890 DEBUG oslo_service.service [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 09:20:14 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.074 161890 DEBUG oslo_service.service [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 09:20:14 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.075 161890 DEBUG oslo_service.service [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 09:20:14 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.075 161890 DEBUG oslo_service.service [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 09:20:14 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.075 161890 DEBUG oslo_service.service [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 09:20:14 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.075 161890 DEBUG oslo_service.service [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 09:20:14 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.075 161890 DEBUG oslo_service.service [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 09:20:14 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.075 161890 DEBUG oslo_service.service [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 09:20:14 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.075 161890 DEBUG oslo_service.service [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 09:20:14 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.075 161890 DEBUG oslo_service.service [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 09:20:14 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.075 161890 DEBUG oslo_service.service [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 09:20:14 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.075 161890 DEBUG oslo_service.service [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 09:20:14 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.075 161890 DEBUG oslo_service.service [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 09:20:14 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.076 161890 DEBUG oslo_service.service [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 09:20:14 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.076 161890 DEBUG oslo_service.service [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 09:20:14 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.076 161890 DEBUG oslo_service.service [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 09:20:14 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.076 161890 DEBUG oslo_service.service [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 09:20:14 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.076 161890 DEBUG oslo_service.service [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 09:20:14 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.076 161890 DEBUG oslo_service.service [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 09:20:14 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.076 161890 DEBUG oslo_service.service [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:14 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.076 161890 DEBUG oslo_service.service [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:14 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.076 161890 DEBUG oslo_service.service [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:14 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.077 161890 DEBUG oslo_service.service [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:14 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.077 161890 DEBUG oslo_service.service [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:14 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.077 161890 DEBUG oslo_service.service [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:14 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.077 161890 DEBUG oslo_service.service [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:14 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.077 161890 DEBUG oslo_service.service [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:14 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.077 161890 DEBUG oslo_service.service [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:14 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.077 161890 DEBUG oslo_service.service [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:14 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.077 161890 DEBUG oslo_service.service [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:14 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.077 161890 DEBUG oslo_service.service [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:14 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.078 161890 DEBUG oslo_service.service [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:14 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.078 161890 DEBUG oslo_service.service [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:14 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.078 161890 DEBUG oslo_service.service [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:14 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.078 161890 DEBUG oslo_service.service [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:14 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.078 161890 DEBUG oslo_service.service [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:14 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.078 161890 DEBUG oslo_service.service [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:14 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.078 161890 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:14 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.078 161890 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:14 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.078 161890 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:14 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.079 161890 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:14 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.079 161890 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:14 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.079 161890 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:14 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.079 161890 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:14 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.079 161890 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:14 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.079 161890 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:14 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.079 161890 DEBUG oslo_service.service [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:14 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.079 161890 DEBUG oslo_service.service [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:14 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.079 161890 DEBUG oslo_service.service [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:14 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.080 161890 DEBUG oslo_service.service [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:14 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.080 161890 DEBUG oslo_service.service [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:14 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.080 161890 DEBUG oslo_service.service [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:14 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.080 161890 DEBUG oslo_service.service [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:14 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.080 161890 DEBUG oslo_service.service [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:14 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.080 161890 DEBUG oslo_service.service [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:14 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.080 161890 DEBUG oslo_service.service [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:14 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.080 161890 DEBUG oslo_service.service [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:14 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.080 161890 DEBUG oslo_service.service [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:14 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.080 161890 DEBUG oslo_service.service [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:14 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.080 161890 DEBUG oslo_service.service [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:14 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.081 161890 DEBUG oslo_service.service [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:14 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.081 161890 DEBUG oslo_service.service [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:14 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.081 161890 DEBUG oslo_service.service [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:14 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.081 161890 DEBUG oslo_service.service [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:14 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.081 161890 DEBUG oslo_service.service [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:14 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.081 161890 DEBUG oslo_service.service [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:14 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.081 161890 DEBUG oslo_service.service [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:14 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.081 161890 DEBUG oslo_service.service [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:14 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.081 161890 DEBUG oslo_service.service [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:14 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.081 161890 DEBUG oslo_service.service [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:14 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.082 161890 DEBUG oslo_service.service [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:14 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.082 161890 DEBUG oslo_service.service [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:14 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.082 161890 DEBUG oslo_service.service [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:14 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.082 161890 DEBUG oslo_service.service [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:14 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.082 161890 DEBUG oslo_service.service [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:14 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.082 161890 DEBUG oslo_service.service [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:14 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.082 161890 DEBUG oslo_service.service [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:14 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.082 161890 DEBUG oslo_service.service [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:14 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.082 161890 DEBUG oslo_service.service [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:14 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.082 161890 DEBUG oslo_service.service [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:14 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.082 161890 DEBUG oslo_service.service [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:14 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.083 161890 DEBUG oslo_service.service [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:14 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.083 161890 DEBUG oslo_service.service [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:14 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.083 161890 DEBUG oslo_service.service [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:14 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.083 161890 DEBUG oslo_service.service [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:14 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.083 161890 DEBUG oslo_service.service [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:14 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.083 161890 DEBUG oslo_service.service [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:14 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.083 161890 DEBUG oslo_service.service [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:14 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.083 161890 DEBUG oslo_service.service [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:14 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.083 161890 DEBUG oslo_service.service [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:14 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.084 161890 DEBUG oslo_service.service [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:14 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.084 161890 DEBUG oslo_service.service [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:14 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.084 161890 DEBUG oslo_service.service [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:14 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.084 161890 DEBUG oslo_service.service [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:14 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.084 161890 DEBUG oslo_service.service [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:14 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.084 161890 DEBUG oslo_service.service [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:14 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.084 161890 DEBUG oslo_service.service [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:14 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.084 161890 DEBUG oslo_service.service [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:14 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.085 161890 DEBUG oslo_service.service [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:14 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.085 161890 DEBUG oslo_service.service [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:14 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.085 161890 DEBUG oslo_service.service [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:14 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.085 161890 DEBUG oslo_service.service [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:14 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.085 161890 DEBUG oslo_service.service [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:14 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.085 161890 DEBUG oslo_service.service [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:14 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.085 161890 DEBUG oslo_service.service [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:14 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.085 161890 DEBUG oslo_service.service [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:14 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.085 161890 DEBUG oslo_service.service [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:14 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.085 161890 DEBUG oslo_service.service [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:14 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.086 161890 DEBUG oslo_service.service [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:14 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.086 161890 DEBUG oslo_service.service [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:14 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.086 161890 DEBUG oslo_service.service [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:14 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.086 161890 DEBUG oslo_service.service [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:14 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.086 161890 DEBUG oslo_service.service [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:14 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.086 161890 DEBUG oslo_service.service [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:14 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.086 161890 DEBUG oslo_service.service [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:14 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.086 161890 DEBUG oslo_service.service [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:14 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.086 161890 DEBUG oslo_service.service [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:14 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.086 161890 DEBUG oslo_service.service [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:14 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.086 161890 DEBUG oslo_service.service [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:14 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.087 161890 DEBUG oslo_service.service [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:14 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.087 161890 DEBUG oslo_service.service [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:14 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.087 161890 DEBUG oslo_service.service [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:14 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.087 161890 DEBUG oslo_service.service [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:14 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.087 161890 DEBUG oslo_service.service [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:14 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.088 161890 DEBUG oslo_service.service [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:14 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.088 161890 DEBUG oslo_service.service [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:14 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.088 161890 DEBUG oslo_service.service [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:14 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.088 161890 DEBUG oslo_service.service [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:14 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.088 161890 DEBUG oslo_service.service [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:14 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.088 161890 DEBUG oslo_service.service [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:14 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.088 161890 DEBUG oslo_service.service [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:14 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.088 161890 DEBUG oslo_service.service [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:14 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.088 161890 DEBUG oslo_service.service [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:14 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.089 161890 DEBUG oslo_service.service [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:14 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.089 161890 DEBUG oslo_service.service [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:14 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.089 161890 DEBUG oslo_service.service [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:14 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.089 161890 DEBUG oslo_service.service [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:14 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.089 161890 DEBUG oslo_service.service [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:14 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.089 161890 DEBUG oslo_service.service [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:14 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.089 161890 DEBUG oslo_service.service [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:14 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.089 161890 DEBUG oslo_service.service [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:14 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.089 161890 DEBUG oslo_service.service [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:14 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.089 161890 DEBUG oslo_service.service [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:14 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.090 161890 DEBUG oslo_service.service [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:14 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.090 161890 DEBUG oslo_service.service [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:14 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.090 161890 DEBUG oslo_service.service [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:14 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.090 161890 DEBUG oslo_service.service [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:14 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.090 161890 DEBUG oslo_service.service [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:14 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.090 161890 DEBUG oslo_service.service [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:14 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.090 161890 DEBUG oslo_service.service [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:14 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.090 161890 DEBUG oslo_service.service [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:14 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.091 161890 DEBUG oslo_service.service [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:14 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.091 161890 DEBUG oslo_service.service [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:14 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.091 161890 DEBUG oslo_service.service [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:14 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.091 161890 DEBUG oslo_service.service [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:14 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.091 161890 DEBUG oslo_service.service [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:14 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.091 161890 DEBUG oslo_service.service [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:14 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.091 161890 DEBUG oslo_service.service [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:14 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.091 161890 DEBUG oslo_service.service [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:14 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.091 161890 DEBUG oslo_service.service [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:14 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.092 161890 DEBUG oslo_service.service [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:14 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.092 161890 DEBUG oslo_service.service [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:14 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.092 161890 DEBUG oslo_service.service [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:14 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.092 161890 DEBUG oslo_service.service [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:14 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.092 161890 DEBUG oslo_service.service [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:14 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.092 161890 DEBUG oslo_service.service [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:14 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.092 161890 DEBUG oslo_service.service [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:14 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.092 161890 DEBUG oslo_service.service [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:14 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.092 161890 DEBUG oslo_service.service [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:14 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.092 161890 DEBUG oslo_service.service [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:14 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.093 161890 DEBUG oslo_service.service [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:14 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.093 161890 DEBUG oslo_service.service [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:14 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.093 161890 DEBUG oslo_service.service [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:14 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.093 161890 DEBUG oslo_service.service [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:14 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.093 161890 DEBUG oslo_service.service [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:14 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.093 161890 DEBUG oslo_service.service [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:14 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.093 161890 DEBUG oslo_service.service [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:14 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.093 161890 DEBUG oslo_service.service [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:14 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.093 161890 DEBUG oslo_service.service [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:14 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.094 161890 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:14 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.094 161890 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:14 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.094 161890 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:14 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.094 161890 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:14 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.094 161890 DEBUG oslo_service.service [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:14 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.094 161890 DEBUG oslo_service.service [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:14 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.094 161890 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:14 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.094 161890 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:14 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.094 161890 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:14 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.094 161890 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:14 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.095 161890 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:14 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.095 161890 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:14 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.095 161890 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:14 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.095 161890 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:14 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.095 161890 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:14 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.095 161890 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:14 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.095 161890 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:14 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.095 161890 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:14 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.095 161890 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:14 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.096 161890 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:14 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.096 161890 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:14 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.096 161890 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:14 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.096 161890 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:14 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.096 161890 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:14 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.096 161890 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:14 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.096 161890 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:14 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.096 161890 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:14 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.096 161890 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:14 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.096 161890 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:14 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.097 161890 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:14 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.097 161890 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:14 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.097 161890 DEBUG oslo_service.service [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:14 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.097 161890 DEBUG oslo_service.service [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:14 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.097 161890 DEBUG oslo_service.service [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:14 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.097 161890 DEBUG oslo_service.service [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:20:14 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.097 161890 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m
Oct  1 09:20:14 np0005464214 systemd[1]: var-lib-containers-storage-overlay-f0527201647add611fc4b9dac242220a5b233c82e6bfca4cf6c42a17e8c8a7bf-merged.mount: Deactivated successfully.
Oct  1 09:20:14 np0005464214 podman[162403]: 2025-10-01 13:20:14.116901019 +0000 UTC m=+0.222021783 container remove 7d600eeaf2430ed83c14c1198f8bcafd368e05eaf42ac771d367773815ca5a21 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_swirles, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 09:20:14 np0005464214 systemd[1]: libpod-conmon-7d600eeaf2430ed83c14c1198f8bcafd368e05eaf42ac771d367773815ca5a21.scope: Deactivated successfully.
Oct  1 09:20:14 np0005464214 podman[162442]: 2025-10-01 13:20:14.327108318 +0000 UTC m=+0.066425261 container create 9c946d86c2decfa16d33cdf62e3fb1351877d233ab2e80b86a7f2e832dbc61fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_leavitt, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 09:20:14 np0005464214 systemd[1]: Started libpod-conmon-9c946d86c2decfa16d33cdf62e3fb1351877d233ab2e80b86a7f2e832dbc61fd.scope.
Oct  1 09:20:14 np0005464214 podman[162442]: 2025-10-01 13:20:14.300785526 +0000 UTC m=+0.040102489 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 09:20:14 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:20:14 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b9158271b47321de54cc35531b0214fe4cd2c1b7941f7eb8c79e8e1628f8c44/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 09:20:14 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b9158271b47321de54cc35531b0214fe4cd2c1b7941f7eb8c79e8e1628f8c44/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 09:20:14 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b9158271b47321de54cc35531b0214fe4cd2c1b7941f7eb8c79e8e1628f8c44/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 09:20:14 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b9158271b47321de54cc35531b0214fe4cd2c1b7941f7eb8c79e8e1628f8c44/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 09:20:14 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b9158271b47321de54cc35531b0214fe4cd2c1b7941f7eb8c79e8e1628f8c44/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  1 09:20:14 np0005464214 podman[162442]: 2025-10-01 13:20:14.442236602 +0000 UTC m=+0.181553625 container init 9c946d86c2decfa16d33cdf62e3fb1351877d233ab2e80b86a7f2e832dbc61fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_leavitt, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 09:20:14 np0005464214 podman[162442]: 2025-10-01 13:20:14.458861185 +0000 UTC m=+0.198178098 container start 9c946d86c2decfa16d33cdf62e3fb1351877d233ab2e80b86a7f2e832dbc61fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_leavitt, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507)
Oct  1 09:20:14 np0005464214 podman[162442]: 2025-10-01 13:20:14.462790287 +0000 UTC m=+0.202107250 container attach 9c946d86c2decfa16d33cdf62e3fb1351877d233ab2e80b86a7f2e832dbc61fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_leavitt, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct  1 09:20:15 np0005464214 nifty_leavitt[162458]: --> passed data devices: 0 physical, 3 LVM
Oct  1 09:20:15 np0005464214 nifty_leavitt[162458]: --> relative data size: 1.0
Oct  1 09:20:15 np0005464214 nifty_leavitt[162458]: --> All data devices are unavailable
Oct  1 09:20:15 np0005464214 systemd[1]: libpod-9c946d86c2decfa16d33cdf62e3fb1351877d233ab2e80b86a7f2e832dbc61fd.scope: Deactivated successfully.
Oct  1 09:20:15 np0005464214 systemd[1]: libpod-9c946d86c2decfa16d33cdf62e3fb1351877d233ab2e80b86a7f2e832dbc61fd.scope: Consumed 1.103s CPU time.
Oct  1 09:20:15 np0005464214 podman[162442]: 2025-10-01 13:20:15.615045333 +0000 UTC m=+1.354362286 container died 9c946d86c2decfa16d33cdf62e3fb1351877d233ab2e80b86a7f2e832dbc61fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_leavitt, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct  1 09:20:15 np0005464214 systemd-logind[818]: New session 49 of user zuul.
Oct  1 09:20:15 np0005464214 systemd[1]: Started Session 49 of User zuul.
Oct  1 09:20:15 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:20:15 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v469: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:20:16 np0005464214 systemd[1]: var-lib-containers-storage-overlay-4b9158271b47321de54cc35531b0214fe4cd2c1b7941f7eb8c79e8e1628f8c44-merged.mount: Deactivated successfully.
Oct  1 09:20:16 np0005464214 podman[162442]: 2025-10-01 13:20:16.5651563 +0000 UTC m=+2.304473213 container remove 9c946d86c2decfa16d33cdf62e3fb1351877d233ab2e80b86a7f2e832dbc61fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_leavitt, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct  1 09:20:16 np0005464214 systemd[1]: libpod-conmon-9c946d86c2decfa16d33cdf62e3fb1351877d233ab2e80b86a7f2e832dbc61fd.scope: Deactivated successfully.
Oct  1 09:20:16 np0005464214 python3.9[162652]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  1 09:20:17 np0005464214 podman[162809]: 2025-10-01 13:20:17.122214415 +0000 UTC m=+0.021122623 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 09:20:17 np0005464214 podman[162809]: 2025-10-01 13:20:17.302703646 +0000 UTC m=+0.201611834 container create 2969e221d0b4507af23adddd07ce92e0523a769c95add8cad078eef0e73a0658 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_goodall, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default)
Oct  1 09:20:17 np0005464214 systemd[1]: Started libpod-conmon-2969e221d0b4507af23adddd07ce92e0523a769c95add8cad078eef0e73a0658.scope.
Oct  1 09:20:17 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:20:17 np0005464214 podman[162809]: 2025-10-01 13:20:17.394598734 +0000 UTC m=+0.293506952 container init 2969e221d0b4507af23adddd07ce92e0523a769c95add8cad078eef0e73a0658 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_goodall, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct  1 09:20:17 np0005464214 podman[162809]: 2025-10-01 13:20:17.405145003 +0000 UTC m=+0.304053191 container start 2969e221d0b4507af23adddd07ce92e0523a769c95add8cad078eef0e73a0658 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_goodall, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct  1 09:20:17 np0005464214 heuristic_goodall[162889]: 167 167
Oct  1 09:20:17 np0005464214 systemd[1]: libpod-2969e221d0b4507af23adddd07ce92e0523a769c95add8cad078eef0e73a0658.scope: Deactivated successfully.
Oct  1 09:20:17 np0005464214 conmon[162889]: conmon 2969e221d0b4507af23a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-2969e221d0b4507af23adddd07ce92e0523a769c95add8cad078eef0e73a0658.scope/container/memory.events
Oct  1 09:20:17 np0005464214 podman[162809]: 2025-10-01 13:20:17.527618903 +0000 UTC m=+0.426527111 container attach 2969e221d0b4507af23adddd07ce92e0523a769c95add8cad078eef0e73a0658 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_goodall, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 09:20:17 np0005464214 podman[162809]: 2025-10-01 13:20:17.529508821 +0000 UTC m=+0.428417059 container died 2969e221d0b4507af23adddd07ce92e0523a769c95add8cad078eef0e73a0658 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_goodall, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Oct  1 09:20:17 np0005464214 systemd[1]: var-lib-containers-storage-overlay-9f14d0553f23a9f47728ad0cc47b9da734a8c3f57be8a80cf84adafc4bbbc4c1-merged.mount: Deactivated successfully.
Oct  1 09:20:17 np0005464214 podman[162809]: 2025-10-01 13:20:17.658895378 +0000 UTC m=+0.557803606 container remove 2969e221d0b4507af23adddd07ce92e0523a769c95add8cad078eef0e73a0658 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_goodall, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 09:20:17 np0005464214 systemd[1]: libpod-conmon-2969e221d0b4507af23adddd07ce92e0523a769c95add8cad078eef0e73a0658.scope: Deactivated successfully.
Oct  1 09:20:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 09:20:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 09:20:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 09:20:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 09:20:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 09:20:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 09:20:17 np0005464214 podman[162989]: 2025-10-01 13:20:17.853906897 +0000 UTC m=+0.048942262 container create 3baa4eea6fc5fc372dbdfc0a7ba3e8383c0fdf3a303d01988174173d5cb669c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_jackson, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 09:20:17 np0005464214 systemd[1]: Started libpod-conmon-3baa4eea6fc5fc372dbdfc0a7ba3e8383c0fdf3a303d01988174173d5cb669c8.scope.
Oct  1 09:20:17 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:20:17 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b191f2f60b58e3ba78bd5891684c81d437d8f409121ba7d5a0e5b9d1bb80146/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 09:20:17 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b191f2f60b58e3ba78bd5891684c81d437d8f409121ba7d5a0e5b9d1bb80146/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 09:20:17 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b191f2f60b58e3ba78bd5891684c81d437d8f409121ba7d5a0e5b9d1bb80146/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 09:20:17 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b191f2f60b58e3ba78bd5891684c81d437d8f409121ba7d5a0e5b9d1bb80146/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 09:20:17 np0005464214 podman[162989]: 2025-10-01 13:20:17.833129087 +0000 UTC m=+0.028164502 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 09:20:17 np0005464214 podman[162989]: 2025-10-01 13:20:17.940517855 +0000 UTC m=+0.135553230 container init 3baa4eea6fc5fc372dbdfc0a7ba3e8383c0fdf3a303d01988174173d5cb669c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_jackson, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 09:20:17 np0005464214 python3.9[162983]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --filter name=^nova_virtlogd$ --format \{\{.Names\}\} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  1 09:20:17 np0005464214 podman[162989]: 2025-10-01 13:20:17.948439323 +0000 UTC m=+0.143474688 container start 3baa4eea6fc5fc372dbdfc0a7ba3e8383c0fdf3a303d01988174173d5cb669c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_jackson, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct  1 09:20:17 np0005464214 podman[162989]: 2025-10-01 13:20:17.955840534 +0000 UTC m=+0.150875919 container attach 3baa4eea6fc5fc372dbdfc0a7ba3e8383c0fdf3a303d01988174173d5cb669c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_jackson, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 09:20:17 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v470: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:20:18 np0005464214 intelligent_jackson[163006]: {
Oct  1 09:20:18 np0005464214 intelligent_jackson[163006]:    "0": [
Oct  1 09:20:18 np0005464214 intelligent_jackson[163006]:        {
Oct  1 09:20:18 np0005464214 intelligent_jackson[163006]:            "devices": [
Oct  1 09:20:18 np0005464214 intelligent_jackson[163006]:                "/dev/loop3"
Oct  1 09:20:18 np0005464214 intelligent_jackson[163006]:            ],
Oct  1 09:20:18 np0005464214 intelligent_jackson[163006]:            "lv_name": "ceph_lv0",
Oct  1 09:20:18 np0005464214 intelligent_jackson[163006]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  1 09:20:18 np0005464214 intelligent_jackson[163006]:            "lv_size": "21470642176",
Oct  1 09:20:18 np0005464214 intelligent_jackson[163006]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 09:20:18 np0005464214 intelligent_jackson[163006]:            "lv_uuid": "wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS",
Oct  1 09:20:18 np0005464214 intelligent_jackson[163006]:            "name": "ceph_lv0",
Oct  1 09:20:18 np0005464214 intelligent_jackson[163006]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  1 09:20:18 np0005464214 intelligent_jackson[163006]:            "tags": {
Oct  1 09:20:18 np0005464214 intelligent_jackson[163006]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  1 09:20:18 np0005464214 intelligent_jackson[163006]:                "ceph.block_uuid": "wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS",
Oct  1 09:20:18 np0005464214 intelligent_jackson[163006]:                "ceph.cephx_lockbox_secret": "",
Oct  1 09:20:18 np0005464214 intelligent_jackson[163006]:                "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 09:20:18 np0005464214 intelligent_jackson[163006]:                "ceph.cluster_name": "ceph",
Oct  1 09:20:18 np0005464214 intelligent_jackson[163006]:                "ceph.crush_device_class": "",
Oct  1 09:20:18 np0005464214 intelligent_jackson[163006]:                "ceph.encrypted": "0",
Oct  1 09:20:18 np0005464214 intelligent_jackson[163006]:                "ceph.osd_fsid": "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982",
Oct  1 09:20:18 np0005464214 intelligent_jackson[163006]:                "ceph.osd_id": "0",
Oct  1 09:20:18 np0005464214 intelligent_jackson[163006]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 09:20:18 np0005464214 intelligent_jackson[163006]:                "ceph.type": "block",
Oct  1 09:20:18 np0005464214 intelligent_jackson[163006]:                "ceph.vdo": "0"
Oct  1 09:20:18 np0005464214 intelligent_jackson[163006]:            },
Oct  1 09:20:18 np0005464214 intelligent_jackson[163006]:            "type": "block",
Oct  1 09:20:18 np0005464214 intelligent_jackson[163006]:            "vg_name": "ceph_vg0"
Oct  1 09:20:18 np0005464214 intelligent_jackson[163006]:        }
Oct  1 09:20:18 np0005464214 intelligent_jackson[163006]:    ],
Oct  1 09:20:18 np0005464214 intelligent_jackson[163006]:    "1": [
Oct  1 09:20:18 np0005464214 intelligent_jackson[163006]:        {
Oct  1 09:20:18 np0005464214 intelligent_jackson[163006]:            "devices": [
Oct  1 09:20:18 np0005464214 intelligent_jackson[163006]:                "/dev/loop4"
Oct  1 09:20:18 np0005464214 intelligent_jackson[163006]:            ],
Oct  1 09:20:18 np0005464214 intelligent_jackson[163006]:            "lv_name": "ceph_lv1",
Oct  1 09:20:18 np0005464214 intelligent_jackson[163006]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  1 09:20:18 np0005464214 intelligent_jackson[163006]:            "lv_size": "21470642176",
Oct  1 09:20:18 np0005464214 intelligent_jackson[163006]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f5852bc7-e830-489a-b8a9-42dfbbe71426,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 09:20:18 np0005464214 intelligent_jackson[163006]:            "lv_uuid": "BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY",
Oct  1 09:20:18 np0005464214 intelligent_jackson[163006]:            "name": "ceph_lv1",
Oct  1 09:20:18 np0005464214 intelligent_jackson[163006]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  1 09:20:18 np0005464214 intelligent_jackson[163006]:            "tags": {
Oct  1 09:20:18 np0005464214 intelligent_jackson[163006]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  1 09:20:18 np0005464214 intelligent_jackson[163006]:                "ceph.block_uuid": "BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY",
Oct  1 09:20:18 np0005464214 intelligent_jackson[163006]:                "ceph.cephx_lockbox_secret": "",
Oct  1 09:20:18 np0005464214 intelligent_jackson[163006]:                "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 09:20:18 np0005464214 intelligent_jackson[163006]:                "ceph.cluster_name": "ceph",
Oct  1 09:20:18 np0005464214 intelligent_jackson[163006]:                "ceph.crush_device_class": "",
Oct  1 09:20:18 np0005464214 intelligent_jackson[163006]:                "ceph.encrypted": "0",
Oct  1 09:20:18 np0005464214 intelligent_jackson[163006]:                "ceph.osd_fsid": "f5852bc7-e830-489a-b8a9-42dfbbe71426",
Oct  1 09:20:18 np0005464214 intelligent_jackson[163006]:                "ceph.osd_id": "1",
Oct  1 09:20:18 np0005464214 intelligent_jackson[163006]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 09:20:18 np0005464214 intelligent_jackson[163006]:                "ceph.type": "block",
Oct  1 09:20:18 np0005464214 intelligent_jackson[163006]:                "ceph.vdo": "0"
Oct  1 09:20:18 np0005464214 intelligent_jackson[163006]:            },
Oct  1 09:20:18 np0005464214 intelligent_jackson[163006]:            "type": "block",
Oct  1 09:20:18 np0005464214 intelligent_jackson[163006]:            "vg_name": "ceph_vg1"
Oct  1 09:20:18 np0005464214 intelligent_jackson[163006]:        }
Oct  1 09:20:18 np0005464214 intelligent_jackson[163006]:    ],
Oct  1 09:20:18 np0005464214 intelligent_jackson[163006]:    "2": [
Oct  1 09:20:18 np0005464214 intelligent_jackson[163006]:        {
Oct  1 09:20:18 np0005464214 intelligent_jackson[163006]:            "devices": [
Oct  1 09:20:18 np0005464214 intelligent_jackson[163006]:                "/dev/loop5"
Oct  1 09:20:18 np0005464214 intelligent_jackson[163006]:            ],
Oct  1 09:20:18 np0005464214 intelligent_jackson[163006]:            "lv_name": "ceph_lv2",
Oct  1 09:20:18 np0005464214 intelligent_jackson[163006]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  1 09:20:18 np0005464214 intelligent_jackson[163006]:            "lv_size": "21470642176",
Oct  1 09:20:18 np0005464214 intelligent_jackson[163006]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c4c937e2-a8a8-47c3-af37-fdedb6fff1f9,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 09:20:18 np0005464214 intelligent_jackson[163006]:            "lv_uuid": "mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK",
Oct  1 09:20:18 np0005464214 intelligent_jackson[163006]:            "name": "ceph_lv2",
Oct  1 09:20:18 np0005464214 intelligent_jackson[163006]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  1 09:20:18 np0005464214 intelligent_jackson[163006]:            "tags": {
Oct  1 09:20:18 np0005464214 intelligent_jackson[163006]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  1 09:20:18 np0005464214 intelligent_jackson[163006]:                "ceph.block_uuid": "mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK",
Oct  1 09:20:18 np0005464214 intelligent_jackson[163006]:                "ceph.cephx_lockbox_secret": "",
Oct  1 09:20:18 np0005464214 intelligent_jackson[163006]:                "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 09:20:18 np0005464214 intelligent_jackson[163006]:                "ceph.cluster_name": "ceph",
Oct  1 09:20:18 np0005464214 intelligent_jackson[163006]:                "ceph.crush_device_class": "",
Oct  1 09:20:18 np0005464214 intelligent_jackson[163006]:                "ceph.encrypted": "0",
Oct  1 09:20:18 np0005464214 intelligent_jackson[163006]:                "ceph.osd_fsid": "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9",
Oct  1 09:20:18 np0005464214 intelligent_jackson[163006]:                "ceph.osd_id": "2",
Oct  1 09:20:18 np0005464214 intelligent_jackson[163006]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 09:20:18 np0005464214 intelligent_jackson[163006]:                "ceph.type": "block",
Oct  1 09:20:18 np0005464214 intelligent_jackson[163006]:                "ceph.vdo": "0"
Oct  1 09:20:18 np0005464214 intelligent_jackson[163006]:            },
Oct  1 09:20:18 np0005464214 intelligent_jackson[163006]:            "type": "block",
Oct  1 09:20:18 np0005464214 intelligent_jackson[163006]:            "vg_name": "ceph_vg2"
Oct  1 09:20:18 np0005464214 intelligent_jackson[163006]:        }
Oct  1 09:20:18 np0005464214 intelligent_jackson[163006]:    ]
Oct  1 09:20:18 np0005464214 intelligent_jackson[163006]: }
Oct  1 09:20:18 np0005464214 systemd[1]: libpod-3baa4eea6fc5fc372dbdfc0a7ba3e8383c0fdf3a303d01988174173d5cb669c8.scope: Deactivated successfully.
Oct  1 09:20:18 np0005464214 conmon[163006]: conmon 3baa4eea6fc5fc372dbd <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-3baa4eea6fc5fc372dbdfc0a7ba3e8383c0fdf3a303d01988174173d5cb669c8.scope/container/memory.events
Oct  1 09:20:18 np0005464214 podman[162989]: 2025-10-01 13:20:18.679794803 +0000 UTC m=+0.874830198 container died 3baa4eea6fc5fc372dbdfc0a7ba3e8383c0fdf3a303d01988174173d5cb669c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_jackson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct  1 09:20:18 np0005464214 systemd[1]: var-lib-containers-storage-overlay-8b191f2f60b58e3ba78bd5891684c81d437d8f409121ba7d5a0e5b9d1bb80146-merged.mount: Deactivated successfully.
Oct  1 09:20:18 np0005464214 podman[162989]: 2025-10-01 13:20:18.734235455 +0000 UTC m=+0.929270820 container remove 3baa4eea6fc5fc372dbdfc0a7ba3e8383c0fdf3a303d01988174173d5cb669c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_jackson, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 09:20:18 np0005464214 systemd[1]: libpod-conmon-3baa4eea6fc5fc372dbdfc0a7ba3e8383c0fdf3a303d01988174173d5cb669c8.scope: Deactivated successfully.
Oct  1 09:20:19 np0005464214 python3.9[163215]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct  1 09:20:19 np0005464214 systemd[1]: Reloading.
Oct  1 09:20:19 np0005464214 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  1 09:20:19 np0005464214 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  1 09:20:19 np0005464214 podman[163363]: 2025-10-01 13:20:19.465210004 +0000 UTC m=+0.055800236 container create 305bd308fc31528c3647e46d04d8bc5930abbe817149ec40359337572ed4a614 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_austin, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct  1 09:20:19 np0005464214 systemd[1]: Started libpod-conmon-305bd308fc31528c3647e46d04d8bc5930abbe817149ec40359337572ed4a614.scope.
Oct  1 09:20:19 np0005464214 podman[163363]: 2025-10-01 13:20:19.433370207 +0000 UTC m=+0.023960529 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 09:20:19 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:20:19 np0005464214 podman[163363]: 2025-10-01 13:20:19.587241129 +0000 UTC m=+0.177831411 container init 305bd308fc31528c3647e46d04d8bc5930abbe817149ec40359337572ed4a614 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_austin, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef)
Oct  1 09:20:19 np0005464214 podman[163363]: 2025-10-01 13:20:19.598941615 +0000 UTC m=+0.189531847 container start 305bd308fc31528c3647e46d04d8bc5930abbe817149ec40359337572ed4a614 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_austin, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 09:20:19 np0005464214 podman[163363]: 2025-10-01 13:20:19.603145497 +0000 UTC m=+0.193735779 container attach 305bd308fc31528c3647e46d04d8bc5930abbe817149ec40359337572ed4a614 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_austin, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 09:20:19 np0005464214 busy_austin[163379]: 167 167
Oct  1 09:20:19 np0005464214 systemd[1]: libpod-305bd308fc31528c3647e46d04d8bc5930abbe817149ec40359337572ed4a614.scope: Deactivated successfully.
Oct  1 09:20:19 np0005464214 podman[163363]: 2025-10-01 13:20:19.607324647 +0000 UTC m=+0.197914879 container died 305bd308fc31528c3647e46d04d8bc5930abbe817149ec40359337572ed4a614 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_austin, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 09:20:19 np0005464214 systemd[1]: var-lib-containers-storage-overlay-b1e2e583f055cdb6ee566e0e441e10ef2ee68fb97e6e49d2c6c44f92f823b073-merged.mount: Deactivated successfully.
Oct  1 09:20:19 np0005464214 podman[163363]: 2025-10-01 13:20:19.642793757 +0000 UTC m=+0.233383989 container remove 305bd308fc31528c3647e46d04d8bc5930abbe817149ec40359337572ed4a614 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_austin, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 09:20:19 np0005464214 systemd[1]: libpod-conmon-305bd308fc31528c3647e46d04d8bc5930abbe817149ec40359337572ed4a614.scope: Deactivated successfully.
Oct  1 09:20:19 np0005464214 podman[163463]: 2025-10-01 13:20:19.83926571 +0000 UTC m=+0.068736490 container create 1b62ea3a5c2c358cb4b3f134d5caba510bdde9f0920a265097cb175dae874b45 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_zhukovsky, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 09:20:19 np0005464214 systemd[1]: Started libpod-conmon-1b62ea3a5c2c358cb4b3f134d5caba510bdde9f0920a265097cb175dae874b45.scope.
Oct  1 09:20:19 np0005464214 podman[163463]: 2025-10-01 13:20:19.813545307 +0000 UTC m=+0.043016157 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 09:20:19 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:20:19 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab250002346a77ae0883ac0ecd9e82f186b08a42d7c44a83fd20bb0b102fc7b2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 09:20:19 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab250002346a77ae0883ac0ecd9e82f186b08a42d7c44a83fd20bb0b102fc7b2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 09:20:19 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab250002346a77ae0883ac0ecd9e82f186b08a42d7c44a83fd20bb0b102fc7b2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 09:20:19 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab250002346a77ae0883ac0ecd9e82f186b08a42d7c44a83fd20bb0b102fc7b2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 09:20:19 np0005464214 podman[163463]: 2025-10-01 13:20:19.953193794 +0000 UTC m=+0.182664604 container init 1b62ea3a5c2c358cb4b3f134d5caba510bdde9f0920a265097cb175dae874b45 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_zhukovsky, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 09:20:19 np0005464214 podman[163463]: 2025-10-01 13:20:19.967722547 +0000 UTC m=+0.197193327 container start 1b62ea3a5c2c358cb4b3f134d5caba510bdde9f0920a265097cb175dae874b45 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_zhukovsky, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Oct  1 09:20:19 np0005464214 podman[163463]: 2025-10-01 13:20:19.971798505 +0000 UTC m=+0.201269305 container attach 1b62ea3a5c2c358cb4b3f134d5caba510bdde9f0920a265097cb175dae874b45 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_zhukovsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Oct  1 09:20:19 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v471: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:20:20 np0005464214 python3.9[163573]: ansible-ansible.builtin.service_facts Invoked
Oct  1 09:20:20 np0005464214 network[163590]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct  1 09:20:20 np0005464214 network[163591]: 'network-scripts' will be removed from distribution in near future.
Oct  1 09:20:20 np0005464214 network[163592]: It is advised to switch to 'NetworkManager' instead for network management.
Oct  1 09:20:20 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:20:21 np0005464214 priceless_zhukovsky[163495]: {
Oct  1 09:20:21 np0005464214 priceless_zhukovsky[163495]:    "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982": {
Oct  1 09:20:21 np0005464214 priceless_zhukovsky[163495]:        "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 09:20:21 np0005464214 priceless_zhukovsky[163495]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  1 09:20:21 np0005464214 priceless_zhukovsky[163495]:        "osd_id": 0,
Oct  1 09:20:21 np0005464214 priceless_zhukovsky[163495]:        "osd_uuid": "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982",
Oct  1 09:20:21 np0005464214 priceless_zhukovsky[163495]:        "type": "bluestore"
Oct  1 09:20:21 np0005464214 priceless_zhukovsky[163495]:    },
Oct  1 09:20:21 np0005464214 priceless_zhukovsky[163495]:    "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9": {
Oct  1 09:20:21 np0005464214 priceless_zhukovsky[163495]:        "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 09:20:21 np0005464214 priceless_zhukovsky[163495]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  1 09:20:21 np0005464214 priceless_zhukovsky[163495]:        "osd_id": 2,
Oct  1 09:20:21 np0005464214 priceless_zhukovsky[163495]:        "osd_uuid": "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9",
Oct  1 09:20:21 np0005464214 priceless_zhukovsky[163495]:        "type": "bluestore"
Oct  1 09:20:21 np0005464214 priceless_zhukovsky[163495]:    },
Oct  1 09:20:21 np0005464214 priceless_zhukovsky[163495]:    "f5852bc7-e830-489a-b8a9-42dfbbe71426": {
Oct  1 09:20:21 np0005464214 priceless_zhukovsky[163495]:        "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 09:20:21 np0005464214 priceless_zhukovsky[163495]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  1 09:20:21 np0005464214 priceless_zhukovsky[163495]:        "osd_id": 1,
Oct  1 09:20:21 np0005464214 priceless_zhukovsky[163495]:        "osd_uuid": "f5852bc7-e830-489a-b8a9-42dfbbe71426",
Oct  1 09:20:21 np0005464214 priceless_zhukovsky[163495]:        "type": "bluestore"
Oct  1 09:20:21 np0005464214 priceless_zhukovsky[163495]:    }
Oct  1 09:20:21 np0005464214 priceless_zhukovsky[163495]: }
Oct  1 09:20:21 np0005464214 podman[163463]: 2025-10-01 13:20:21.069601245 +0000 UTC m=+1.299072025 container died 1b62ea3a5c2c358cb4b3f134d5caba510bdde9f0920a265097cb175dae874b45 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_zhukovsky, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 09:20:21 np0005464214 systemd[1]: libpod-1b62ea3a5c2c358cb4b3f134d5caba510bdde9f0920a265097cb175dae874b45.scope: Deactivated successfully.
Oct  1 09:20:21 np0005464214 systemd[1]: libpod-1b62ea3a5c2c358cb4b3f134d5caba510bdde9f0920a265097cb175dae874b45.scope: Consumed 1.103s CPU time.
Oct  1 09:20:21 np0005464214 systemd[1]: var-lib-containers-storage-overlay-ab250002346a77ae0883ac0ecd9e82f186b08a42d7c44a83fd20bb0b102fc7b2-merged.mount: Deactivated successfully.
Oct  1 09:20:21 np0005464214 podman[163463]: 2025-10-01 13:20:21.258023417 +0000 UTC m=+1.487494197 container remove 1b62ea3a5c2c358cb4b3f134d5caba510bdde9f0920a265097cb175dae874b45 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_zhukovsky, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2)
Oct  1 09:20:21 np0005464214 systemd[1]: libpod-conmon-1b62ea3a5c2c358cb4b3f134d5caba510bdde9f0920a265097cb175dae874b45.scope: Deactivated successfully.
Oct  1 09:20:21 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  1 09:20:21 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:20:21 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  1 09:20:21 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:20:21 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev f5ac27ee-6119-49e7-9f4e-a2f5654476ef does not exist
Oct  1 09:20:21 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev c3e16c15-5bdc-4ca7-ad9f-13afb20689df does not exist
Oct  1 09:20:21 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:20:21 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:20:21 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v472: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:20:23 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v473: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:20:25 np0005464214 python3.9[163949]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_libvirt.target state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  1 09:20:25 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:20:25 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v474: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:20:26 np0005464214 python3.9[164102]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtlogd_wrapper.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  1 09:20:27 np0005464214 python3.9[164255]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtnodedevd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  1 09:20:27 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v475: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:20:28 np0005464214 python3.9[164408]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtproxyd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  1 09:20:28 np0005464214 python3.9[164561]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtqemud.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  1 09:20:29 np0005464214 python3.9[164714]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtsecretd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  1 09:20:29 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v476: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:20:30 np0005464214 python3.9[164867]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtstoraged.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  1 09:20:30 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:20:31 np0005464214 python3.9[165020]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 09:20:31 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v477: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:20:32 np0005464214 python3.9[165172]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 09:20:32 np0005464214 python3.9[165324]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 09:20:33 np0005464214 python3.9[165476]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 09:20:33 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v478: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:20:33 np0005464214 python3.9[165628]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 09:20:34 np0005464214 podman[165752]: 2025-10-01 13:20:34.491184746 +0000 UTC m=+0.106241944 container health_status 583cde33fa111e3f81ff9a7adf51b7bee43cd123d54d60383b2d474b3b5066ad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20250923, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Oct  1 09:20:34 np0005464214 python3.9[165798]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 09:20:35 np0005464214 python3.9[165960]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 09:20:35 np0005464214 python3.9[166112]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 09:20:35 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:20:35 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v479: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:20:36 np0005464214 python3.9[166264]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 09:20:37 np0005464214 python3.9[166416]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 09:20:37 np0005464214 python3.9[166568]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 09:20:37 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v480: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:20:38 np0005464214 python3.9[166720]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 09:20:39 np0005464214 python3.9[166872]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 09:20:39 np0005464214 python3.9[167024]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 09:20:39 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v481: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:20:40 np0005464214 podman[167176]: 2025-10-01 13:20:40.310151351 +0000 UTC m=+0.066797060 container health_status dfa509645741d50db256c392f2c7e50ef8a1653f296eb853aefbe3d2ba4746c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, tcib_build_tag=36bccb96575468ec919301205d8daa2c, managed_by=edpm_ansible, org.label-schema.build-date=20250923, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct  1 09:20:40 np0005464214 python3.9[167177]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then#012  systemctl disable --now certmonger.service#012  test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service#012fi#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  1 09:20:40 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:20:41 np0005464214 python3.9[167350]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Oct  1 09:20:41 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v482: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:20:42 np0005464214 python3.9[167502]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct  1 09:20:42 np0005464214 systemd[1]: Reloading.
Oct  1 09:20:42 np0005464214 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  1 09:20:42 np0005464214 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  1 09:20:42 np0005464214 python3.9[167692]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_libvirt.target _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  1 09:20:43 np0005464214 python3.9[167845]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtlogd_wrapper.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  1 09:20:43 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v483: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:20:44 np0005464214 python3.9[167998]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtnodedevd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  1 09:20:44 np0005464214 python3.9[168153]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtproxyd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  1 09:20:45 np0005464214 python3.9[168306]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtqemud.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  1 09:20:45 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:20:45 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v484: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:20:46 np0005464214 python3.9[168459]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtsecretd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  1 09:20:47 np0005464214 python3.9[168612]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtstoraged.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  1 09:20:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] Optimize plan auto_2025-10-01_13:20:47
Oct  1 09:20:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  1 09:20:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] do_upmap
Oct  1 09:20:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] pools ['cephfs.cephfs.data', 'volumes', 'default.rgw.control', 'default.rgw.meta', 'default.rgw.log', '.rgw.root', 'vms', 'images', 'cephfs.cephfs.meta', 'backups', '.mgr']
Oct  1 09:20:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] prepared 0/10 changes
Oct  1 09:20:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 09:20:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 09:20:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 09:20:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 09:20:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 09:20:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 09:20:47 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  1 09:20:47 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  1 09:20:47 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  1 09:20:47 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  1 09:20:47 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  1 09:20:47 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  1 09:20:47 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  1 09:20:47 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  1 09:20:47 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  1 09:20:47 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  1 09:20:47 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v485: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:20:48 np0005464214 python3.9[168765]: ansible-ansible.builtin.getent Invoked with database=passwd key=libvirt fail_key=True service=None split=None
Oct  1 09:20:48 np0005464214 python3.9[168918]: ansible-ansible.builtin.group Invoked with gid=42473 name=libvirt state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Oct  1 09:20:49 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v486: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:20:50 np0005464214 python3.9[169076]: ansible-ansible.builtin.user Invoked with comment=libvirt user group=libvirt groups=[''] name=libvirt shell=/sbin/nologin state=present uid=42473 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Oct  1 09:20:50 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:20:51 np0005464214 python3.9[169236]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct  1 09:20:51 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v487: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:20:52 np0005464214 python3.9[169320]: ansible-ansible.legacy.dnf Invoked with name=['libvirt ', 'libvirt-admin ', 'libvirt-client ', 'libvirt-daemon ', 'qemu-kvm', 'qemu-img', 'libguestfs', 'libseccomp', 'swtpm', 'swtpm-tools', 'edk2-ovmf', 'ceph-common', 'cyrus-sasl-scram'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct  1 09:20:53 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v488: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:20:55 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:20:55 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v489: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:20:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] _maybe_adjust
Oct  1 09:20:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:20:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  1 09:20:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:20:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 09:20:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:20:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 09:20:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:20:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 09:20:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:20:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 09:20:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:20:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct  1 09:20:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:20:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 09:20:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:20:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  1 09:20:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:20:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  1 09:20:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:20:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 09:20:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:20:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  1 09:20:57 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v490: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:20:59 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v491: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:21:00 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:21:01 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v492: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:21:03 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v493: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:21:05 np0005464214 podman[169456]: 2025-10-01 13:21:05.58516991 +0000 UTC m=+0.137162651 container health_status 583cde33fa111e3f81ff9a7adf51b7bee43cd123d54d60383b2d474b3b5066ad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20250923, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=36bccb96575468ec919301205d8daa2c, org.label-schema.vendor=CentOS)
Oct  1 09:21:05 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:21:05 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v494: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:21:07 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v495: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 6.5 KiB/s rd, 0 B/s wr, 10 op/s
Oct  1 09:21:09 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v496: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 6.5 KiB/s rd, 0 B/s wr, 10 op/s
Oct  1 09:21:10 np0005464214 podman[169533]: 2025-10-01 13:21:10.509450918 +0000 UTC m=+0.063379193 container health_status dfa509645741d50db256c392f2c7e50ef8a1653f296eb853aefbe3d2ba4746c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20250923)
Oct  1 09:21:10 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:21:11 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v497: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 9.0 KiB/s rd, 0 B/s wr, 14 op/s
Oct  1 09:21:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:21:12.284 161890 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 09:21:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:21:12.285 161890 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 09:21:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:21:12.285 161890 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 09:21:13 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v498: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 14 KiB/s rd, 0 B/s wr, 23 op/s
Oct  1 09:21:15 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:21:15 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v499: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 14 KiB/s rd, 0 B/s wr, 23 op/s
Oct  1 09:21:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 09:21:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 09:21:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 09:21:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 09:21:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 09:21:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 09:21:17 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v500: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Oct  1 09:21:19 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v501: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 29 KiB/s rd, 0 B/s wr, 48 op/s
Oct  1 09:21:20 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:21:21 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v502: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 29 KiB/s rd, 0 B/s wr, 48 op/s
Oct  1 09:21:22 np0005464214 podman[169730]: 2025-10-01 13:21:22.744887316 +0000 UTC m=+0.466658492 container exec dfadbb96d7d51fa7c2ed721145616ced603621ae12bae5125feaf16532ef7320 (image=quay.io/ceph/ceph:v18, name=ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-mon-compute-0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 09:21:23 np0005464214 podman[169730]: 2025-10-01 13:21:23.146234644 +0000 UTC m=+0.868005770 container exec_died dfadbb96d7d51fa7c2ed721145616ced603621ae12bae5125feaf16532ef7320 (image=quay.io/ceph/ceph:v18, name=ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-mon-compute-0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3)
Oct  1 09:21:23 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  1 09:21:23 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:21:23 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  1 09:21:23 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:21:24 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v503: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 27 KiB/s rd, 0 B/s wr, 44 op/s
Oct  1 09:21:24 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  1 09:21:24 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  1 09:21:24 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  1 09:21:24 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  1 09:21:24 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  1 09:21:24 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:21:24 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev 6ce6cd43-fe6b-412f-ad00-3e0a0afe6fe5 does not exist
Oct  1 09:21:24 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev 33e59683-38e4-46c3-aab5-f05e03cd004c does not exist
Oct  1 09:21:24 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev 0e499d79-b566-498d-b66b-45c0a8f461e0 does not exist
Oct  1 09:21:24 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  1 09:21:24 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  1 09:21:24 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  1 09:21:24 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  1 09:21:24 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  1 09:21:24 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  1 09:21:24 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:21:24 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:21:24 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  1 09:21:24 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:21:24 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  1 09:21:25 np0005464214 podman[170156]: 2025-10-01 13:21:25.371578735 +0000 UTC m=+0.037656969 container create ff5f928845bada0e566ab63e2db2173d1009258044cb922e10efc754d5a86ee2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_allen, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 09:21:25 np0005464214 systemd[1]: Started libpod-conmon-ff5f928845bada0e566ab63e2db2173d1009258044cb922e10efc754d5a86ee2.scope.
Oct  1 09:21:25 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:21:25 np0005464214 podman[170156]: 2025-10-01 13:21:25.354651659 +0000 UTC m=+0.020729923 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 09:21:25 np0005464214 podman[170156]: 2025-10-01 13:21:25.457753854 +0000 UTC m=+0.123832098 container init ff5f928845bada0e566ab63e2db2173d1009258044cb922e10efc754d5a86ee2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_allen, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 09:21:25 np0005464214 podman[170156]: 2025-10-01 13:21:25.46387765 +0000 UTC m=+0.129955894 container start ff5f928845bada0e566ab63e2db2173d1009258044cb922e10efc754d5a86ee2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_allen, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Oct  1 09:21:25 np0005464214 podman[170156]: 2025-10-01 13:21:25.469336756 +0000 UTC m=+0.135415020 container attach ff5f928845bada0e566ab63e2db2173d1009258044cb922e10efc754d5a86ee2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_allen, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 09:21:25 np0005464214 zen_allen[170173]: 167 167
Oct  1 09:21:25 np0005464214 podman[170156]: 2025-10-01 13:21:25.472155212 +0000 UTC m=+0.138233446 container died ff5f928845bada0e566ab63e2db2173d1009258044cb922e10efc754d5a86ee2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_allen, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 09:21:25 np0005464214 systemd[1]: libpod-ff5f928845bada0e566ab63e2db2173d1009258044cb922e10efc754d5a86ee2.scope: Deactivated successfully.
Oct  1 09:21:25 np0005464214 systemd[1]: var-lib-containers-storage-overlay-b8349ccc87ce1421157b481f70852ed33219650095a40f5ba3763336901207dc-merged.mount: Deactivated successfully.
Oct  1 09:21:25 np0005464214 podman[170156]: 2025-10-01 13:21:25.534905396 +0000 UTC m=+0.200983630 container remove ff5f928845bada0e566ab63e2db2173d1009258044cb922e10efc754d5a86ee2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_allen, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 09:21:25 np0005464214 systemd[1]: libpod-conmon-ff5f928845bada0e566ab63e2db2173d1009258044cb922e10efc754d5a86ee2.scope: Deactivated successfully.
Oct  1 09:21:25 np0005464214 podman[170199]: 2025-10-01 13:21:25.744811997 +0000 UTC m=+0.064233950 container create 0e24f075da8e0ccaaffaa4e0c3df0aa2c5414137d0441314fb21e33d48c9f473 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_edison, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct  1 09:21:25 np0005464214 podman[170199]: 2025-10-01 13:21:25.710577343 +0000 UTC m=+0.029999356 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 09:21:25 np0005464214 systemd[1]: Started libpod-conmon-0e24f075da8e0ccaaffaa4e0c3df0aa2c5414137d0441314fb21e33d48c9f473.scope.
Oct  1 09:21:25 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:21:25 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9137cefe79cd583c7b0dfd6af639f70f8b3899f11101832dd3410de137376912/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 09:21:25 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9137cefe79cd583c7b0dfd6af639f70f8b3899f11101832dd3410de137376912/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 09:21:25 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9137cefe79cd583c7b0dfd6af639f70f8b3899f11101832dd3410de137376912/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 09:21:25 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9137cefe79cd583c7b0dfd6af639f70f8b3899f11101832dd3410de137376912/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 09:21:25 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9137cefe79cd583c7b0dfd6af639f70f8b3899f11101832dd3410de137376912/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  1 09:21:25 np0005464214 podman[170199]: 2025-10-01 13:21:25.90659184 +0000 UTC m=+0.226013803 container init 0e24f075da8e0ccaaffaa4e0c3df0aa2c5414137d0441314fb21e33d48c9f473 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_edison, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 09:21:25 np0005464214 podman[170199]: 2025-10-01 13:21:25.912685076 +0000 UTC m=+0.232106989 container start 0e24f075da8e0ccaaffaa4e0c3df0aa2c5414137d0441314fb21e33d48c9f473 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_edison, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct  1 09:21:25 np0005464214 podman[170199]: 2025-10-01 13:21:25.916129992 +0000 UTC m=+0.235552005 container attach 0e24f075da8e0ccaaffaa4e0c3df0aa2c5414137d0441314fb21e33d48c9f473 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_edison, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Oct  1 09:21:25 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:21:26 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v504: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 0 B/s wr, 36 op/s
Oct  1 09:21:26 np0005464214 nice_edison[170216]: --> passed data devices: 0 physical, 3 LVM
Oct  1 09:21:26 np0005464214 nice_edison[170216]: --> relative data size: 1.0
Oct  1 09:21:26 np0005464214 nice_edison[170216]: --> All data devices are unavailable
Oct  1 09:21:26 np0005464214 systemd[1]: libpod-0e24f075da8e0ccaaffaa4e0c3df0aa2c5414137d0441314fb21e33d48c9f473.scope: Deactivated successfully.
Oct  1 09:21:26 np0005464214 podman[170199]: 2025-10-01 13:21:26.930597858 +0000 UTC m=+1.250019781 container died 0e24f075da8e0ccaaffaa4e0c3df0aa2c5414137d0441314fb21e33d48c9f473 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_edison, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 09:21:26 np0005464214 systemd[1]: var-lib-containers-storage-overlay-9137cefe79cd583c7b0dfd6af639f70f8b3899f11101832dd3410de137376912-merged.mount: Deactivated successfully.
Oct  1 09:21:27 np0005464214 podman[170199]: 2025-10-01 13:21:27.015564828 +0000 UTC m=+1.334986741 container remove 0e24f075da8e0ccaaffaa4e0c3df0aa2c5414137d0441314fb21e33d48c9f473 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_edison, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 09:21:27 np0005464214 systemd[1]: libpod-conmon-0e24f075da8e0ccaaffaa4e0c3df0aa2c5414137d0441314fb21e33d48c9f473.scope: Deactivated successfully.
Oct  1 09:21:27 np0005464214 podman[170395]: 2025-10-01 13:21:27.579827716 +0000 UTC m=+0.023161968 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 09:21:27 np0005464214 podman[170395]: 2025-10-01 13:21:27.888011664 +0000 UTC m=+0.331345926 container create 04504f47c4e1920064582d033c7dd6877da06e14c8f15e30e4a0c5e37d801b3e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_bassi, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 09:21:28 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v505: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 0 B/s wr, 36 op/s
Oct  1 09:21:29 np0005464214 systemd[1]: Started libpod-conmon-04504f47c4e1920064582d033c7dd6877da06e14c8f15e30e4a0c5e37d801b3e.scope.
Oct  1 09:21:29 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:21:30 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v506: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:21:30 np0005464214 podman[170395]: 2025-10-01 13:21:30.237008174 +0000 UTC m=+2.680342516 container init 04504f47c4e1920064582d033c7dd6877da06e14c8f15e30e4a0c5e37d801b3e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_bassi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct  1 09:21:30 np0005464214 podman[170395]: 2025-10-01 13:21:30.249566097 +0000 UTC m=+2.692900369 container start 04504f47c4e1920064582d033c7dd6877da06e14c8f15e30e4a0c5e37d801b3e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_bassi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct  1 09:21:30 np0005464214 sweet_bassi[170415]: 167 167
Oct  1 09:21:30 np0005464214 systemd[1]: libpod-04504f47c4e1920064582d033c7dd6877da06e14c8f15e30e4a0c5e37d801b3e.scope: Deactivated successfully.
Oct  1 09:21:30 np0005464214 podman[170395]: 2025-10-01 13:21:30.261489671 +0000 UTC m=+2.704823913 container attach 04504f47c4e1920064582d033c7dd6877da06e14c8f15e30e4a0c5e37d801b3e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_bassi, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 09:21:30 np0005464214 podman[170395]: 2025-10-01 13:21:30.263363098 +0000 UTC m=+2.706697360 container died 04504f47c4e1920064582d033c7dd6877da06e14c8f15e30e4a0c5e37d801b3e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_bassi, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 09:21:30 np0005464214 systemd[1]: var-lib-containers-storage-overlay-af2579c222205e7844a7f0094f1e60865499f2a0b2ea24ef16584fd9b0a4743d-merged.mount: Deactivated successfully.
Oct  1 09:21:30 np0005464214 podman[170395]: 2025-10-01 13:21:30.36345695 +0000 UTC m=+2.806791212 container remove 04504f47c4e1920064582d033c7dd6877da06e14c8f15e30e4a0c5e37d801b3e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_bassi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct  1 09:21:30 np0005464214 systemd[1]: libpod-conmon-04504f47c4e1920064582d033c7dd6877da06e14c8f15e30e4a0c5e37d801b3e.scope: Deactivated successfully.
Oct  1 09:21:30 np0005464214 kernel: SELinux:  Converting 2765 SID table entries...
Oct  1 09:21:30 np0005464214 kernel: SELinux:  policy capability network_peer_controls=1
Oct  1 09:21:30 np0005464214 kernel: SELinux:  policy capability open_perms=1
Oct  1 09:21:30 np0005464214 kernel: SELinux:  policy capability extended_socket_class=1
Oct  1 09:21:30 np0005464214 kernel: SELinux:  policy capability always_check_network=0
Oct  1 09:21:30 np0005464214 kernel: SELinux:  policy capability cgroup_seclabel=1
Oct  1 09:21:30 np0005464214 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct  1 09:21:30 np0005464214 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct  1 09:21:30 np0005464214 podman[170439]: 2025-10-01 13:21:30.58154239 +0000 UTC m=+0.058239976 container create 7555e2123cd8f8b2a9b583cf80e2366f6ef641b4171cb518778c8c4d852c9a91 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_newton, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct  1 09:21:30 np0005464214 dbus-broker-launch[786]: avc:  op=load_policy lsm=selinux seqno=12 res=1
Oct  1 09:21:30 np0005464214 systemd[1]: Started libpod-conmon-7555e2123cd8f8b2a9b583cf80e2366f6ef641b4171cb518778c8c4d852c9a91.scope.
Oct  1 09:21:30 np0005464214 podman[170439]: 2025-10-01 13:21:30.554412233 +0000 UTC m=+0.031109829 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 09:21:30 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:21:30 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2015e5495683541549e0fed9410071f02ccf90d2302be81c46c117a62260e570/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 09:21:30 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2015e5495683541549e0fed9410071f02ccf90d2302be81c46c117a62260e570/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 09:21:30 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2015e5495683541549e0fed9410071f02ccf90d2302be81c46c117a62260e570/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 09:21:30 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2015e5495683541549e0fed9410071f02ccf90d2302be81c46c117a62260e570/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 09:21:30 np0005464214 podman[170439]: 2025-10-01 13:21:30.726480261 +0000 UTC m=+0.203177887 container init 7555e2123cd8f8b2a9b583cf80e2366f6ef641b4171cb518778c8c4d852c9a91 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_newton, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 09:21:30 np0005464214 podman[170439]: 2025-10-01 13:21:30.735347741 +0000 UTC m=+0.212045327 container start 7555e2123cd8f8b2a9b583cf80e2366f6ef641b4171cb518778c8c4d852c9a91 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_newton, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 09:21:30 np0005464214 podman[170439]: 2025-10-01 13:21:30.740248441 +0000 UTC m=+0.216946027 container attach 7555e2123cd8f8b2a9b583cf80e2366f6ef641b4171cb518778c8c4d852c9a91 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_newton, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 09:21:30 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:21:31 np0005464214 suspicious_newton[170457]: {
Oct  1 09:21:31 np0005464214 suspicious_newton[170457]:    "0": [
Oct  1 09:21:31 np0005464214 suspicious_newton[170457]:        {
Oct  1 09:21:31 np0005464214 suspicious_newton[170457]:            "devices": [
Oct  1 09:21:31 np0005464214 suspicious_newton[170457]:                "/dev/loop3"
Oct  1 09:21:31 np0005464214 suspicious_newton[170457]:            ],
Oct  1 09:21:31 np0005464214 suspicious_newton[170457]:            "lv_name": "ceph_lv0",
Oct  1 09:21:31 np0005464214 suspicious_newton[170457]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  1 09:21:31 np0005464214 suspicious_newton[170457]:            "lv_size": "21470642176",
Oct  1 09:21:31 np0005464214 suspicious_newton[170457]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 09:21:31 np0005464214 suspicious_newton[170457]:            "lv_uuid": "wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS",
Oct  1 09:21:31 np0005464214 suspicious_newton[170457]:            "name": "ceph_lv0",
Oct  1 09:21:31 np0005464214 suspicious_newton[170457]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  1 09:21:31 np0005464214 suspicious_newton[170457]:            "tags": {
Oct  1 09:21:31 np0005464214 suspicious_newton[170457]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  1 09:21:31 np0005464214 suspicious_newton[170457]:                "ceph.block_uuid": "wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS",
Oct  1 09:21:31 np0005464214 suspicious_newton[170457]:                "ceph.cephx_lockbox_secret": "",
Oct  1 09:21:31 np0005464214 suspicious_newton[170457]:                "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 09:21:31 np0005464214 suspicious_newton[170457]:                "ceph.cluster_name": "ceph",
Oct  1 09:21:31 np0005464214 suspicious_newton[170457]:                "ceph.crush_device_class": "",
Oct  1 09:21:31 np0005464214 suspicious_newton[170457]:                "ceph.encrypted": "0",
Oct  1 09:21:31 np0005464214 suspicious_newton[170457]:                "ceph.osd_fsid": "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982",
Oct  1 09:21:31 np0005464214 suspicious_newton[170457]:                "ceph.osd_id": "0",
Oct  1 09:21:31 np0005464214 suspicious_newton[170457]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 09:21:31 np0005464214 suspicious_newton[170457]:                "ceph.type": "block",
Oct  1 09:21:31 np0005464214 suspicious_newton[170457]:                "ceph.vdo": "0"
Oct  1 09:21:31 np0005464214 suspicious_newton[170457]:            },
Oct  1 09:21:31 np0005464214 suspicious_newton[170457]:            "type": "block",
Oct  1 09:21:31 np0005464214 suspicious_newton[170457]:            "vg_name": "ceph_vg0"
Oct  1 09:21:31 np0005464214 suspicious_newton[170457]:        }
Oct  1 09:21:31 np0005464214 suspicious_newton[170457]:    ],
Oct  1 09:21:31 np0005464214 suspicious_newton[170457]:    "1": [
Oct  1 09:21:31 np0005464214 suspicious_newton[170457]:        {
Oct  1 09:21:31 np0005464214 suspicious_newton[170457]:            "devices": [
Oct  1 09:21:31 np0005464214 suspicious_newton[170457]:                "/dev/loop4"
Oct  1 09:21:31 np0005464214 suspicious_newton[170457]:            ],
Oct  1 09:21:31 np0005464214 suspicious_newton[170457]:            "lv_name": "ceph_lv1",
Oct  1 09:21:31 np0005464214 suspicious_newton[170457]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  1 09:21:31 np0005464214 suspicious_newton[170457]:            "lv_size": "21470642176",
Oct  1 09:21:31 np0005464214 suspicious_newton[170457]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f5852bc7-e830-489a-b8a9-42dfbbe71426,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 09:21:31 np0005464214 suspicious_newton[170457]:            "lv_uuid": "BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY",
Oct  1 09:21:31 np0005464214 suspicious_newton[170457]:            "name": "ceph_lv1",
Oct  1 09:21:31 np0005464214 suspicious_newton[170457]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  1 09:21:31 np0005464214 suspicious_newton[170457]:            "tags": {
Oct  1 09:21:31 np0005464214 suspicious_newton[170457]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  1 09:21:31 np0005464214 suspicious_newton[170457]:                "ceph.block_uuid": "BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY",
Oct  1 09:21:31 np0005464214 suspicious_newton[170457]:                "ceph.cephx_lockbox_secret": "",
Oct  1 09:21:31 np0005464214 suspicious_newton[170457]:                "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 09:21:31 np0005464214 suspicious_newton[170457]:                "ceph.cluster_name": "ceph",
Oct  1 09:21:31 np0005464214 suspicious_newton[170457]:                "ceph.crush_device_class": "",
Oct  1 09:21:31 np0005464214 suspicious_newton[170457]:                "ceph.encrypted": "0",
Oct  1 09:21:31 np0005464214 suspicious_newton[170457]:                "ceph.osd_fsid": "f5852bc7-e830-489a-b8a9-42dfbbe71426",
Oct  1 09:21:31 np0005464214 suspicious_newton[170457]:                "ceph.osd_id": "1",
Oct  1 09:21:31 np0005464214 suspicious_newton[170457]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 09:21:31 np0005464214 suspicious_newton[170457]:                "ceph.type": "block",
Oct  1 09:21:31 np0005464214 suspicious_newton[170457]:                "ceph.vdo": "0"
Oct  1 09:21:31 np0005464214 suspicious_newton[170457]:            },
Oct  1 09:21:31 np0005464214 suspicious_newton[170457]:            "type": "block",
Oct  1 09:21:31 np0005464214 suspicious_newton[170457]:            "vg_name": "ceph_vg1"
Oct  1 09:21:31 np0005464214 suspicious_newton[170457]:        }
Oct  1 09:21:31 np0005464214 suspicious_newton[170457]:    ],
Oct  1 09:21:31 np0005464214 suspicious_newton[170457]:    "2": [
Oct  1 09:21:31 np0005464214 suspicious_newton[170457]:        {
Oct  1 09:21:31 np0005464214 suspicious_newton[170457]:            "devices": [
Oct  1 09:21:31 np0005464214 suspicious_newton[170457]:                "/dev/loop5"
Oct  1 09:21:31 np0005464214 suspicious_newton[170457]:            ],
Oct  1 09:21:31 np0005464214 suspicious_newton[170457]:            "lv_name": "ceph_lv2",
Oct  1 09:21:31 np0005464214 suspicious_newton[170457]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  1 09:21:31 np0005464214 suspicious_newton[170457]:            "lv_size": "21470642176",
Oct  1 09:21:31 np0005464214 suspicious_newton[170457]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c4c937e2-a8a8-47c3-af37-fdedb6fff1f9,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 09:21:31 np0005464214 suspicious_newton[170457]:            "lv_uuid": "mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK",
Oct  1 09:21:31 np0005464214 suspicious_newton[170457]:            "name": "ceph_lv2",
Oct  1 09:21:31 np0005464214 suspicious_newton[170457]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  1 09:21:31 np0005464214 suspicious_newton[170457]:            "tags": {
Oct  1 09:21:31 np0005464214 suspicious_newton[170457]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  1 09:21:31 np0005464214 suspicious_newton[170457]:                "ceph.block_uuid": "mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK",
Oct  1 09:21:31 np0005464214 suspicious_newton[170457]:                "ceph.cephx_lockbox_secret": "",
Oct  1 09:21:31 np0005464214 suspicious_newton[170457]:                "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 09:21:31 np0005464214 suspicious_newton[170457]:                "ceph.cluster_name": "ceph",
Oct  1 09:21:31 np0005464214 suspicious_newton[170457]:                "ceph.crush_device_class": "",
Oct  1 09:21:31 np0005464214 suspicious_newton[170457]:                "ceph.encrypted": "0",
Oct  1 09:21:31 np0005464214 suspicious_newton[170457]:                "ceph.osd_fsid": "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9",
Oct  1 09:21:31 np0005464214 suspicious_newton[170457]:                "ceph.osd_id": "2",
Oct  1 09:21:31 np0005464214 suspicious_newton[170457]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 09:21:31 np0005464214 suspicious_newton[170457]:                "ceph.type": "block",
Oct  1 09:21:31 np0005464214 suspicious_newton[170457]:                "ceph.vdo": "0"
Oct  1 09:21:31 np0005464214 suspicious_newton[170457]:            },
Oct  1 09:21:31 np0005464214 suspicious_newton[170457]:            "type": "block",
Oct  1 09:21:31 np0005464214 suspicious_newton[170457]:            "vg_name": "ceph_vg2"
Oct  1 09:21:31 np0005464214 suspicious_newton[170457]:        }
Oct  1 09:21:31 np0005464214 suspicious_newton[170457]:    ]
Oct  1 09:21:31 np0005464214 suspicious_newton[170457]: }
Oct  1 09:21:31 np0005464214 systemd[1]: libpod-7555e2123cd8f8b2a9b583cf80e2366f6ef641b4171cb518778c8c4d852c9a91.scope: Deactivated successfully.
Oct  1 09:21:31 np0005464214 podman[170439]: 2025-10-01 13:21:31.472702226 +0000 UTC m=+0.949399822 container died 7555e2123cd8f8b2a9b583cf80e2366f6ef641b4171cb518778c8c4d852c9a91 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_newton, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Oct  1 09:21:31 np0005464214 systemd[1]: var-lib-containers-storage-overlay-2015e5495683541549e0fed9410071f02ccf90d2302be81c46c117a62260e570-merged.mount: Deactivated successfully.
Oct  1 09:21:31 np0005464214 podman[170439]: 2025-10-01 13:21:31.531055046 +0000 UTC m=+1.007752622 container remove 7555e2123cd8f8b2a9b583cf80e2366f6ef641b4171cb518778c8c4d852c9a91 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_newton, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 09:21:31 np0005464214 systemd[1]: libpod-conmon-7555e2123cd8f8b2a9b583cf80e2366f6ef641b4171cb518778c8c4d852c9a91.scope: Deactivated successfully.
Oct  1 09:21:32 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v507: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:21:32 np0005464214 podman[170619]: 2025-10-01 13:21:32.184857343 +0000 UTC m=+0.046411437 container create 0ae273b94491c2280e9c27792e216b65fabc1b672a162f71d8308fc29cf855f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_pasteur, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True)
Oct  1 09:21:32 np0005464214 systemd[1]: Started libpod-conmon-0ae273b94491c2280e9c27792e216b65fabc1b672a162f71d8308fc29cf855f6.scope.
Oct  1 09:21:32 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:21:32 np0005464214 podman[170619]: 2025-10-01 13:21:32.257952112 +0000 UTC m=+0.119506236 container init 0ae273b94491c2280e9c27792e216b65fabc1b672a162f71d8308fc29cf855f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_pasteur, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 09:21:32 np0005464214 podman[170619]: 2025-10-01 13:21:32.167269647 +0000 UTC m=+0.028823761 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 09:21:32 np0005464214 podman[170619]: 2025-10-01 13:21:32.269223776 +0000 UTC m=+0.130777940 container start 0ae273b94491c2280e9c27792e216b65fabc1b672a162f71d8308fc29cf855f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_pasteur, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 09:21:32 np0005464214 awesome_pasteur[170635]: 167 167
Oct  1 09:21:32 np0005464214 podman[170619]: 2025-10-01 13:21:32.27394174 +0000 UTC m=+0.135495834 container attach 0ae273b94491c2280e9c27792e216b65fabc1b672a162f71d8308fc29cf855f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_pasteur, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2)
Oct  1 09:21:32 np0005464214 systemd[1]: libpod-0ae273b94491c2280e9c27792e216b65fabc1b672a162f71d8308fc29cf855f6.scope: Deactivated successfully.
Oct  1 09:21:32 np0005464214 conmon[170635]: conmon 0ae273b94491c2280e9c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-0ae273b94491c2280e9c27792e216b65fabc1b672a162f71d8308fc29cf855f6.scope/container/memory.events
Oct  1 09:21:32 np0005464214 podman[170619]: 2025-10-01 13:21:32.277429666 +0000 UTC m=+0.138983800 container died 0ae273b94491c2280e9c27792e216b65fabc1b672a162f71d8308fc29cf855f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_pasteur, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Oct  1 09:21:32 np0005464214 systemd[1]: var-lib-containers-storage-overlay-c7a1d9594200cfb2e69037b24d3540519cc1f089eb6a5d2374cea479ea6800d1-merged.mount: Deactivated successfully.
Oct  1 09:21:32 np0005464214 podman[170619]: 2025-10-01 13:21:32.317781316 +0000 UTC m=+0.179335420 container remove 0ae273b94491c2280e9c27792e216b65fabc1b672a162f71d8308fc29cf855f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_pasteur, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 09:21:32 np0005464214 systemd[1]: libpod-conmon-0ae273b94491c2280e9c27792e216b65fabc1b672a162f71d8308fc29cf855f6.scope: Deactivated successfully.
Oct  1 09:21:32 np0005464214 podman[170659]: 2025-10-01 13:21:32.533722131 +0000 UTC m=+0.071461789 container create cb45f549933857b654d478a35783a1b3df8357865953cdea7ea24dfd143c1a8b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_poincare, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 09:21:32 np0005464214 podman[170659]: 2025-10-01 13:21:32.486101639 +0000 UTC m=+0.023841317 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 09:21:32 np0005464214 systemd[1]: Started libpod-conmon-cb45f549933857b654d478a35783a1b3df8357865953cdea7ea24dfd143c1a8b.scope.
Oct  1 09:21:32 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:21:32 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4767c2c018d549a07afdd896580f4a9459aa359877e2e82651b8e01fd6a452c7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 09:21:32 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4767c2c018d549a07afdd896580f4a9459aa359877e2e82651b8e01fd6a452c7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 09:21:32 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4767c2c018d549a07afdd896580f4a9459aa359877e2e82651b8e01fd6a452c7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 09:21:32 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4767c2c018d549a07afdd896580f4a9459aa359877e2e82651b8e01fd6a452c7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 09:21:32 np0005464214 podman[170659]: 2025-10-01 13:21:32.647417958 +0000 UTC m=+0.185157676 container init cb45f549933857b654d478a35783a1b3df8357865953cdea7ea24dfd143c1a8b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_poincare, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Oct  1 09:21:32 np0005464214 podman[170659]: 2025-10-01 13:21:32.653163463 +0000 UTC m=+0.190903151 container start cb45f549933857b654d478a35783a1b3df8357865953cdea7ea24dfd143c1a8b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_poincare, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 09:21:32 np0005464214 podman[170659]: 2025-10-01 13:21:32.659587199 +0000 UTC m=+0.197326917 container attach cb45f549933857b654d478a35783a1b3df8357865953cdea7ea24dfd143c1a8b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_poincare, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct  1 09:21:33 np0005464214 fervent_poincare[170676]: {
Oct  1 09:21:33 np0005464214 fervent_poincare[170676]:    "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982": {
Oct  1 09:21:33 np0005464214 fervent_poincare[170676]:        "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 09:21:33 np0005464214 fervent_poincare[170676]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  1 09:21:33 np0005464214 fervent_poincare[170676]:        "osd_id": 0,
Oct  1 09:21:33 np0005464214 fervent_poincare[170676]:        "osd_uuid": "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982",
Oct  1 09:21:33 np0005464214 fervent_poincare[170676]:        "type": "bluestore"
Oct  1 09:21:33 np0005464214 fervent_poincare[170676]:    },
Oct  1 09:21:33 np0005464214 fervent_poincare[170676]:    "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9": {
Oct  1 09:21:33 np0005464214 fervent_poincare[170676]:        "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 09:21:33 np0005464214 fervent_poincare[170676]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  1 09:21:33 np0005464214 fervent_poincare[170676]:        "osd_id": 2,
Oct  1 09:21:33 np0005464214 fervent_poincare[170676]:        "osd_uuid": "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9",
Oct  1 09:21:33 np0005464214 fervent_poincare[170676]:        "type": "bluestore"
Oct  1 09:21:33 np0005464214 fervent_poincare[170676]:    },
Oct  1 09:21:33 np0005464214 fervent_poincare[170676]:    "f5852bc7-e830-489a-b8a9-42dfbbe71426": {
Oct  1 09:21:33 np0005464214 fervent_poincare[170676]:        "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 09:21:33 np0005464214 fervent_poincare[170676]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  1 09:21:33 np0005464214 fervent_poincare[170676]:        "osd_id": 1,
Oct  1 09:21:33 np0005464214 fervent_poincare[170676]:        "osd_uuid": "f5852bc7-e830-489a-b8a9-42dfbbe71426",
Oct  1 09:21:33 np0005464214 fervent_poincare[170676]:        "type": "bluestore"
Oct  1 09:21:33 np0005464214 fervent_poincare[170676]:    }
Oct  1 09:21:33 np0005464214 fervent_poincare[170676]: }
Oct  1 09:21:33 np0005464214 systemd[1]: libpod-cb45f549933857b654d478a35783a1b3df8357865953cdea7ea24dfd143c1a8b.scope: Deactivated successfully.
Oct  1 09:21:33 np0005464214 systemd[1]: libpod-cb45f549933857b654d478a35783a1b3df8357865953cdea7ea24dfd143c1a8b.scope: Consumed 1.038s CPU time.
Oct  1 09:21:33 np0005464214 podman[170709]: 2025-10-01 13:21:33.727356461 +0000 UTC m=+0.023411915 container died cb45f549933857b654d478a35783a1b3df8357865953cdea7ea24dfd143c1a8b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_poincare, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct  1 09:21:33 np0005464214 systemd[1]: var-lib-containers-storage-overlay-4767c2c018d549a07afdd896580f4a9459aa359877e2e82651b8e01fd6a452c7-merged.mount: Deactivated successfully.
Oct  1 09:21:33 np0005464214 podman[170709]: 2025-10-01 13:21:33.797856341 +0000 UTC m=+0.093911805 container remove cb45f549933857b654d478a35783a1b3df8357865953cdea7ea24dfd143c1a8b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_poincare, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 09:21:33 np0005464214 systemd[1]: libpod-conmon-cb45f549933857b654d478a35783a1b3df8357865953cdea7ea24dfd143c1a8b.scope: Deactivated successfully.
Oct  1 09:21:33 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  1 09:21:33 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:21:33 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  1 09:21:33 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:21:33 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev 9ca546af-3a9f-474a-aeb3-2cd533f639f9 does not exist
Oct  1 09:21:33 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev f5b3ba6c-fb01-4cd1-bccb-b6789b3b0719 does not exist
Oct  1 09:21:34 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v508: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:21:34 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:21:34 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:21:35 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:21:36 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v509: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:21:36 np0005464214 podman[170774]: 2025-10-01 13:21:36.574779482 +0000 UTC m=+0.122609041 container health_status 583cde33fa111e3f81ff9a7adf51b7bee43cd123d54d60383b2d474b3b5066ad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20250923, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_id=ovn_controller, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Oct  1 09:21:38 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v510: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:21:40 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v511: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:21:40 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:21:41 np0005464214 podman[170804]: 2025-10-01 13:21:41.522810729 +0000 UTC m=+0.077250257 container health_status dfa509645741d50db256c392f2c7e50ef8a1653f296eb853aefbe3d2ba4746c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20250923, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=36bccb96575468ec919301205d8daa2c)
Oct  1 09:21:41 np0005464214 kernel: SELinux:  Converting 2765 SID table entries...
Oct  1 09:21:41 np0005464214 kernel: SELinux:  policy capability network_peer_controls=1
Oct  1 09:21:41 np0005464214 kernel: SELinux:  policy capability open_perms=1
Oct  1 09:21:41 np0005464214 kernel: SELinux:  policy capability extended_socket_class=1
Oct  1 09:21:41 np0005464214 kernel: SELinux:  policy capability always_check_network=0
Oct  1 09:21:41 np0005464214 kernel: SELinux:  policy capability cgroup_seclabel=1
Oct  1 09:21:41 np0005464214 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct  1 09:21:41 np0005464214 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct  1 09:21:42 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v512: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:21:44 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v513: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:21:45 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:21:46 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v514: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:21:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] Optimize plan auto_2025-10-01_13:21:47
Oct  1 09:21:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  1 09:21:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] do_upmap
Oct  1 09:21:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] pools ['default.rgw.log', 'backups', 'images', '.mgr', 'cephfs.cephfs.data', '.rgw.root', 'default.rgw.meta', 'vms', 'default.rgw.control', 'volumes', 'cephfs.cephfs.meta']
Oct  1 09:21:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] prepared 0/10 changes
Oct  1 09:21:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 09:21:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 09:21:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 09:21:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 09:21:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 09:21:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 09:21:47 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  1 09:21:47 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  1 09:21:47 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  1 09:21:47 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  1 09:21:47 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  1 09:21:47 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  1 09:21:47 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  1 09:21:47 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  1 09:21:47 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  1 09:21:47 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  1 09:21:48 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v515: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:21:50 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v516: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:21:50 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:21:52 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v517: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:21:54 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v518: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:21:55 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:21:56 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v519: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:21:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] _maybe_adjust
Oct  1 09:21:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:21:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  1 09:21:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:21:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 09:21:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:21:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 09:21:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:21:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 09:21:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:21:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 09:21:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:21:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct  1 09:21:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:21:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 09:21:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:21:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  1 09:21:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:21:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  1 09:21:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:21:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 09:21:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:21:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  1 09:21:58 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v520: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:22:00 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v521: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:22:00 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:22:02 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v522: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:22:04 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v523: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:22:05 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:22:06 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v524: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:22:07 np0005464214 dbus-broker-launch[786]: avc:  op=load_policy lsm=selinux seqno=13 res=1
Oct  1 09:22:07 np0005464214 podman[179868]: 2025-10-01 13:22:07.532525225 +0000 UTC m=+0.085593782 container health_status 583cde33fa111e3f81ff9a7adf51b7bee43cd123d54d60383b2d474b3b5066ad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20250923, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_id=ovn_controller, io.buildah.version=1.41.3)
Oct  1 09:22:08 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v525: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:22:10 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v526: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:22:10 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:22:12 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v527: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:22:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:22:12.285 161890 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 09:22:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:22:12.285 161890 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 09:22:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:22:12.285 161890 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 09:22:12 np0005464214 podman[183463]: 2025-10-01 13:22:12.486557085 +0000 UTC m=+0.047574212 container health_status dfa509645741d50db256c392f2c7e50ef8a1653f296eb853aefbe3d2ba4746c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20250923, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  1 09:22:14 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v528: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:22:15 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:22:16 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v529: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:22:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 09:22:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 09:22:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 09:22:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 09:22:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 09:22:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 09:22:18 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v530: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:22:20 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v531: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:22:20 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:22:22 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v532: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:22:24 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v533: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:22:25 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:22:26 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v534: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:22:28 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v535: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:22:30 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v536: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:22:31 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:22:31 np0005464214 kernel: SELinux:  Converting 2766 SID table entries...
Oct  1 09:22:31 np0005464214 kernel: SELinux:  policy capability network_peer_controls=1
Oct  1 09:22:31 np0005464214 kernel: SELinux:  policy capability open_perms=1
Oct  1 09:22:31 np0005464214 kernel: SELinux:  policy capability extended_socket_class=1
Oct  1 09:22:31 np0005464214 kernel: SELinux:  policy capability always_check_network=0
Oct  1 09:22:31 np0005464214 kernel: SELinux:  policy capability cgroup_seclabel=1
Oct  1 09:22:31 np0005464214 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct  1 09:22:31 np0005464214 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct  1 09:22:32 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v537: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:22:33 np0005464214 dbus-broker-launch[784]: Noticed file-system modification, trigger reload.
Oct  1 09:22:33 np0005464214 dbus-broker-launch[786]: avc:  op=load_policy lsm=selinux seqno=14 res=1
Oct  1 09:22:33 np0005464214 dbus-broker-launch[784]: Noticed file-system modification, trigger reload.
Oct  1 09:22:34 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v538: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:22:35 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  1 09:22:35 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  1 09:22:35 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  1 09:22:35 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  1 09:22:35 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  1 09:22:35 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:22:35 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev f2f0e995-9a5f-4deb-a695-bb94464eeb69 does not exist
Oct  1 09:22:35 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev c8a35c61-dc4c-4904-8559-584df1b5d10f does not exist
Oct  1 09:22:35 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev 4eb54050-b67c-46cc-be43-c417fc8c5886 does not exist
Oct  1 09:22:35 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  1 09:22:35 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  1 09:22:35 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  1 09:22:35 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:22:35 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  1 09:22:35 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  1 09:22:35 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  1 09:22:35 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  1 09:22:35 np0005464214 podman[187954]: 2025-10-01 13:22:35.828144024 +0000 UTC m=+0.048742890 container create 86298f023acbbd476528bc21556d25f86e9b08147bdf0b7dff74bf84426c67f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_goldstine, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct  1 09:22:35 np0005464214 systemd[1]: Started libpod-conmon-86298f023acbbd476528bc21556d25f86e9b08147bdf0b7dff74bf84426c67f4.scope.
Oct  1 09:22:35 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:22:35 np0005464214 podman[187954]: 2025-10-01 13:22:35.801954043 +0000 UTC m=+0.022552929 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 09:22:35 np0005464214 podman[187954]: 2025-10-01 13:22:35.947523521 +0000 UTC m=+0.168122397 container init 86298f023acbbd476528bc21556d25f86e9b08147bdf0b7dff74bf84426c67f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_goldstine, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct  1 09:22:35 np0005464214 podman[187954]: 2025-10-01 13:22:35.957802913 +0000 UTC m=+0.178401759 container start 86298f023acbbd476528bc21556d25f86e9b08147bdf0b7dff74bf84426c67f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_goldstine, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 09:22:35 np0005464214 gifted_goldstine[187969]: 167 167
Oct  1 09:22:35 np0005464214 systemd[1]: libpod-86298f023acbbd476528bc21556d25f86e9b08147bdf0b7dff74bf84426c67f4.scope: Deactivated successfully.
Oct  1 09:22:36 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:22:36 np0005464214 podman[187954]: 2025-10-01 13:22:36.032896049 +0000 UTC m=+0.253494915 container attach 86298f023acbbd476528bc21556d25f86e9b08147bdf0b7dff74bf84426c67f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_goldstine, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 09:22:36 np0005464214 podman[187954]: 2025-10-01 13:22:36.033369274 +0000 UTC m=+0.253968130 container died 86298f023acbbd476528bc21556d25f86e9b08147bdf0b7dff74bf84426c67f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_goldstine, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Oct  1 09:22:36 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v539: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:22:36 np0005464214 systemd[1]: var-lib-containers-storage-overlay-47f7ca6c32b1fa035030564905bdeded350af8b90f90fe853d4c59b271c6437a-merged.mount: Deactivated successfully.
Oct  1 09:22:36 np0005464214 podman[187954]: 2025-10-01 13:22:36.183498006 +0000 UTC m=+0.404096872 container remove 86298f023acbbd476528bc21556d25f86e9b08147bdf0b7dff74bf84426c67f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_goldstine, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 09:22:36 np0005464214 systemd[1]: libpod-conmon-86298f023acbbd476528bc21556d25f86e9b08147bdf0b7dff74bf84426c67f4.scope: Deactivated successfully.
Oct  1 09:22:36 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  1 09:22:36 np0005464214 podman[187999]: 2025-10-01 13:22:36.378569587 +0000 UTC m=+0.079093823 container create af969803ca5eaf7d48c0c22ab222d3565b9ac330c5411d2ffbaed4ed778d55f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_raman, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 09:22:36 np0005464214 podman[187999]: 2025-10-01 13:22:36.322045703 +0000 UTC m=+0.022569979 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 09:22:36 np0005464214 systemd[1]: Started libpod-conmon-af969803ca5eaf7d48c0c22ab222d3565b9ac330c5411d2ffbaed4ed778d55f7.scope.
Oct  1 09:22:36 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:22:36 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6e086e89d3b829757d25744026adb6561f3ee8d955c15e5b03f95f3edc22a98/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 09:22:36 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6e086e89d3b829757d25744026adb6561f3ee8d955c15e5b03f95f3edc22a98/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 09:22:36 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6e086e89d3b829757d25744026adb6561f3ee8d955c15e5b03f95f3edc22a98/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 09:22:36 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6e086e89d3b829757d25744026adb6561f3ee8d955c15e5b03f95f3edc22a98/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 09:22:36 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6e086e89d3b829757d25744026adb6561f3ee8d955c15e5b03f95f3edc22a98/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  1 09:22:36 np0005464214 podman[187999]: 2025-10-01 13:22:36.665050416 +0000 UTC m=+0.365574712 container init af969803ca5eaf7d48c0c22ab222d3565b9ac330c5411d2ffbaed4ed778d55f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_raman, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 09:22:36 np0005464214 podman[187999]: 2025-10-01 13:22:36.672451078 +0000 UTC m=+0.372975334 container start af969803ca5eaf7d48c0c22ab222d3565b9ac330c5411d2ffbaed4ed778d55f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_raman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2)
Oct  1 09:22:36 np0005464214 podman[187999]: 2025-10-01 13:22:36.715002364 +0000 UTC m=+0.415526610 container attach af969803ca5eaf7d48c0c22ab222d3565b9ac330c5411d2ffbaed4ed778d55f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_raman, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 09:22:37 np0005464214 ceph-mon[74802]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #27. Immutable memtables: 0.
Oct  1 09:22:37 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:22:37.434195) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  1 09:22:37 np0005464214 ceph-mon[74802]: rocksdb: [db/flush_job.cc:856] [default] [JOB 9] Flushing memtable with next log file: 27
Oct  1 09:22:37 np0005464214 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759324957434259, "job": 9, "event": "flush_started", "num_memtables": 1, "num_entries": 2043, "num_deletes": 251, "total_data_size": 3517307, "memory_usage": 3578616, "flush_reason": "Manual Compaction"}
Oct  1 09:22:37 np0005464214 ceph-mon[74802]: rocksdb: [db/flush_job.cc:885] [default] [JOB 9] Level-0 flush table #28: started
Oct  1 09:22:37 np0005464214 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759324957520517, "cf_name": "default", "job": 9, "event": "table_file_creation", "file_number": 28, "file_size": 3442133, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 9794, "largest_seqno": 11836, "table_properties": {"data_size": 3432822, "index_size": 5933, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2309, "raw_key_size": 17879, "raw_average_key_size": 19, "raw_value_size": 3414395, "raw_average_value_size": 3719, "num_data_blocks": 269, "num_entries": 918, "num_filter_entries": 918, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759324724, "oldest_key_time": 1759324724, "file_creation_time": 1759324957, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e1fb0bd7-c6bf-40f2-8bfb-ffd039289f42", "db_session_id": "NJZTWL88H5HSB4Q4NEC9", "orig_file_number": 28, "seqno_to_time_mapping": "N/A"}}
Oct  1 09:22:37 np0005464214 ceph-mon[74802]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 9] Flush lasted 86362 microseconds, and 8598 cpu microseconds.
Oct  1 09:22:37 np0005464214 ceph-mon[74802]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  1 09:22:37 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:22:37.520559) [db/flush_job.cc:967] [default] [JOB 9] Level-0 flush table #28: 3442133 bytes OK
Oct  1 09:22:37 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:22:37.520589) [db/memtable_list.cc:519] [default] Level-0 commit table #28 started
Oct  1 09:22:37 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:22:37.522372) [db/memtable_list.cc:722] [default] Level-0 commit table #28: memtable #1 done
Oct  1 09:22:37 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:22:37.522385) EVENT_LOG_v1 {"time_micros": 1759324957522381, "job": 9, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  1 09:22:37 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:22:37.522402) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  1 09:22:37 np0005464214 ceph-mon[74802]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 9] Try to delete WAL files size 3508779, prev total WAL file size 3508779, number of live WAL files 2.
Oct  1 09:22:37 np0005464214 ceph-mon[74802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000024.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  1 09:22:37 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:22:37.523373) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F7300353032' seq:72057594037927935, type:22 .. '7061786F7300373534' seq:0, type:0; will stop at (end)
Oct  1 09:22:37 np0005464214 ceph-mon[74802]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 10] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  1 09:22:37 np0005464214 ceph-mon[74802]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 9 Base level 0, inputs: [28(3361KB)], [26(6158KB)]
Oct  1 09:22:37 np0005464214 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759324957523399, "job": 10, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [28], "files_L6": [26], "score": -1, "input_data_size": 9748543, "oldest_snapshot_seqno": -1}
Oct  1 09:22:37 np0005464214 ceph-mon[74802]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 10] Generated table #29: 3734 keys, 8024534 bytes, temperature: kUnknown
Oct  1 09:22:37 np0005464214 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759324957633492, "cf_name": "default", "job": 10, "event": "table_file_creation", "file_number": 29, "file_size": 8024534, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7995588, "index_size": 18532, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 9349, "raw_key_size": 89684, "raw_average_key_size": 24, "raw_value_size": 7924204, "raw_average_value_size": 2122, "num_data_blocks": 801, "num_entries": 3734, "num_filter_entries": 3734, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759324077, "oldest_key_time": 0, "file_creation_time": 1759324957, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e1fb0bd7-c6bf-40f2-8bfb-ffd039289f42", "db_session_id": "NJZTWL88H5HSB4Q4NEC9", "orig_file_number": 29, "seqno_to_time_mapping": "N/A"}}
Oct  1 09:22:37 np0005464214 ceph-mon[74802]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  1 09:22:37 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:22:37.633768) [db/compaction/compaction_job.cc:1663] [default] [JOB 10] Compacted 1@0 + 1@6 files to L6 => 8024534 bytes
Oct  1 09:22:37 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:22:37.635854) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 88.5 rd, 72.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.3, 6.0 +0.0 blob) out(7.7 +0.0 blob), read-write-amplify(5.2) write-amplify(2.3) OK, records in: 4248, records dropped: 514 output_compression: NoCompression
Oct  1 09:22:37 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:22:37.635869) EVENT_LOG_v1 {"time_micros": 1759324957635862, "job": 10, "event": "compaction_finished", "compaction_time_micros": 110187, "compaction_time_cpu_micros": 16760, "output_level": 6, "num_output_files": 1, "total_output_size": 8024534, "num_input_records": 4248, "num_output_records": 3734, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  1 09:22:37 np0005464214 ceph-mon[74802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000028.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  1 09:22:37 np0005464214 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759324957636447, "job": 10, "event": "table_file_deletion", "file_number": 28}
Oct  1 09:22:37 np0005464214 ceph-mon[74802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000026.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  1 09:22:37 np0005464214 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759324957637365, "job": 10, "event": "table_file_deletion", "file_number": 26}
Oct  1 09:22:37 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:22:37.523299) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 09:22:37 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:22:37.637390) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 09:22:37 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:22:37.637394) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 09:22:37 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:22:37.637395) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 09:22:37 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:22:37.637397) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 09:22:37 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:22:37.637398) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 09:22:37 np0005464214 romantic_raman[188017]: --> passed data devices: 0 physical, 3 LVM
Oct  1 09:22:37 np0005464214 romantic_raman[188017]: --> relative data size: 1.0
Oct  1 09:22:37 np0005464214 romantic_raman[188017]: --> All data devices are unavailable
Oct  1 09:22:37 np0005464214 systemd[1]: libpod-af969803ca5eaf7d48c0c22ab222d3565b9ac330c5411d2ffbaed4ed778d55f7.scope: Deactivated successfully.
Oct  1 09:22:37 np0005464214 podman[187999]: 2025-10-01 13:22:37.752702657 +0000 UTC m=+1.453226893 container died af969803ca5eaf7d48c0c22ab222d3565b9ac330c5411d2ffbaed4ed778d55f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_raman, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 09:22:37 np0005464214 systemd[1]: var-lib-containers-storage-overlay-a6e086e89d3b829757d25744026adb6561f3ee8d955c15e5b03f95f3edc22a98-merged.mount: Deactivated successfully.
Oct  1 09:22:37 np0005464214 podman[188057]: 2025-10-01 13:22:37.934188671 +0000 UTC m=+0.156567894 container health_status 583cde33fa111e3f81ff9a7adf51b7bee43cd123d54d60383b2d474b3b5066ad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20250923, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct  1 09:22:37 np0005464214 podman[187999]: 2025-10-01 13:22:37.952881068 +0000 UTC m=+1.653405314 container remove af969803ca5eaf7d48c0c22ab222d3565b9ac330c5411d2ffbaed4ed778d55f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_raman, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct  1 09:22:37 np0005464214 systemd[1]: libpod-conmon-af969803ca5eaf7d48c0c22ab222d3565b9ac330c5411d2ffbaed4ed778d55f7.scope: Deactivated successfully.
Oct  1 09:22:38 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v540: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:22:38 np0005464214 podman[188249]: 2025-10-01 13:22:38.514987367 +0000 UTC m=+0.036892199 container create 0d1f50f0c0b68d7fa2112d6a9ef06f980022bba822fddeaf54fc21cb10c508a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_shtern, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 09:22:38 np0005464214 systemd[1]: Started libpod-conmon-0d1f50f0c0b68d7fa2112d6a9ef06f980022bba822fddeaf54fc21cb10c508a5.scope.
Oct  1 09:22:38 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:22:38 np0005464214 podman[188249]: 2025-10-01 13:22:38.498234791 +0000 UTC m=+0.020139643 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 09:22:38 np0005464214 podman[188249]: 2025-10-01 13:22:38.610555966 +0000 UTC m=+0.132460818 container init 0d1f50f0c0b68d7fa2112d6a9ef06f980022bba822fddeaf54fc21cb10c508a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_shtern, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 09:22:38 np0005464214 podman[188249]: 2025-10-01 13:22:38.618133234 +0000 UTC m=+0.140038066 container start 0d1f50f0c0b68d7fa2112d6a9ef06f980022bba822fddeaf54fc21cb10c508a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_shtern, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Oct  1 09:22:38 np0005464214 practical_shtern[188275]: 167 167
Oct  1 09:22:38 np0005464214 systemd[1]: libpod-0d1f50f0c0b68d7fa2112d6a9ef06f980022bba822fddeaf54fc21cb10c508a5.scope: Deactivated successfully.
Oct  1 09:22:38 np0005464214 podman[188249]: 2025-10-01 13:22:38.628062865 +0000 UTC m=+0.149967727 container attach 0d1f50f0c0b68d7fa2112d6a9ef06f980022bba822fddeaf54fc21cb10c508a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_shtern, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 09:22:38 np0005464214 podman[188249]: 2025-10-01 13:22:38.62948439 +0000 UTC m=+0.151389222 container died 0d1f50f0c0b68d7fa2112d6a9ef06f980022bba822fddeaf54fc21cb10c508a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_shtern, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507)
Oct  1 09:22:38 np0005464214 systemd[1]: var-lib-containers-storage-overlay-0313f59403c167c1517e13fbb8a7252ea9b13b4717aa63064c46997d4c44cfe4-merged.mount: Deactivated successfully.
Oct  1 09:22:38 np0005464214 podman[188249]: 2025-10-01 13:22:38.858021141 +0000 UTC m=+0.379925983 container remove 0d1f50f0c0b68d7fa2112d6a9ef06f980022bba822fddeaf54fc21cb10c508a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_shtern, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 09:22:38 np0005464214 systemd[1]: libpod-conmon-0d1f50f0c0b68d7fa2112d6a9ef06f980022bba822fddeaf54fc21cb10c508a5.scope: Deactivated successfully.
Oct  1 09:22:39 np0005464214 podman[188353]: 2025-10-01 13:22:39.057050567 +0000 UTC m=+0.059147828 container create e48c1fbb2c0bb0f7ce2835fff919d57f575452e0427422ab910505466f5d5e19 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_turing, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 09:22:39 np0005464214 systemd[1]: Started libpod-conmon-e48c1fbb2c0bb0f7ce2835fff919d57f575452e0427422ab910505466f5d5e19.scope.
Oct  1 09:22:39 np0005464214 podman[188353]: 2025-10-01 13:22:39.019861789 +0000 UTC m=+0.021959080 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 09:22:39 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:22:39 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb564afd50c65d7e710cf26ad88ae9f89ec287edbec607466ad4c1125d7f96cd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 09:22:39 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb564afd50c65d7e710cf26ad88ae9f89ec287edbec607466ad4c1125d7f96cd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 09:22:39 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb564afd50c65d7e710cf26ad88ae9f89ec287edbec607466ad4c1125d7f96cd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 09:22:39 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb564afd50c65d7e710cf26ad88ae9f89ec287edbec607466ad4c1125d7f96cd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 09:22:39 np0005464214 podman[188353]: 2025-10-01 13:22:39.165269612 +0000 UTC m=+0.167366893 container init e48c1fbb2c0bb0f7ce2835fff919d57f575452e0427422ab910505466f5d5e19 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_turing, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 09:22:39 np0005464214 podman[188353]: 2025-10-01 13:22:39.175780543 +0000 UTC m=+0.177877804 container start e48c1fbb2c0bb0f7ce2835fff919d57f575452e0427422ab910505466f5d5e19 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_turing, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  1 09:22:39 np0005464214 podman[188353]: 2025-10-01 13:22:39.182862435 +0000 UTC m=+0.184959726 container attach e48c1fbb2c0bb0f7ce2835fff919d57f575452e0427422ab910505466f5d5e19 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_turing, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct  1 09:22:39 np0005464214 sleepy_turing[188375]: {
Oct  1 09:22:39 np0005464214 sleepy_turing[188375]:    "0": [
Oct  1 09:22:39 np0005464214 sleepy_turing[188375]:        {
Oct  1 09:22:39 np0005464214 sleepy_turing[188375]:            "devices": [
Oct  1 09:22:39 np0005464214 sleepy_turing[188375]:                "/dev/loop3"
Oct  1 09:22:39 np0005464214 sleepy_turing[188375]:            ],
Oct  1 09:22:39 np0005464214 sleepy_turing[188375]:            "lv_name": "ceph_lv0",
Oct  1 09:22:39 np0005464214 sleepy_turing[188375]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  1 09:22:39 np0005464214 sleepy_turing[188375]:            "lv_size": "21470642176",
Oct  1 09:22:39 np0005464214 sleepy_turing[188375]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 09:22:39 np0005464214 sleepy_turing[188375]:            "lv_uuid": "wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS",
Oct  1 09:22:39 np0005464214 sleepy_turing[188375]:            "name": "ceph_lv0",
Oct  1 09:22:39 np0005464214 sleepy_turing[188375]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  1 09:22:39 np0005464214 sleepy_turing[188375]:            "tags": {
Oct  1 09:22:39 np0005464214 sleepy_turing[188375]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  1 09:22:39 np0005464214 sleepy_turing[188375]:                "ceph.block_uuid": "wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS",
Oct  1 09:22:39 np0005464214 sleepy_turing[188375]:                "ceph.cephx_lockbox_secret": "",
Oct  1 09:22:39 np0005464214 sleepy_turing[188375]:                "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 09:22:39 np0005464214 sleepy_turing[188375]:                "ceph.cluster_name": "ceph",
Oct  1 09:22:39 np0005464214 sleepy_turing[188375]:                "ceph.crush_device_class": "",
Oct  1 09:22:39 np0005464214 sleepy_turing[188375]:                "ceph.encrypted": "0",
Oct  1 09:22:39 np0005464214 sleepy_turing[188375]:                "ceph.osd_fsid": "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982",
Oct  1 09:22:39 np0005464214 sleepy_turing[188375]:                "ceph.osd_id": "0",
Oct  1 09:22:39 np0005464214 sleepy_turing[188375]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 09:22:39 np0005464214 sleepy_turing[188375]:                "ceph.type": "block",
Oct  1 09:22:39 np0005464214 sleepy_turing[188375]:                "ceph.vdo": "0"
Oct  1 09:22:39 np0005464214 sleepy_turing[188375]:            },
Oct  1 09:22:39 np0005464214 sleepy_turing[188375]:            "type": "block",
Oct  1 09:22:39 np0005464214 sleepy_turing[188375]:            "vg_name": "ceph_vg0"
Oct  1 09:22:39 np0005464214 sleepy_turing[188375]:        }
Oct  1 09:22:39 np0005464214 sleepy_turing[188375]:    ],
Oct  1 09:22:39 np0005464214 sleepy_turing[188375]:    "1": [
Oct  1 09:22:39 np0005464214 sleepy_turing[188375]:        {
Oct  1 09:22:39 np0005464214 sleepy_turing[188375]:            "devices": [
Oct  1 09:22:39 np0005464214 sleepy_turing[188375]:                "/dev/loop4"
Oct  1 09:22:39 np0005464214 sleepy_turing[188375]:            ],
Oct  1 09:22:39 np0005464214 sleepy_turing[188375]:            "lv_name": "ceph_lv1",
Oct  1 09:22:39 np0005464214 sleepy_turing[188375]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  1 09:22:39 np0005464214 sleepy_turing[188375]:            "lv_size": "21470642176",
Oct  1 09:22:39 np0005464214 sleepy_turing[188375]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f5852bc7-e830-489a-b8a9-42dfbbe71426,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 09:22:39 np0005464214 sleepy_turing[188375]:            "lv_uuid": "BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY",
Oct  1 09:22:39 np0005464214 sleepy_turing[188375]:            "name": "ceph_lv1",
Oct  1 09:22:39 np0005464214 sleepy_turing[188375]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  1 09:22:39 np0005464214 sleepy_turing[188375]:            "tags": {
Oct  1 09:22:39 np0005464214 sleepy_turing[188375]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  1 09:22:39 np0005464214 sleepy_turing[188375]:                "ceph.block_uuid": "BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY",
Oct  1 09:22:39 np0005464214 sleepy_turing[188375]:                "ceph.cephx_lockbox_secret": "",
Oct  1 09:22:39 np0005464214 sleepy_turing[188375]:                "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 09:22:39 np0005464214 sleepy_turing[188375]:                "ceph.cluster_name": "ceph",
Oct  1 09:22:39 np0005464214 sleepy_turing[188375]:                "ceph.crush_device_class": "",
Oct  1 09:22:39 np0005464214 sleepy_turing[188375]:                "ceph.encrypted": "0",
Oct  1 09:22:39 np0005464214 sleepy_turing[188375]:                "ceph.osd_fsid": "f5852bc7-e830-489a-b8a9-42dfbbe71426",
Oct  1 09:22:39 np0005464214 sleepy_turing[188375]:                "ceph.osd_id": "1",
Oct  1 09:22:39 np0005464214 sleepy_turing[188375]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 09:22:39 np0005464214 sleepy_turing[188375]:                "ceph.type": "block",
Oct  1 09:22:39 np0005464214 sleepy_turing[188375]:                "ceph.vdo": "0"
Oct  1 09:22:39 np0005464214 sleepy_turing[188375]:            },
Oct  1 09:22:39 np0005464214 sleepy_turing[188375]:            "type": "block",
Oct  1 09:22:39 np0005464214 sleepy_turing[188375]:            "vg_name": "ceph_vg1"
Oct  1 09:22:39 np0005464214 sleepy_turing[188375]:        }
Oct  1 09:22:39 np0005464214 sleepy_turing[188375]:    ],
Oct  1 09:22:39 np0005464214 sleepy_turing[188375]:    "2": [
Oct  1 09:22:39 np0005464214 sleepy_turing[188375]:        {
Oct  1 09:22:39 np0005464214 sleepy_turing[188375]:            "devices": [
Oct  1 09:22:39 np0005464214 sleepy_turing[188375]:                "/dev/loop5"
Oct  1 09:22:39 np0005464214 sleepy_turing[188375]:            ],
Oct  1 09:22:39 np0005464214 sleepy_turing[188375]:            "lv_name": "ceph_lv2",
Oct  1 09:22:39 np0005464214 sleepy_turing[188375]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  1 09:22:39 np0005464214 sleepy_turing[188375]:            "lv_size": "21470642176",
Oct  1 09:22:39 np0005464214 sleepy_turing[188375]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c4c937e2-a8a8-47c3-af37-fdedb6fff1f9,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 09:22:39 np0005464214 sleepy_turing[188375]:            "lv_uuid": "mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK",
Oct  1 09:22:39 np0005464214 sleepy_turing[188375]:            "name": "ceph_lv2",
Oct  1 09:22:39 np0005464214 sleepy_turing[188375]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  1 09:22:39 np0005464214 sleepy_turing[188375]:            "tags": {
Oct  1 09:22:39 np0005464214 sleepy_turing[188375]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  1 09:22:39 np0005464214 sleepy_turing[188375]:                "ceph.block_uuid": "mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK",
Oct  1 09:22:39 np0005464214 sleepy_turing[188375]:                "ceph.cephx_lockbox_secret": "",
Oct  1 09:22:39 np0005464214 sleepy_turing[188375]:                "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 09:22:39 np0005464214 sleepy_turing[188375]:                "ceph.cluster_name": "ceph",
Oct  1 09:22:39 np0005464214 sleepy_turing[188375]:                "ceph.crush_device_class": "",
Oct  1 09:22:39 np0005464214 sleepy_turing[188375]:                "ceph.encrypted": "0",
Oct  1 09:22:39 np0005464214 sleepy_turing[188375]:                "ceph.osd_fsid": "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9",
Oct  1 09:22:39 np0005464214 sleepy_turing[188375]:                "ceph.osd_id": "2",
Oct  1 09:22:39 np0005464214 sleepy_turing[188375]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 09:22:39 np0005464214 sleepy_turing[188375]:                "ceph.type": "block",
Oct  1 09:22:39 np0005464214 sleepy_turing[188375]:                "ceph.vdo": "0"
Oct  1 09:22:39 np0005464214 sleepy_turing[188375]:            },
Oct  1 09:22:39 np0005464214 sleepy_turing[188375]:            "type": "block",
Oct  1 09:22:39 np0005464214 sleepy_turing[188375]:            "vg_name": "ceph_vg2"
Oct  1 09:22:39 np0005464214 sleepy_turing[188375]:        }
Oct  1 09:22:39 np0005464214 sleepy_turing[188375]:    ]
Oct  1 09:22:39 np0005464214 sleepy_turing[188375]: }
Oct  1 09:22:39 np0005464214 systemd[1]: libpod-e48c1fbb2c0bb0f7ce2835fff919d57f575452e0427422ab910505466f5d5e19.scope: Deactivated successfully.
Oct  1 09:22:39 np0005464214 podman[188353]: 2025-10-01 13:22:39.948921874 +0000 UTC m=+0.951019135 container died e48c1fbb2c0bb0f7ce2835fff919d57f575452e0427422ab910505466f5d5e19 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_turing, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 09:22:39 np0005464214 systemd[1]: var-lib-containers-storage-overlay-cb564afd50c65d7e710cf26ad88ae9f89ec287edbec607466ad4c1125d7f96cd-merged.mount: Deactivated successfully.
Oct  1 09:22:40 np0005464214 podman[188353]: 2025-10-01 13:22:40.004316462 +0000 UTC m=+1.006413723 container remove e48c1fbb2c0bb0f7ce2835fff919d57f575452e0427422ab910505466f5d5e19 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_turing, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Oct  1 09:22:40 np0005464214 systemd[1]: libpod-conmon-e48c1fbb2c0bb0f7ce2835fff919d57f575452e0427422ab910505466f5d5e19.scope: Deactivated successfully.
Oct  1 09:22:40 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v541: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:22:40 np0005464214 podman[188637]: 2025-10-01 13:22:40.58645858 +0000 UTC m=+0.047836213 container create 1c13b1f2a757c910398839d6bdbb72fb1fa3cac55af02f5b044580ed4f0dc8d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_keldysh, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct  1 09:22:40 np0005464214 systemd[1]: Started libpod-conmon-1c13b1f2a757c910398839d6bdbb72fb1fa3cac55af02f5b044580ed4f0dc8d3.scope.
Oct  1 09:22:40 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:22:40 np0005464214 podman[188637]: 2025-10-01 13:22:40.648788216 +0000 UTC m=+0.110165879 container init 1c13b1f2a757c910398839d6bdbb72fb1fa3cac55af02f5b044580ed4f0dc8d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_keldysh, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True)
Oct  1 09:22:40 np0005464214 podman[188637]: 2025-10-01 13:22:40.654882306 +0000 UTC m=+0.116259949 container start 1c13b1f2a757c910398839d6bdbb72fb1fa3cac55af02f5b044580ed4f0dc8d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_keldysh, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct  1 09:22:40 np0005464214 podman[188637]: 2025-10-01 13:22:40.561876458 +0000 UTC m=+0.023254111 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 09:22:40 np0005464214 magical_keldysh[188654]: 167 167
Oct  1 09:22:40 np0005464214 systemd[1]: libpod-1c13b1f2a757c910398839d6bdbb72fb1fa3cac55af02f5b044580ed4f0dc8d3.scope: Deactivated successfully.
Oct  1 09:22:40 np0005464214 podman[188637]: 2025-10-01 13:22:40.659193082 +0000 UTC m=+0.120570715 container attach 1c13b1f2a757c910398839d6bdbb72fb1fa3cac55af02f5b044580ed4f0dc8d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_keldysh, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct  1 09:22:40 np0005464214 podman[188637]: 2025-10-01 13:22:40.659811772 +0000 UTC m=+0.121189415 container died 1c13b1f2a757c910398839d6bdbb72fb1fa3cac55af02f5b044580ed4f0dc8d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_keldysh, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct  1 09:22:40 np0005464214 systemd[1]: var-lib-containers-storage-overlay-5837606e2ff6942b9410683772c8740b21f9d917a97a3e7cec0d83db01331734-merged.mount: Deactivated successfully.
Oct  1 09:22:40 np0005464214 podman[188637]: 2025-10-01 13:22:40.695643186 +0000 UTC m=+0.157020819 container remove 1c13b1f2a757c910398839d6bdbb72fb1fa3cac55af02f5b044580ed4f0dc8d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_keldysh, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507)
Oct  1 09:22:40 np0005464214 systemd[1]: libpod-conmon-1c13b1f2a757c910398839d6bdbb72fb1fa3cac55af02f5b044580ed4f0dc8d3.scope: Deactivated successfully.
Oct  1 09:22:40 np0005464214 podman[188678]: 2025-10-01 13:22:40.84556533 +0000 UTC m=+0.037536768 container create c6a870b7d9c6e0813ef86454e4a41792b38fc6a6b9511d17641f22b48563ce3e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_austin, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 09:22:40 np0005464214 systemd[1]: Started libpod-conmon-c6a870b7d9c6e0813ef86454e4a41792b38fc6a6b9511d17641f22b48563ce3e.scope.
Oct  1 09:22:40 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:22:40 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68dedc8146cb88ca01fb3395fa2d38ad77d2e804da646906172961cbf7c9a34e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 09:22:40 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68dedc8146cb88ca01fb3395fa2d38ad77d2e804da646906172961cbf7c9a34e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 09:22:40 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68dedc8146cb88ca01fb3395fa2d38ad77d2e804da646906172961cbf7c9a34e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 09:22:40 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68dedc8146cb88ca01fb3395fa2d38ad77d2e804da646906172961cbf7c9a34e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 09:22:40 np0005464214 podman[188678]: 2025-10-01 13:22:40.82769862 +0000 UTC m=+0.019670088 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 09:22:40 np0005464214 podman[188678]: 2025-10-01 13:22:40.92776302 +0000 UTC m=+0.119734478 container init c6a870b7d9c6e0813ef86454e4a41792b38fc6a6b9511d17641f22b48563ce3e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_austin, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct  1 09:22:40 np0005464214 podman[188678]: 2025-10-01 13:22:40.934898393 +0000 UTC m=+0.126869881 container start c6a870b7d9c6e0813ef86454e4a41792b38fc6a6b9511d17641f22b48563ce3e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_austin, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 09:22:40 np0005464214 podman[188678]: 2025-10-01 13:22:40.938907869 +0000 UTC m=+0.130879377 container attach c6a870b7d9c6e0813ef86454e4a41792b38fc6a6b9511d17641f22b48563ce3e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_austin, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 09:22:41 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:22:41 np0005464214 boring_austin[188694]: {
Oct  1 09:22:41 np0005464214 boring_austin[188694]:    "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982": {
Oct  1 09:22:41 np0005464214 boring_austin[188694]:        "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 09:22:41 np0005464214 boring_austin[188694]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  1 09:22:41 np0005464214 boring_austin[188694]:        "osd_id": 0,
Oct  1 09:22:41 np0005464214 boring_austin[188694]:        "osd_uuid": "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982",
Oct  1 09:22:41 np0005464214 boring_austin[188694]:        "type": "bluestore"
Oct  1 09:22:41 np0005464214 boring_austin[188694]:    },
Oct  1 09:22:41 np0005464214 boring_austin[188694]:    "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9": {
Oct  1 09:22:41 np0005464214 boring_austin[188694]:        "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 09:22:41 np0005464214 boring_austin[188694]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  1 09:22:41 np0005464214 boring_austin[188694]:        "osd_id": 2,
Oct  1 09:22:41 np0005464214 boring_austin[188694]:        "osd_uuid": "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9",
Oct  1 09:22:41 np0005464214 boring_austin[188694]:        "type": "bluestore"
Oct  1 09:22:41 np0005464214 boring_austin[188694]:    },
Oct  1 09:22:41 np0005464214 boring_austin[188694]:    "f5852bc7-e830-489a-b8a9-42dfbbe71426": {
Oct  1 09:22:41 np0005464214 boring_austin[188694]:        "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 09:22:41 np0005464214 boring_austin[188694]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  1 09:22:41 np0005464214 boring_austin[188694]:        "osd_id": 1,
Oct  1 09:22:41 np0005464214 boring_austin[188694]:        "osd_uuid": "f5852bc7-e830-489a-b8a9-42dfbbe71426",
Oct  1 09:22:41 np0005464214 boring_austin[188694]:        "type": "bluestore"
Oct  1 09:22:41 np0005464214 boring_austin[188694]:    }
Oct  1 09:22:41 np0005464214 boring_austin[188694]: }
Oct  1 09:22:41 np0005464214 systemd[1]: libpod-c6a870b7d9c6e0813ef86454e4a41792b38fc6a6b9511d17641f22b48563ce3e.scope: Deactivated successfully.
Oct  1 09:22:41 np0005464214 podman[188678]: 2025-10-01 13:22:41.885435591 +0000 UTC m=+1.077407039 container died c6a870b7d9c6e0813ef86454e4a41792b38fc6a6b9511d17641f22b48563ce3e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_austin, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 09:22:41 np0005464214 systemd[1]: var-lib-containers-storage-overlay-68dedc8146cb88ca01fb3395fa2d38ad77d2e804da646906172961cbf7c9a34e-merged.mount: Deactivated successfully.
Oct  1 09:22:41 np0005464214 podman[188678]: 2025-10-01 13:22:41.940169438 +0000 UTC m=+1.132140886 container remove c6a870b7d9c6e0813ef86454e4a41792b38fc6a6b9511d17641f22b48563ce3e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_austin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct  1 09:22:41 np0005464214 systemd[1]: libpod-conmon-c6a870b7d9c6e0813ef86454e4a41792b38fc6a6b9511d17641f22b48563ce3e.scope: Deactivated successfully.
Oct  1 09:22:41 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  1 09:22:42 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:22:42 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  1 09:22:42 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v542: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:22:42 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:22:42 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev 41f14e0c-3a85-46d2-9384-1518184ebd78 does not exist
Oct  1 09:22:42 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev c5db93c9-22a9-4f54-af58-6b55f45d5ad0 does not exist
Oct  1 09:22:43 np0005464214 systemd[1]: Stopping OpenSSH server daemon...
Oct  1 09:22:43 np0005464214 systemd[1]: sshd.service: Deactivated successfully.
Oct  1 09:22:43 np0005464214 systemd[1]: Stopped OpenSSH server daemon.
Oct  1 09:22:43 np0005464214 systemd[1]: sshd.service: Consumed 12.382s CPU time, read 0B from disk, written 316.0K to disk.
Oct  1 09:22:43 np0005464214 systemd[1]: Stopped target sshd-keygen.target.
Oct  1 09:22:43 np0005464214 systemd[1]: Stopping sshd-keygen.target...
Oct  1 09:22:43 np0005464214 systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Oct  1 09:22:43 np0005464214 systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Oct  1 09:22:43 np0005464214 systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Oct  1 09:22:43 np0005464214 systemd[1]: Reached target sshd-keygen.target.
Oct  1 09:22:43 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:22:43 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:22:43 np0005464214 systemd[1]: Starting OpenSSH server daemon...
Oct  1 09:22:43 np0005464214 systemd[1]: Started OpenSSH server daemon.
Oct  1 09:22:43 np0005464214 podman[189412]: 2025-10-01 13:22:43.139541095 +0000 UTC m=+0.100968999 container health_status dfa509645741d50db256c392f2c7e50ef8a1653f296eb853aefbe3d2ba4746c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20250923, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=36bccb96575468ec919301205d8daa2c, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Oct  1 09:22:44 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v543: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:22:46 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:22:46 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v544: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:22:46 np0005464214 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct  1 09:22:46 np0005464214 systemd[1]: Starting man-db-cache-update.service...
Oct  1 09:22:46 np0005464214 systemd[1]: Reloading.
Oct  1 09:22:46 np0005464214 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  1 09:22:46 np0005464214 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  1 09:22:46 np0005464214 systemd[1]: Queuing reload/restart jobs for marked units…
Oct  1 09:22:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] Optimize plan auto_2025-10-01_13:22:47
Oct  1 09:22:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  1 09:22:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] do_upmap
Oct  1 09:22:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] pools ['cephfs.cephfs.data', 'default.rgw.meta', '.rgw.root', 'default.rgw.log', 'images', 'vms', 'default.rgw.control', 'cephfs.cephfs.meta', '.mgr', 'volumes', 'backups']
Oct  1 09:22:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] prepared 0/10 changes
Oct  1 09:22:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 09:22:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 09:22:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 09:22:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 09:22:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 09:22:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 09:22:47 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  1 09:22:47 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  1 09:22:47 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  1 09:22:47 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  1 09:22:47 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  1 09:22:47 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  1 09:22:47 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  1 09:22:47 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  1 09:22:47 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  1 09:22:47 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  1 09:22:48 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v545: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:22:49 np0005464214 systemd[1]: Starting PackageKit Daemon...
Oct  1 09:22:49 np0005464214 systemd[1]: Started PackageKit Daemon.
Oct  1 09:22:50 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v546: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:22:51 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:22:51 np0005464214 python3.9[193705]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct  1 09:22:51 np0005464214 systemd[1]: Reloading.
Oct  1 09:22:51 np0005464214 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  1 09:22:51 np0005464214 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  1 09:22:52 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v547: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:22:52 np0005464214 python3.9[194883]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct  1 09:22:52 np0005464214 systemd[1]: Reloading.
Oct  1 09:22:52 np0005464214 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  1 09:22:52 np0005464214 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  1 09:22:53 np0005464214 python3.9[195873]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tls.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct  1 09:22:54 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v548: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:22:54 np0005464214 systemd[1]: Reloading.
Oct  1 09:22:54 np0005464214 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  1 09:22:54 np0005464214 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  1 09:22:55 np0005464214 python3.9[197768]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=virtproxyd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct  1 09:22:56 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:22:56 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v549: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:22:56 np0005464214 systemd[1]: Reloading.
Oct  1 09:22:57 np0005464214 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  1 09:22:57 np0005464214 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  1 09:22:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] _maybe_adjust
Oct  1 09:22:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:22:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  1 09:22:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:22:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 09:22:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:22:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 09:22:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:22:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 09:22:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:22:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 09:22:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:22:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct  1 09:22:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:22:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 09:22:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:22:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  1 09:22:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:22:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  1 09:22:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:22:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 09:22:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:22:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  1 09:22:57 np0005464214 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Oct  1 09:22:57 np0005464214 systemd[1]: Finished man-db-cache-update.service.
Oct  1 09:22:57 np0005464214 systemd[1]: man-db-cache-update.service: Consumed 12.712s CPU time.
Oct  1 09:22:57 np0005464214 systemd[1]: run-r7c7aca91b9df4ef2a4709283f7a78074.service: Deactivated successfully.
Oct  1 09:22:58 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v550: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:22:58 np0005464214 python3.9[198848]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct  1 09:22:58 np0005464214 systemd[1]: Reloading.
Oct  1 09:22:58 np0005464214 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  1 09:22:58 np0005464214 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  1 09:22:59 np0005464214 python3.9[199040]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct  1 09:22:59 np0005464214 systemd[1]: Reloading.
Oct  1 09:22:59 np0005464214 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  1 09:22:59 np0005464214 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  1 09:23:00 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v551: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:23:00 np0005464214 python3.9[199230]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct  1 09:23:00 np0005464214 systemd[1]: Reloading.
Oct  1 09:23:00 np0005464214 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  1 09:23:00 np0005464214 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  1 09:23:01 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:23:01 np0005464214 python3.9[199420]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct  1 09:23:02 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v552: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:23:02 np0005464214 python3.9[199575]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct  1 09:23:03 np0005464214 systemd[1]: Reloading.
Oct  1 09:23:03 np0005464214 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  1 09:23:03 np0005464214 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  1 09:23:04 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v553: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:23:04 np0005464214 python3.9[199765]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-tls.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct  1 09:23:04 np0005464214 systemd[1]: Reloading.
Oct  1 09:23:04 np0005464214 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  1 09:23:04 np0005464214 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  1 09:23:04 np0005464214 systemd[1]: Listening on libvirt proxy daemon socket.
Oct  1 09:23:04 np0005464214 systemd[1]: Listening on libvirt proxy daemon TLS IP socket.
Oct  1 09:23:05 np0005464214 python3.9[199958]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct  1 09:23:06 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:23:06 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v554: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:23:06 np0005464214 python3.9[200113]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct  1 09:23:07 np0005464214 python3.9[200268]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct  1 09:23:08 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v555: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:23:08 np0005464214 podman[200395]: 2025-10-01 13:23:08.53082434 +0000 UTC m=+0.139027813 container health_status 583cde33fa111e3f81ff9a7adf51b7bee43cd123d54d60383b2d474b3b5066ad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20250923, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct  1 09:23:08 np0005464214 python3.9[200443]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct  1 09:23:09 np0005464214 python3.9[200606]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct  1 09:23:10 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v556: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:23:10 np0005464214 python3.9[200761]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct  1 09:23:11 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:23:12 np0005464214 python3.9[200916]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct  1 09:23:12 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v557: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:23:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:23:12.287 161890 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 09:23:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:23:12.289 161890 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 09:23:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:23:12.289 161890 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 09:23:13 np0005464214 python3.9[201071]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct  1 09:23:13 np0005464214 podman[201073]: 2025-10-01 13:23:13.530634204 +0000 UTC m=+0.080941571 container health_status dfa509645741d50db256c392f2c7e50ef8a1653f296eb853aefbe3d2ba4746c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, io.buildah.version=1.41.3, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250923, org.label-schema.vendor=CentOS)
Oct  1 09:23:14 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v558: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:23:15 np0005464214 python3.9[201245]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct  1 09:23:16 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:23:16 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v559: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:23:16 np0005464214 python3.9[201400]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct  1 09:23:16 np0005464214 python3.9[201555]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct  1 09:23:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 09:23:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 09:23:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 09:23:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 09:23:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 09:23:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 09:23:17 np0005464214 python3.9[201711]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct  1 09:23:18 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v560: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:23:18 np0005464214 python3.9[201866]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct  1 09:23:20 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v561: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:23:20 np0005464214 python3.9[202021]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct  1 09:23:21 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:23:21 np0005464214 python3.9[202176]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/etc/tmpfiles.d/ setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct  1 09:23:22 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v562: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:23:22 np0005464214 python3.9[202328]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct  1 09:23:22 np0005464214 python3.9[202480]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  1 09:23:23 np0005464214 python3.9[202632]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt/private setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  1 09:23:24 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v563: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:23:24 np0005464214 python3.9[202784]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/CA setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  1 09:23:25 np0005464214 python3.9[202936]: ansible-ansible.builtin.file Invoked with group=qemu owner=root path=/etc/pki/qemu setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct  1 09:23:26 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:23:26 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v564: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:23:26 np0005464214 python3.9[203088]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtlogd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 09:23:27 np0005464214 python3.9[203213]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtlogd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1759325005.7306333-554-194252961523567/.source.conf follow=False _original_basename=virtlogd.conf checksum=d7a72ae92c2c205983b029473e05a6aa4c58ec24 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 09:23:28 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v565: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:23:28 np0005464214 python3.9[203365]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtnodedevd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 09:23:29 np0005464214 python3.9[203490]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtnodedevd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1759325007.7551806-554-56120987708746/.source.conf follow=False _original_basename=virtnodedevd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 09:23:30 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v566: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:23:30 np0005464214 python3.9[203642]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtproxyd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 09:23:30 np0005464214 python3.9[203767]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtproxyd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1759325009.527832-554-24432632567439/.source.conf follow=False _original_basename=virtproxyd.conf checksum=28bc484b7c9988e03de49d4fcc0a088ea975f716 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 09:23:31 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:23:31 np0005464214 python3.9[203919]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtqemud.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 09:23:32 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v567: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:23:32 np0005464214 python3.9[204044]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtqemud.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1759325011.2296195-554-25795081733247/.source.conf follow=False _original_basename=virtqemud.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 09:23:33 np0005464214 python3.9[204196]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/qemu.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 09:23:34 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v568: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:23:34 np0005464214 python3.9[204321]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/qemu.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1759325012.9134412-554-116679003871970/.source.conf follow=False _original_basename=qemu.conf.j2 checksum=c44de21af13c90603565570f09ff60c6a41ed8df backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 09:23:35 np0005464214 python3.9[204475]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtsecretd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 09:23:36 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:23:36 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v569: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:23:36 np0005464214 python3.9[204600]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtsecretd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1759325014.8816445-554-165862113291073/.source.conf follow=False _original_basename=virtsecretd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 09:23:37 np0005464214 python3.9[204754]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/auth.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 09:23:38 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v570: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:23:38 np0005464214 python3.9[204877]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/auth.conf group=libvirt mode=0600 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1759325016.6788673-554-159086594126385/.source.conf follow=False _original_basename=auth.conf checksum=a94cd818c374cec2c8425b70d2e0e2f41b743ae4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 09:23:38 np0005464214 podman[205029]: 2025-10-01 13:23:38.825829617 +0000 UTC m=+0.170675929 container health_status 583cde33fa111e3f81ff9a7adf51b7bee43cd123d54d60383b2d474b3b5066ad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=36bccb96575468ec919301205d8daa2c, org.label-schema.build-date=20250923)
Oct  1 09:23:38 np0005464214 python3.9[205030]: ansible-ansible.legacy.stat Invoked with path=/etc/sasl2/libvirt.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 09:23:39 np0005464214 python3.9[205181]: ansible-ansible.legacy.copy Invoked with dest=/etc/sasl2/libvirt.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1759325018.2714937-554-22007393710970/.source.conf follow=False _original_basename=sasl_libvirt.conf checksum=652e4d404bf79253d06956b8e9847c9364979d4a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 09:23:40 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v571: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:23:40 np0005464214 python3.9[205333]: ansible-ansible.legacy.command Invoked with cmd=saslpasswd2 -f /etc/libvirt/passwd.db -p -a libvirt -u openstack migration stdin=12345678 _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None
Oct  1 09:23:41 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:23:41 np0005464214 python3.9[205488]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 09:23:42 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v572: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:23:42 np0005464214 python3.9[205650]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 09:23:43 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  1 09:23:43 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  1 09:23:43 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  1 09:23:43 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  1 09:23:43 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  1 09:23:43 np0005464214 python3.9[205925]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 09:23:43 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:23:43 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev 95d593c5-19b8-471c-88e1-8870bf7d6cb1 does not exist
Oct  1 09:23:43 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev a177e8bc-53b3-4e42-8604-cb54771e19e4 does not exist
Oct  1 09:23:43 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev 032432ac-0e98-4f97-b989-bdc9b6ca3746 does not exist
Oct  1 09:23:43 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  1 09:23:43 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  1 09:23:43 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  1 09:23:43 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  1 09:23:43 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  1 09:23:43 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  1 09:23:43 np0005464214 podman[206074]: 2025-10-01 13:23:43.707886127 +0000 UTC m=+0.074690457 container health_status dfa509645741d50db256c392f2c7e50ef8a1653f296eb853aefbe3d2ba4746c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250923, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible)
Oct  1 09:23:43 np0005464214 python3.9[206193]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 09:23:44 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v573: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:23:44 np0005464214 podman[206250]: 2025-10-01 13:23:44.107962019 +0000 UTC m=+0.026115638 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 09:23:44 np0005464214 podman[206250]: 2025-10-01 13:23:44.212120207 +0000 UTC m=+0.130273816 container create 6e0b22655ce674e8e675f80f497c6041a09381c36ad66fd20dc4c80a1b953d87 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_mendeleev, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Oct  1 09:23:44 np0005464214 systemd[1]: Started libpod-conmon-6e0b22655ce674e8e675f80f497c6041a09381c36ad66fd20dc4c80a1b953d87.scope.
Oct  1 09:23:44 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:23:44 np0005464214 podman[206250]: 2025-10-01 13:23:44.393302353 +0000 UTC m=+0.311456042 container init 6e0b22655ce674e8e675f80f497c6041a09381c36ad66fd20dc4c80a1b953d87 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_mendeleev, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct  1 09:23:44 np0005464214 podman[206250]: 2025-10-01 13:23:44.404280216 +0000 UTC m=+0.322433825 container start 6e0b22655ce674e8e675f80f497c6041a09381c36ad66fd20dc4c80a1b953d87 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_mendeleev, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 09:23:44 np0005464214 awesome_mendeleev[206353]: 167 167
Oct  1 09:23:44 np0005464214 systemd[1]: libpod-6e0b22655ce674e8e675f80f497c6041a09381c36ad66fd20dc4c80a1b953d87.scope: Deactivated successfully.
Oct  1 09:23:44 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  1 09:23:44 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:23:44 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  1 09:23:44 np0005464214 podman[206250]: 2025-10-01 13:23:44.422269159 +0000 UTC m=+0.340422778 container attach 6e0b22655ce674e8e675f80f497c6041a09381c36ad66fd20dc4c80a1b953d87 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_mendeleev, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Oct  1 09:23:44 np0005464214 podman[206250]: 2025-10-01 13:23:44.423323252 +0000 UTC m=+0.341476861 container died 6e0b22655ce674e8e675f80f497c6041a09381c36ad66fd20dc4c80a1b953d87 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_mendeleev, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 09:23:44 np0005464214 systemd[1]: var-lib-containers-storage-overlay-e172d2f2e749272905e52d7ecfa992f91dbc03674015789f55b57273d6136daa-merged.mount: Deactivated successfully.
Oct  1 09:23:44 np0005464214 python3.9[206422]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 09:23:44 np0005464214 podman[206250]: 2025-10-01 13:23:44.801085516 +0000 UTC m=+0.719239155 container remove 6e0b22655ce674e8e675f80f497c6041a09381c36ad66fd20dc4c80a1b953d87 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_mendeleev, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct  1 09:23:44 np0005464214 systemd[1]: libpod-conmon-6e0b22655ce674e8e675f80f497c6041a09381c36ad66fd20dc4c80a1b953d87.scope: Deactivated successfully.
Oct  1 09:23:44 np0005464214 podman[206507]: 2025-10-01 13:23:44.995721314 +0000 UTC m=+0.065567792 container create c8f1fe582d123af2faaea4a86407b89941af3a29e6ff038768e909da5229cfb4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_wilbur, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 09:23:45 np0005464214 podman[206507]: 2025-10-01 13:23:44.953087611 +0000 UTC m=+0.022934089 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 09:23:45 np0005464214 systemd[1]: Started libpod-conmon-c8f1fe582d123af2faaea4a86407b89941af3a29e6ff038768e909da5229cfb4.scope.
Oct  1 09:23:45 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:23:45 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c9896fefcd30a49f4623851fd4a9c763522a4cea61cbdde26933c205e5e78b6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 09:23:45 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c9896fefcd30a49f4623851fd4a9c763522a4cea61cbdde26933c205e5e78b6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 09:23:45 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c9896fefcd30a49f4623851fd4a9c763522a4cea61cbdde26933c205e5e78b6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 09:23:45 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c9896fefcd30a49f4623851fd4a9c763522a4cea61cbdde26933c205e5e78b6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 09:23:45 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c9896fefcd30a49f4623851fd4a9c763522a4cea61cbdde26933c205e5e78b6/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  1 09:23:45 np0005464214 podman[206507]: 2025-10-01 13:23:45.135222037 +0000 UTC m=+0.205068535 container init c8f1fe582d123af2faaea4a86407b89941af3a29e6ff038768e909da5229cfb4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_wilbur, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct  1 09:23:45 np0005464214 podman[206507]: 2025-10-01 13:23:45.142818434 +0000 UTC m=+0.212664892 container start c8f1fe582d123af2faaea4a86407b89941af3a29e6ff038768e909da5229cfb4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_wilbur, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 09:23:45 np0005464214 podman[206507]: 2025-10-01 13:23:45.167088844 +0000 UTC m=+0.236935322 container attach c8f1fe582d123af2faaea4a86407b89941af3a29e6ff038768e909da5229cfb4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_wilbur, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Oct  1 09:23:45 np0005464214 python3.9[206604]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 09:23:46 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v574: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:23:46 np0005464214 python3.9[206757]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 09:23:46 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:23:46 np0005464214 dazzling_wilbur[206571]: --> passed data devices: 0 physical, 3 LVM
Oct  1 09:23:46 np0005464214 dazzling_wilbur[206571]: --> relative data size: 1.0
Oct  1 09:23:46 np0005464214 dazzling_wilbur[206571]: --> All data devices are unavailable
Oct  1 09:23:46 np0005464214 systemd[1]: libpod-c8f1fe582d123af2faaea4a86407b89941af3a29e6ff038768e909da5229cfb4.scope: Deactivated successfully.
Oct  1 09:23:46 np0005464214 systemd[1]: libpod-c8f1fe582d123af2faaea4a86407b89941af3a29e6ff038768e909da5229cfb4.scope: Consumed 1.150s CPU time.
Oct  1 09:23:46 np0005464214 podman[206507]: 2025-10-01 13:23:46.38163941 +0000 UTC m=+1.451485908 container died c8f1fe582d123af2faaea4a86407b89941af3a29e6ff038768e909da5229cfb4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_wilbur, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 09:23:46 np0005464214 systemd[1]: var-lib-containers-storage-overlay-4c9896fefcd30a49f4623851fd4a9c763522a4cea61cbdde26933c205e5e78b6-merged.mount: Deactivated successfully.
Oct  1 09:23:46 np0005464214 podman[206507]: 2025-10-01 13:23:46.803644928 +0000 UTC m=+1.873491386 container remove c8f1fe582d123af2faaea4a86407b89941af3a29e6ff038768e909da5229cfb4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_wilbur, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 09:23:46 np0005464214 systemd[1]: libpod-conmon-c8f1fe582d123af2faaea4a86407b89941af3a29e6ff038768e909da5229cfb4.scope: Deactivated successfully.
Oct  1 09:23:46 np0005464214 python3.9[206946]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 09:23:47 np0005464214 python3.9[207206]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 09:23:47 np0005464214 podman[207238]: 2025-10-01 13:23:47.503894499 +0000 UTC m=+0.041028664 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 09:23:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] Optimize plan auto_2025-10-01_13:23:47
Oct  1 09:23:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  1 09:23:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] do_upmap
Oct  1 09:23:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] pools ['default.rgw.log', 'images', '.mgr', 'default.rgw.meta', 'cephfs.cephfs.data', '.rgw.root', 'default.rgw.control', 'backups', 'volumes', 'vms', 'cephfs.cephfs.meta']
Oct  1 09:23:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] prepared 0/10 changes
Oct  1 09:23:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 09:23:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 09:23:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 09:23:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 09:23:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 09:23:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 09:23:47 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  1 09:23:47 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  1 09:23:47 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  1 09:23:47 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  1 09:23:47 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  1 09:23:47 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  1 09:23:47 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  1 09:23:47 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  1 09:23:47 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  1 09:23:47 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  1 09:23:47 np0005464214 podman[207238]: 2025-10-01 13:23:47.90198951 +0000 UTC m=+0.439123675 container create e7b7b30de1065939142b78d01f4321988a5bb5f9b0807507a787f307c6f41728 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_volhard, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Oct  1 09:23:48 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v575: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:23:48 np0005464214 systemd[1]: Started libpod-conmon-e7b7b30de1065939142b78d01f4321988a5bb5f9b0807507a787f307c6f41728.scope.
Oct  1 09:23:48 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:23:48 np0005464214 podman[207238]: 2025-10-01 13:23:48.340100151 +0000 UTC m=+0.877234346 container init e7b7b30de1065939142b78d01f4321988a5bb5f9b0807507a787f307c6f41728 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_volhard, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 09:23:48 np0005464214 podman[207238]: 2025-10-01 13:23:48.354010586 +0000 UTC m=+0.891144741 container start e7b7b30de1065939142b78d01f4321988a5bb5f9b0807507a787f307c6f41728 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_volhard, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 09:23:48 np0005464214 awesome_volhard[207407]: 167 167
Oct  1 09:23:48 np0005464214 systemd[1]: libpod-e7b7b30de1065939142b78d01f4321988a5bb5f9b0807507a787f307c6f41728.scope: Deactivated successfully.
Oct  1 09:23:48 np0005464214 python3.9[207404]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 09:23:48 np0005464214 podman[207238]: 2025-10-01 13:23:48.505441813 +0000 UTC m=+1.042575988 container attach e7b7b30de1065939142b78d01f4321988a5bb5f9b0807507a787f307c6f41728 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_volhard, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 09:23:48 np0005464214 podman[207238]: 2025-10-01 13:23:48.506063692 +0000 UTC m=+1.043197867 container died e7b7b30de1065939142b78d01f4321988a5bb5f9b0807507a787f307c6f41728 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_volhard, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Oct  1 09:23:48 np0005464214 systemd[1]: var-lib-containers-storage-overlay-02c434fbeb79d0abbc72a980dd16a3a4f41c97180dda940963650112b0ed24d7-merged.mount: Deactivated successfully.
Oct  1 09:23:49 np0005464214 python3.9[207578]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 09:23:49 np0005464214 podman[207238]: 2025-10-01 13:23:49.481979854 +0000 UTC m=+2.019114059 container remove e7b7b30de1065939142b78d01f4321988a5bb5f9b0807507a787f307c6f41728 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_volhard, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct  1 09:23:49 np0005464214 systemd[1]: libpod-conmon-e7b7b30de1065939142b78d01f4321988a5bb5f9b0807507a787f307c6f41728.scope: Deactivated successfully.
Oct  1 09:23:49 np0005464214 podman[207676]: 2025-10-01 13:23:49.632321337 +0000 UTC m=+0.025501939 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 09:23:49 np0005464214 podman[207676]: 2025-10-01 13:23:49.796197322 +0000 UTC m=+0.189377914 container create 9b7fc291ae588cdb7c7659d451b9397ce4dee35dd1a1fd4e2c94b12cded8093c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_banach, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 09:23:50 np0005464214 systemd[1]: Started libpod-conmon-9b7fc291ae588cdb7c7659d451b9397ce4dee35dd1a1fd4e2c94b12cded8093c.scope.
Oct  1 09:23:50 np0005464214 python3.9[207751]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 09:23:50 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:23:50 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a97b1298f077d92990d2f7a3ba192e849e4dec35c4427ebc43c9db5b6425285a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 09:23:50 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a97b1298f077d92990d2f7a3ba192e849e4dec35c4427ebc43c9db5b6425285a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 09:23:50 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a97b1298f077d92990d2f7a3ba192e849e4dec35c4427ebc43c9db5b6425285a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 09:23:50 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a97b1298f077d92990d2f7a3ba192e849e4dec35c4427ebc43c9db5b6425285a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 09:23:50 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v576: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:23:50 np0005464214 podman[207676]: 2025-10-01 13:23:50.213888065 +0000 UTC m=+0.607068667 container init 9b7fc291ae588cdb7c7659d451b9397ce4dee35dd1a1fd4e2c94b12cded8093c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_banach, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 09:23:50 np0005464214 podman[207676]: 2025-10-01 13:23:50.226080007 +0000 UTC m=+0.619260569 container start 9b7fc291ae588cdb7c7659d451b9397ce4dee35dd1a1fd4e2c94b12cded8093c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_banach, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Oct  1 09:23:50 np0005464214 podman[207676]: 2025-10-01 13:23:50.372025982 +0000 UTC m=+0.765206534 container attach 9b7fc291ae588cdb7c7659d451b9397ce4dee35dd1a1fd4e2c94b12cded8093c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_banach, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct  1 09:23:50 np0005464214 python3.9[207911]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 09:23:51 np0005464214 zealous_banach[207755]: {
Oct  1 09:23:51 np0005464214 zealous_banach[207755]:    "0": [
Oct  1 09:23:51 np0005464214 zealous_banach[207755]:        {
Oct  1 09:23:51 np0005464214 zealous_banach[207755]:            "devices": [
Oct  1 09:23:51 np0005464214 zealous_banach[207755]:                "/dev/loop3"
Oct  1 09:23:51 np0005464214 zealous_banach[207755]:            ],
Oct  1 09:23:51 np0005464214 zealous_banach[207755]:            "lv_name": "ceph_lv0",
Oct  1 09:23:51 np0005464214 zealous_banach[207755]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  1 09:23:51 np0005464214 zealous_banach[207755]:            "lv_size": "21470642176",
Oct  1 09:23:51 np0005464214 zealous_banach[207755]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 09:23:51 np0005464214 zealous_banach[207755]:            "lv_uuid": "wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS",
Oct  1 09:23:51 np0005464214 zealous_banach[207755]:            "name": "ceph_lv0",
Oct  1 09:23:51 np0005464214 zealous_banach[207755]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  1 09:23:51 np0005464214 zealous_banach[207755]:            "tags": {
Oct  1 09:23:51 np0005464214 zealous_banach[207755]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  1 09:23:51 np0005464214 zealous_banach[207755]:                "ceph.block_uuid": "wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS",
Oct  1 09:23:51 np0005464214 zealous_banach[207755]:                "ceph.cephx_lockbox_secret": "",
Oct  1 09:23:51 np0005464214 zealous_banach[207755]:                "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 09:23:51 np0005464214 zealous_banach[207755]:                "ceph.cluster_name": "ceph",
Oct  1 09:23:51 np0005464214 zealous_banach[207755]:                "ceph.crush_device_class": "",
Oct  1 09:23:51 np0005464214 zealous_banach[207755]:                "ceph.encrypted": "0",
Oct  1 09:23:51 np0005464214 zealous_banach[207755]:                "ceph.osd_fsid": "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982",
Oct  1 09:23:51 np0005464214 zealous_banach[207755]:                "ceph.osd_id": "0",
Oct  1 09:23:51 np0005464214 zealous_banach[207755]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 09:23:51 np0005464214 zealous_banach[207755]:                "ceph.type": "block",
Oct  1 09:23:51 np0005464214 zealous_banach[207755]:                "ceph.vdo": "0"
Oct  1 09:23:51 np0005464214 zealous_banach[207755]:            },
Oct  1 09:23:51 np0005464214 zealous_banach[207755]:            "type": "block",
Oct  1 09:23:51 np0005464214 zealous_banach[207755]:            "vg_name": "ceph_vg0"
Oct  1 09:23:51 np0005464214 zealous_banach[207755]:        }
Oct  1 09:23:51 np0005464214 zealous_banach[207755]:    ],
Oct  1 09:23:51 np0005464214 zealous_banach[207755]:    "1": [
Oct  1 09:23:51 np0005464214 zealous_banach[207755]:        {
Oct  1 09:23:51 np0005464214 zealous_banach[207755]:            "devices": [
Oct  1 09:23:51 np0005464214 zealous_banach[207755]:                "/dev/loop4"
Oct  1 09:23:51 np0005464214 zealous_banach[207755]:            ],
Oct  1 09:23:51 np0005464214 zealous_banach[207755]:            "lv_name": "ceph_lv1",
Oct  1 09:23:51 np0005464214 zealous_banach[207755]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  1 09:23:51 np0005464214 zealous_banach[207755]:            "lv_size": "21470642176",
Oct  1 09:23:51 np0005464214 zealous_banach[207755]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f5852bc7-e830-489a-b8a9-42dfbbe71426,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 09:23:51 np0005464214 zealous_banach[207755]:            "lv_uuid": "BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY",
Oct  1 09:23:51 np0005464214 zealous_banach[207755]:            "name": "ceph_lv1",
Oct  1 09:23:51 np0005464214 zealous_banach[207755]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  1 09:23:51 np0005464214 zealous_banach[207755]:            "tags": {
Oct  1 09:23:51 np0005464214 zealous_banach[207755]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  1 09:23:51 np0005464214 zealous_banach[207755]:                "ceph.block_uuid": "BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY",
Oct  1 09:23:51 np0005464214 zealous_banach[207755]:                "ceph.cephx_lockbox_secret": "",
Oct  1 09:23:51 np0005464214 zealous_banach[207755]:                "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 09:23:51 np0005464214 zealous_banach[207755]:                "ceph.cluster_name": "ceph",
Oct  1 09:23:51 np0005464214 zealous_banach[207755]:                "ceph.crush_device_class": "",
Oct  1 09:23:51 np0005464214 zealous_banach[207755]:                "ceph.encrypted": "0",
Oct  1 09:23:51 np0005464214 zealous_banach[207755]:                "ceph.osd_fsid": "f5852bc7-e830-489a-b8a9-42dfbbe71426",
Oct  1 09:23:51 np0005464214 zealous_banach[207755]:                "ceph.osd_id": "1",
Oct  1 09:23:51 np0005464214 zealous_banach[207755]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 09:23:51 np0005464214 zealous_banach[207755]:                "ceph.type": "block",
Oct  1 09:23:51 np0005464214 zealous_banach[207755]:                "ceph.vdo": "0"
Oct  1 09:23:51 np0005464214 zealous_banach[207755]:            },
Oct  1 09:23:51 np0005464214 zealous_banach[207755]:            "type": "block",
Oct  1 09:23:51 np0005464214 zealous_banach[207755]:            "vg_name": "ceph_vg1"
Oct  1 09:23:51 np0005464214 zealous_banach[207755]:        }
Oct  1 09:23:51 np0005464214 zealous_banach[207755]:    ],
Oct  1 09:23:51 np0005464214 zealous_banach[207755]:    "2": [
Oct  1 09:23:51 np0005464214 zealous_banach[207755]:        {
Oct  1 09:23:51 np0005464214 zealous_banach[207755]:            "devices": [
Oct  1 09:23:51 np0005464214 zealous_banach[207755]:                "/dev/loop5"
Oct  1 09:23:51 np0005464214 zealous_banach[207755]:            ],
Oct  1 09:23:51 np0005464214 zealous_banach[207755]:            "lv_name": "ceph_lv2",
Oct  1 09:23:51 np0005464214 zealous_banach[207755]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  1 09:23:51 np0005464214 zealous_banach[207755]:            "lv_size": "21470642176",
Oct  1 09:23:51 np0005464214 zealous_banach[207755]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c4c937e2-a8a8-47c3-af37-fdedb6fff1f9,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 09:23:51 np0005464214 zealous_banach[207755]:            "lv_uuid": "mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK",
Oct  1 09:23:51 np0005464214 zealous_banach[207755]:            "name": "ceph_lv2",
Oct  1 09:23:51 np0005464214 zealous_banach[207755]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  1 09:23:51 np0005464214 zealous_banach[207755]:            "tags": {
Oct  1 09:23:51 np0005464214 zealous_banach[207755]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  1 09:23:51 np0005464214 zealous_banach[207755]:                "ceph.block_uuid": "mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK",
Oct  1 09:23:51 np0005464214 zealous_banach[207755]:                "ceph.cephx_lockbox_secret": "",
Oct  1 09:23:51 np0005464214 zealous_banach[207755]:                "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 09:23:51 np0005464214 zealous_banach[207755]:                "ceph.cluster_name": "ceph",
Oct  1 09:23:51 np0005464214 zealous_banach[207755]:                "ceph.crush_device_class": "",
Oct  1 09:23:51 np0005464214 zealous_banach[207755]:                "ceph.encrypted": "0",
Oct  1 09:23:51 np0005464214 zealous_banach[207755]:                "ceph.osd_fsid": "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9",
Oct  1 09:23:51 np0005464214 zealous_banach[207755]:                "ceph.osd_id": "2",
Oct  1 09:23:51 np0005464214 zealous_banach[207755]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 09:23:51 np0005464214 zealous_banach[207755]:                "ceph.type": "block",
Oct  1 09:23:51 np0005464214 zealous_banach[207755]:                "ceph.vdo": "0"
Oct  1 09:23:51 np0005464214 zealous_banach[207755]:            },
Oct  1 09:23:51 np0005464214 zealous_banach[207755]:            "type": "block",
Oct  1 09:23:51 np0005464214 zealous_banach[207755]:            "vg_name": "ceph_vg2"
Oct  1 09:23:51 np0005464214 zealous_banach[207755]:        }
Oct  1 09:23:51 np0005464214 zealous_banach[207755]:    ]
Oct  1 09:23:51 np0005464214 zealous_banach[207755]: }
Oct  1 09:23:51 np0005464214 systemd[1]: libpod-9b7fc291ae588cdb7c7659d451b9397ce4dee35dd1a1fd4e2c94b12cded8093c.scope: Deactivated successfully.
Oct  1 09:23:51 np0005464214 podman[207676]: 2025-10-01 13:23:51.148488205 +0000 UTC m=+1.541668757 container died 9b7fc291ae588cdb7c7659d451b9397ce4dee35dd1a1fd4e2c94b12cded8093c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_banach, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Oct  1 09:23:51 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:23:51 np0005464214 python3.9[208078]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 09:23:52 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v577: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:23:52 np0005464214 python3.9[208233]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 09:23:52 np0005464214 systemd[1]: var-lib-containers-storage-overlay-a97b1298f077d92990d2f7a3ba192e849e4dec35c4427ebc43c9db5b6425285a-merged.mount: Deactivated successfully.
Oct  1 09:23:53 np0005464214 python3.9[208356]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759325031.8448923-775-27922788647184/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 09:23:53 np0005464214 podman[207676]: 2025-10-01 13:23:53.112751649 +0000 UTC m=+3.505932201 container remove 9b7fc291ae588cdb7c7659d451b9397ce4dee35dd1a1fd4e2c94b12cded8093c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_banach, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct  1 09:23:53 np0005464214 systemd[1]: libpod-conmon-9b7fc291ae588cdb7c7659d451b9397ce4dee35dd1a1fd4e2c94b12cded8093c.scope: Deactivated successfully.
Oct  1 09:23:53 np0005464214 podman[208648]: 2025-10-01 13:23:53.793384166 +0000 UTC m=+0.028131260 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 09:23:53 np0005464214 python3.9[208626]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 09:23:53 np0005464214 podman[208648]: 2025-10-01 13:23:53.923840456 +0000 UTC m=+0.158587480 container create b8cbdd499563c34cbcc67eb6fe5820309065c906690f32c5e065117cca27b47e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_bartik, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Oct  1 09:23:54 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v578: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:23:54 np0005464214 systemd[1]: Started libpod-conmon-b8cbdd499563c34cbcc67eb6fe5820309065c906690f32c5e065117cca27b47e.scope.
Oct  1 09:23:54 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:23:54 np0005464214 podman[208648]: 2025-10-01 13:23:54.585022545 +0000 UTC m=+0.819769599 container init b8cbdd499563c34cbcc67eb6fe5820309065c906690f32c5e065117cca27b47e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_bartik, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 09:23:54 np0005464214 python3.9[208789]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759325033.291063-775-122841392076707/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 09:23:54 np0005464214 podman[208648]: 2025-10-01 13:23:54.599366164 +0000 UTC m=+0.834113208 container start b8cbdd499563c34cbcc67eb6fe5820309065c906690f32c5e065117cca27b47e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_bartik, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Oct  1 09:23:54 np0005464214 lucid_bartik[208711]: 167 167
Oct  1 09:23:54 np0005464214 systemd[1]: libpod-b8cbdd499563c34cbcc67eb6fe5820309065c906690f32c5e065117cca27b47e.scope: Deactivated successfully.
Oct  1 09:23:54 np0005464214 podman[208648]: 2025-10-01 13:23:54.785572347 +0000 UTC m=+1.020319381 container attach b8cbdd499563c34cbcc67eb6fe5820309065c906690f32c5e065117cca27b47e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_bartik, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Oct  1 09:23:54 np0005464214 podman[208648]: 2025-10-01 13:23:54.786770265 +0000 UTC m=+1.021517309 container died b8cbdd499563c34cbcc67eb6fe5820309065c906690f32c5e065117cca27b47e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_bartik, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 09:23:55 np0005464214 systemd[1]: var-lib-containers-storage-overlay-6cc609d8585343811c902111e2ff63fdf7abd313f068fa24f3de90319e2a1240-merged.mount: Deactivated successfully.
Oct  1 09:23:55 np0005464214 python3.9[208957]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 09:23:55 np0005464214 podman[208648]: 2025-10-01 13:23:55.750164376 +0000 UTC m=+1.984911400 container remove b8cbdd499563c34cbcc67eb6fe5820309065c906690f32c5e065117cca27b47e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_bartik, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 09:23:55 np0005464214 systemd[1]: libpod-conmon-b8cbdd499563c34cbcc67eb6fe5820309065c906690f32c5e065117cca27b47e.scope: Deactivated successfully.
Oct  1 09:23:56 np0005464214 podman[209088]: 2025-10-01 13:23:55.936167553 +0000 UTC m=+0.040455116 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 09:23:56 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v579: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:23:56 np0005464214 python3.9[209082]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759325034.8048651-775-24907974328116/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 09:23:56 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:23:56 np0005464214 podman[209088]: 2025-10-01 13:23:56.224979126 +0000 UTC m=+0.329266589 container create d9d26c8bc6694b93776bb2500c650f90c45e6dfbda9b76c19dca23fd3f51fbbc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_shtern, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 09:23:56 np0005464214 systemd[1]: Started libpod-conmon-d9d26c8bc6694b93776bb2500c650f90c45e6dfbda9b76c19dca23fd3f51fbbc.scope.
Oct  1 09:23:56 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:23:56 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b738be4437ba136b2ef41fcda174de719f2bd1449a3bb41f75bfce3d103a25c4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 09:23:56 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b738be4437ba136b2ef41fcda174de719f2bd1449a3bb41f75bfce3d103a25c4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 09:23:56 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b738be4437ba136b2ef41fcda174de719f2bd1449a3bb41f75bfce3d103a25c4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 09:23:56 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b738be4437ba136b2ef41fcda174de719f2bd1449a3bb41f75bfce3d103a25c4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 09:23:56 np0005464214 podman[209088]: 2025-10-01 13:23:56.898659255 +0000 UTC m=+1.002946748 container init d9d26c8bc6694b93776bb2500c650f90c45e6dfbda9b76c19dca23fd3f51fbbc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_shtern, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct  1 09:23:56 np0005464214 python3.9[209258]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 09:23:56 np0005464214 podman[209088]: 2025-10-01 13:23:56.907955816 +0000 UTC m=+1.012243309 container start d9d26c8bc6694b93776bb2500c650f90c45e6dfbda9b76c19dca23fd3f51fbbc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_shtern, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct  1 09:23:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] _maybe_adjust
Oct  1 09:23:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:23:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  1 09:23:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:23:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 09:23:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:23:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 09:23:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:23:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 09:23:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:23:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 09:23:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:23:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct  1 09:23:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:23:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 09:23:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:23:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  1 09:23:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:23:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  1 09:23:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:23:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 09:23:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:23:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  1 09:23:57 np0005464214 podman[209088]: 2025-10-01 13:23:57.173825751 +0000 UTC m=+1.278113224 container attach d9d26c8bc6694b93776bb2500c650f90c45e6dfbda9b76c19dca23fd3f51fbbc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_shtern, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef)
Oct  1 09:23:57 np0005464214 python3.9[209383]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759325036.3326578-775-243850441185896/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 09:23:57 np0005464214 dreamy_shtern[209206]: {
Oct  1 09:23:57 np0005464214 dreamy_shtern[209206]:    "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982": {
Oct  1 09:23:57 np0005464214 dreamy_shtern[209206]:        "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 09:23:57 np0005464214 dreamy_shtern[209206]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  1 09:23:57 np0005464214 dreamy_shtern[209206]:        "osd_id": 0,
Oct  1 09:23:57 np0005464214 dreamy_shtern[209206]:        "osd_uuid": "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982",
Oct  1 09:23:57 np0005464214 dreamy_shtern[209206]:        "type": "bluestore"
Oct  1 09:23:57 np0005464214 dreamy_shtern[209206]:    },
Oct  1 09:23:57 np0005464214 dreamy_shtern[209206]:    "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9": {
Oct  1 09:23:57 np0005464214 dreamy_shtern[209206]:        "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 09:23:57 np0005464214 dreamy_shtern[209206]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  1 09:23:57 np0005464214 dreamy_shtern[209206]:        "osd_id": 2,
Oct  1 09:23:57 np0005464214 dreamy_shtern[209206]:        "osd_uuid": "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9",
Oct  1 09:23:57 np0005464214 dreamy_shtern[209206]:        "type": "bluestore"
Oct  1 09:23:57 np0005464214 dreamy_shtern[209206]:    },
Oct  1 09:23:57 np0005464214 dreamy_shtern[209206]:    "f5852bc7-e830-489a-b8a9-42dfbbe71426": {
Oct  1 09:23:57 np0005464214 dreamy_shtern[209206]:        "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 09:23:57 np0005464214 dreamy_shtern[209206]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  1 09:23:57 np0005464214 dreamy_shtern[209206]:        "osd_id": 1,
Oct  1 09:23:57 np0005464214 dreamy_shtern[209206]:        "osd_uuid": "f5852bc7-e830-489a-b8a9-42dfbbe71426",
Oct  1 09:23:57 np0005464214 dreamy_shtern[209206]:        "type": "bluestore"
Oct  1 09:23:57 np0005464214 dreamy_shtern[209206]:    }
Oct  1 09:23:57 np0005464214 dreamy_shtern[209206]: }
Oct  1 09:23:57 np0005464214 systemd[1]: libpod-d9d26c8bc6694b93776bb2500c650f90c45e6dfbda9b76c19dca23fd3f51fbbc.scope: Deactivated successfully.
Oct  1 09:23:57 np0005464214 podman[209088]: 2025-10-01 13:23:57.923966822 +0000 UTC m=+2.028254305 container died d9d26c8bc6694b93776bb2500c650f90c45e6dfbda9b76c19dca23fd3f51fbbc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_shtern, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True)
Oct  1 09:23:57 np0005464214 systemd[1]: libpod-d9d26c8bc6694b93776bb2500c650f90c45e6dfbda9b76c19dca23fd3f51fbbc.scope: Consumed 1.017s CPU time.
Oct  1 09:23:58 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v580: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:23:58 np0005464214 python3.9[209575]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 09:23:58 np0005464214 systemd[1]: var-lib-containers-storage-overlay-b738be4437ba136b2ef41fcda174de719f2bd1449a3bb41f75bfce3d103a25c4-merged.mount: Deactivated successfully.
Oct  1 09:23:58 np0005464214 python3.9[209701]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759325037.7195654-775-182480378900317/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 09:23:59 np0005464214 podman[209088]: 2025-10-01 13:23:59.155154388 +0000 UTC m=+3.259441851 container remove d9d26c8bc6694b93776bb2500c650f90c45e6dfbda9b76c19dca23fd3f51fbbc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_shtern, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 09:23:59 np0005464214 systemd[1]: libpod-conmon-d9d26c8bc6694b93776bb2500c650f90c45e6dfbda9b76c19dca23fd3f51fbbc.scope: Deactivated successfully.
Oct  1 09:23:59 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  1 09:23:59 np0005464214 python3.9[209853]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 09:23:59 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:23:59 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  1 09:23:59 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:23:59 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev 9369c22b-fe97-40b3-9d2c-974befbac54c does not exist
Oct  1 09:23:59 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev cb6f7ccb-3cae-42db-8989-54d0c28f90a9 does not exist
Oct  1 09:24:00 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v581: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:24:00 np0005464214 python3.9[210024]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759325039.041347-775-19637778349182/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 09:24:00 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:24:00 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:24:00 np0005464214 python3.9[210178]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 09:24:01 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:24:01 np0005464214 python3.9[210301]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759325040.3530219-775-61896809371182/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 09:24:02 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v582: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:24:02 np0005464214 python3.9[210455]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 09:24:02 np0005464214 python3.9[210578]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759325041.650386-775-65738693192394/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 09:24:03 np0005464214 python3.9[210730]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 09:24:04 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v583: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:24:04 np0005464214 python3.9[210853]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759325043.0389986-775-137470776848184/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 09:24:05 np0005464214 python3.9[211005]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 09:24:05 np0005464214 python3.9[211128]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759325044.4371088-775-246016588746173/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 09:24:06 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v584: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:24:06 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:24:06 np0005464214 python3.9[211280]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 09:24:07 np0005464214 python3.9[211403]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759325045.9663014-775-148177564735614/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 09:24:08 np0005464214 python3.9[211555]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 09:24:08 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v585: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:24:08 np0005464214 python3.9[211678]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759325047.431467-775-144001800504323/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 09:24:09 np0005464214 podman[211802]: 2025-10-01 13:24:09.467571425 +0000 UTC m=+0.133779136 container health_status 583cde33fa111e3f81ff9a7adf51b7bee43cd123d54d60383b2d474b3b5066ad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_id=ovn_controller, org.label-schema.build-date=20250923, container_name=ovn_controller)
Oct  1 09:24:09 np0005464214 python3.9[211843]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 09:24:10 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v586: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:24:10 np0005464214 python3.9[211979]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759325048.9841897-775-18471621416948/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 09:24:11 np0005464214 python3.9[212131]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 09:24:11 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:24:11 np0005464214 python3.9[212254]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759325050.587429-775-81249689093696/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 09:24:12 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v587: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:24:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:24:12.288 161890 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 09:24:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:24:12.288 161890 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 09:24:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:24:12.289 161890 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 09:24:12 np0005464214 python3.9[212404]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail#012ls -lRZ /run/libvirt | grep -E ':container_\S+_t'#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  1 09:24:13 np0005464214 python3.9[212559]: ansible-ansible.posix.seboolean Invoked with name=os_enable_vtpm persistent=True state=True ignore_selinux_state=False
Oct  1 09:24:14 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v588: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:24:14 np0005464214 dbus-broker-launch[786]: avc:  op=load_policy lsm=selinux seqno=15 res=1
Oct  1 09:24:14 np0005464214 podman[212562]: 2025-10-01 13:24:14.549474695 +0000 UTC m=+0.085673811 container health_status dfa509645741d50db256c392f2c7e50ef8a1653f296eb853aefbe3d2ba4746c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20250923, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Oct  1 09:24:15 np0005464214 python3.9[212735]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/servercert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 09:24:16 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v589: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:24:16 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:24:16 np0005464214 python3.9[212887]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/serverkey.pem group=root mode=0600 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 09:24:17 np0005464214 python3.9[213039]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/clientcert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 09:24:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 09:24:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 09:24:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 09:24:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 09:24:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 09:24:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 09:24:18 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v590: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:24:18 np0005464214 python3.9[213191]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/clientkey.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 09:24:18 np0005464214 python3.9[213343]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/CA/cacert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 09:24:19 np0005464214 python3.9[213495]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 09:24:20 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v591: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:24:20 np0005464214 python3.9[213647]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 09:24:21 np0005464214 python3.9[213799]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 09:24:21 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:24:21 np0005464214 python3.9[213951]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 09:24:22 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v592: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:24:22 np0005464214 python3.9[214103]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/ca-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 09:24:23 np0005464214 python3.9[214255]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtlogd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct  1 09:24:23 np0005464214 systemd[1]: Reloading.
Oct  1 09:24:23 np0005464214 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  1 09:24:23 np0005464214 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  1 09:24:24 np0005464214 systemd[1]: Starting libvirt logging daemon socket...
Oct  1 09:24:24 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v593: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:24:24 np0005464214 systemd[1]: Listening on libvirt logging daemon socket.
Oct  1 09:24:24 np0005464214 systemd[1]: Starting libvirt logging daemon admin socket...
Oct  1 09:24:24 np0005464214 systemd[1]: Listening on libvirt logging daemon admin socket.
Oct  1 09:24:24 np0005464214 systemd[1]: Starting libvirt logging daemon...
Oct  1 09:24:24 np0005464214 systemd[1]: Started libvirt logging daemon.
Oct  1 09:24:25 np0005464214 python3.9[214449]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtnodedevd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct  1 09:24:25 np0005464214 systemd[1]: Reloading.
Oct  1 09:24:25 np0005464214 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  1 09:24:25 np0005464214 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  1 09:24:25 np0005464214 systemd[1]: Starting libvirt nodedev daemon socket...
Oct  1 09:24:25 np0005464214 systemd[1]: Listening on libvirt nodedev daemon socket.
Oct  1 09:24:25 np0005464214 systemd[1]: Starting libvirt nodedev daemon admin socket...
Oct  1 09:24:25 np0005464214 systemd[1]: Starting libvirt nodedev daemon read-only socket...
Oct  1 09:24:25 np0005464214 systemd[1]: Listening on libvirt nodedev daemon admin socket.
Oct  1 09:24:25 np0005464214 systemd[1]: Listening on libvirt nodedev daemon read-only socket.
Oct  1 09:24:25 np0005464214 systemd[1]: Starting libvirt nodedev daemon...
Oct  1 09:24:25 np0005464214 systemd[1]: Started libvirt nodedev daemon.
Oct  1 09:24:26 np0005464214 systemd[1]: Starting SETroubleshoot daemon for processing new SELinux denial logs...
Oct  1 09:24:26 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v594: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:24:26 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:24:26 np0005464214 systemd[1]: Started SETroubleshoot daemon for processing new SELinux denial logs.
Oct  1 09:24:26 np0005464214 python3.9[214666]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtproxyd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct  1 09:24:26 np0005464214 systemd[1]: Created slice Slice /system/dbus-:1.0-org.fedoraproject.SetroubleshootPrivileged.
Oct  1 09:24:26 np0005464214 systemd[1]: Started dbus-:1.0-org.fedoraproject.SetroubleshootPrivileged@0.service.
Oct  1 09:24:26 np0005464214 systemd[1]: Reloading.
Oct  1 09:24:26 np0005464214 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  1 09:24:26 np0005464214 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  1 09:24:27 np0005464214 systemd[1]: Starting libvirt proxy daemon admin socket...
Oct  1 09:24:27 np0005464214 systemd[1]: Starting libvirt proxy daemon read-only socket...
Oct  1 09:24:27 np0005464214 systemd[1]: Listening on libvirt proxy daemon admin socket.
Oct  1 09:24:27 np0005464214 systemd[1]: Listening on libvirt proxy daemon read-only socket.
Oct  1 09:24:27 np0005464214 systemd[1]: Starting libvirt proxy daemon...
Oct  1 09:24:27 np0005464214 systemd[1]: Started libvirt proxy daemon.
Oct  1 09:24:27 np0005464214 setroubleshoot[214594]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability. For complete SELinux messages run: sealert -l f4bd15bc-9fdd-45ca-9013-6e2ad0770344
Oct  1 09:24:27 np0005464214 setroubleshoot[214594]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability.#012#012*****  Plugin dac_override (91.4 confidence) suggests   **********************#012#012If you want to help identify if domain needs this access or you have a file with the wrong permissions on your system#012Then turn on full auditing to get path information about the offending file and generate the error again.#012Do#012#012Turn on full auditing#012# auditctl -w /etc/shadow -p w#012Try to recreate AVC. Then execute#012# ausearch -m avc -ts recent#012If you see PATH record check ownership/permissions on file, and fix it,#012otherwise report as a bugzilla.#012#012*****  Plugin catchall (9.59 confidence) suggests   **************************#012#012If you believe that virtlogd should have the dac_read_search capability by default.#012Then you should report this as a bug.#012You can generate a local policy module to allow this access.#012Do#012allow this access for now by executing:#012# ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd#012# semodule -X 300 -i my-virtlogd.pp#012
Oct  1 09:24:27 np0005464214 setroubleshoot[214594]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability. For complete SELinux messages run: sealert -l f4bd15bc-9fdd-45ca-9013-6e2ad0770344
Oct  1 09:24:27 np0005464214 setroubleshoot[214594]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability.#012#012*****  Plugin dac_override (91.4 confidence) suggests   **********************#012#012If you want to help identify if domain needs this access or you have a file with the wrong permissions on your system#012Then turn on full auditing to get path information about the offending file and generate the error again.#012Do#012#012Turn on full auditing#012# auditctl -w /etc/shadow -p w#012Try to recreate AVC. Then execute#012# ausearch -m avc -ts recent#012If you see PATH record check ownership/permissions on file, and fix it,#012otherwise report as a bugzilla.#012#012*****  Plugin catchall (9.59 confidence) suggests   **************************#012#012If you believe that virtlogd should have the dac_read_search capability by default.#012Then you should report this as a bug.#012You can generate a local policy module to allow this access.#012Do#012allow this access for now by executing:#012# ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd#012# semodule -X 300 -i my-virtlogd.pp#012
Oct  1 09:24:28 np0005464214 python3.9[214884]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtqemud.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct  1 09:24:28 np0005464214 systemd[1]: Reloading.
Oct  1 09:24:28 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v595: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:24:28 np0005464214 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  1 09:24:28 np0005464214 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  1 09:24:28 np0005464214 systemd[1]: Listening on libvirt locking daemon socket.
Oct  1 09:24:28 np0005464214 systemd[1]: Starting libvirt QEMU daemon socket...
Oct  1 09:24:28 np0005464214 systemd[1]: Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Oct  1 09:24:28 np0005464214 systemd[1]: Starting Virtual Machine and Container Registration Service...
Oct  1 09:24:28 np0005464214 systemd[1]: Listening on libvirt QEMU daemon socket.
Oct  1 09:24:28 np0005464214 systemd[1]: Starting libvirt QEMU daemon admin socket...
Oct  1 09:24:28 np0005464214 systemd[1]: Starting libvirt QEMU daemon read-only socket...
Oct  1 09:24:28 np0005464214 systemd[1]: Listening on libvirt QEMU daemon admin socket.
Oct  1 09:24:28 np0005464214 systemd[1]: Listening on libvirt QEMU daemon read-only socket.
Oct  1 09:24:28 np0005464214 systemd[1]: Started Virtual Machine and Container Registration Service.
Oct  1 09:24:28 np0005464214 systemd[1]: Starting libvirt QEMU daemon...
Oct  1 09:24:28 np0005464214 systemd[1]: Started libvirt QEMU daemon.
Oct  1 09:24:29 np0005464214 python3.9[215097]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtsecretd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct  1 09:24:29 np0005464214 systemd[1]: Reloading.
Oct  1 09:24:29 np0005464214 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  1 09:24:29 np0005464214 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  1 09:24:30 np0005464214 systemd[1]: Starting libvirt secret daemon socket...
Oct  1 09:24:30 np0005464214 systemd[1]: Listening on libvirt secret daemon socket.
Oct  1 09:24:30 np0005464214 systemd[1]: Starting libvirt secret daemon admin socket...
Oct  1 09:24:30 np0005464214 systemd[1]: Starting libvirt secret daemon read-only socket...
Oct  1 09:24:30 np0005464214 systemd[1]: Listening on libvirt secret daemon admin socket.
Oct  1 09:24:30 np0005464214 systemd[1]: Listening on libvirt secret daemon read-only socket.
Oct  1 09:24:30 np0005464214 systemd[1]: Starting libvirt secret daemon...
Oct  1 09:24:30 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v596: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:24:30 np0005464214 systemd[1]: Started libvirt secret daemon.
Oct  1 09:24:31 np0005464214 python3.9[215306]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 09:24:31 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:24:31 np0005464214 python3.9[215458]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.conf'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Oct  1 09:24:32 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v597: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:24:32 np0005464214 python3.9[215610]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail;#012echo ceph#012awk -F '=' '/fsid/ {print $2}' /var/lib/openstack/config/ceph/ceph.conf | xargs#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  1 09:24:33 np0005464214 python3.9[215764]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.keyring'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Oct  1 09:24:34 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v598: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:24:34 np0005464214 python3.9[215914]: ansible-ansible.legacy.stat Invoked with path=/tmp/secret.xml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 09:24:35 np0005464214 python3.9[216035]: ansible-ansible.legacy.copy Invoked with dest=/tmp/secret.xml mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1759325074.1762452-1133-40952870331326/.source.xml follow=False _original_basename=secret.xml.j2 checksum=85ea94ee6dc7b38556452772c4b1cde316396f1e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 09:24:36 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v599: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:24:36 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:24:36 np0005464214 python3.9[216187]: ansible-ansible.legacy.command Invoked with _raw_params=virsh secret-undefine eb4b6ead-01d1-53b3-a52a-47dcc600555f#012virsh secret-define --file /tmp/secret.xml#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  1 09:24:37 np0005464214 python3.9[216349]: ansible-ansible.builtin.file Invoked with path=/tmp/secret.xml state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 09:24:37 np0005464214 systemd[1]: dbus-:1.0-org.fedoraproject.SetroubleshootPrivileged@0.service: Deactivated successfully.
Oct  1 09:24:37 np0005464214 systemd[1]: dbus-:1.0-org.fedoraproject.SetroubleshootPrivileged@0.service: Consumed 1.046s CPU time.
Oct  1 09:24:37 np0005464214 systemd[1]: setroubleshootd.service: Deactivated successfully.
Oct  1 09:24:38 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v600: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:24:40 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v601: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:24:40 np0005464214 podman[216784]: 2025-10-01 13:24:40.491667193 +0000 UTC m=+0.162236002 container health_status 583cde33fa111e3f81ff9a7adf51b7bee43cd123d54d60383b2d474b3b5066ad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20250923, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=36bccb96575468ec919301205d8daa2c, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Oct  1 09:24:40 np0005464214 python3.9[216830]: ansible-ansible.legacy.copy Invoked with dest=/etc/ceph/ceph.conf group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/config/ceph/ceph.conf backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 09:24:41 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:24:41 np0005464214 python3.9[216990]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/libvirt.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 09:24:42 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v602: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:24:42 np0005464214 python3.9[217113]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/libvirt.yaml mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1759325080.856165-1188-101214582124983/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=5ca83b1310a74c5e48c4c3d4640e1cb8fdac1061 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 09:24:43 np0005464214 python3.9[217265]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 09:24:44 np0005464214 python3.9[217419]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 09:24:44 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v603: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:24:44 np0005464214 python3.9[217497]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 09:24:45 np0005464214 podman[217621]: 2025-10-01 13:24:45.336328223 +0000 UTC m=+0.095171794 container health_status dfa509645741d50db256c392f2c7e50ef8a1653f296eb853aefbe3d2ba4746c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=36bccb96575468ec919301205d8daa2c, container_name=ovn_metadata_agent, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20250923, org.label-schema.license=GPLv2)
Oct  1 09:24:45 np0005464214 python3.9[217666]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 09:24:46 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v604: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:24:46 np0005464214 python3.9[217747]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.9rmonhe4 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 09:24:46 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:24:46 np0005464214 python3.9[217901]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 09:24:47 np0005464214 python3.9[217982]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 09:24:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] Optimize plan auto_2025-10-01_13:24:47
Oct  1 09:24:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  1 09:24:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] do_upmap
Oct  1 09:24:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] pools ['.mgr', 'vms', 'images', 'cephfs.cephfs.data', 'default.rgw.log', 'cephfs.cephfs.meta', '.rgw.root', 'default.rgw.meta', 'backups', 'volumes', 'default.rgw.control']
Oct  1 09:24:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] prepared 0/10 changes
Oct  1 09:24:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 09:24:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 09:24:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 09:24:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 09:24:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 09:24:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 09:24:47 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  1 09:24:47 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  1 09:24:47 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  1 09:24:47 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  1 09:24:47 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  1 09:24:47 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  1 09:24:47 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  1 09:24:47 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  1 09:24:47 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  1 09:24:47 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  1 09:24:48 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v605: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:24:48 np0005464214 python3.9[218134]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  1 09:24:49 np0005464214 python3[218287]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Oct  1 09:24:49 np0005464214 python3.9[218439]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 09:24:50 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v606: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:24:50 np0005464214 python3.9[218517]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 09:24:51 np0005464214 python3.9[218669]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 09:24:51 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:24:51 np0005464214 python3.9[218747]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 09:24:52 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v607: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:24:52 np0005464214 python3.9[218899]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 09:24:52 np0005464214 python3.9[218977]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 09:24:53 np0005464214 python3.9[219129]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 09:24:54 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v608: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:24:54 np0005464214 python3.9[219207]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 09:24:55 np0005464214 python3.9[219359]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 09:24:56 np0005464214 python3.9[219484]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759325094.3993096-1313-263158143723684/.source.nft follow=False _original_basename=ruleset.j2 checksum=ac3ce8ce2d33fa5fe0a79b0c811c97734ce43fa5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 09:24:56 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v609: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:24:56 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:24:56 np0005464214 python3.9[219636]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 09:24:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] _maybe_adjust
Oct  1 09:24:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:24:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  1 09:24:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:24:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 09:24:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:24:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 09:24:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:24:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 09:24:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:24:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 09:24:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:24:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct  1 09:24:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:24:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 09:24:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:24:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  1 09:24:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:24:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  1 09:24:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:24:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 09:24:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:24:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  1 09:24:57 np0005464214 python3.9[219788]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  1 09:24:58 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v610: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:24:58 np0005464214 python3.9[219943]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 09:24:59 np0005464214 python3.9[220095]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  1 09:25:00 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v611: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:25:00 np0005464214 python3.9[220254]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  1 09:25:00 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  1 09:25:00 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  1 09:25:00 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  1 09:25:00 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  1 09:25:00 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  1 09:25:00 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:25:00 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev e78f39d1-32f4-497a-9a9c-dbfdb2b3a7a1 does not exist
Oct  1 09:25:00 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev d8eadeab-e9e9-4c2a-9312-eb9e4ab786cf does not exist
Oct  1 09:25:00 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev e0279758-6973-40c8-883b-eaa5a9ede712 does not exist
Oct  1 09:25:00 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  1 09:25:00 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  1 09:25:00 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  1 09:25:00 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  1 09:25:00 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  1 09:25:00 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  1 09:25:00 np0005464214 python3.9[220533]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  1 09:25:01 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:25:01 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  1 09:25:01 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:25:01 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  1 09:25:01 np0005464214 podman[220800]: 2025-10-01 13:25:01.456554393 +0000 UTC m=+0.045794586 container create ea8d4e0d6eba30cd6cb1a11d2db298e86ac924d5eda1f7e1cf440481c58a6f8e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_shamir, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct  1 09:25:01 np0005464214 systemd[1]: Started libpod-conmon-ea8d4e0d6eba30cd6cb1a11d2db298e86ac924d5eda1f7e1cf440481c58a6f8e.scope.
Oct  1 09:25:01 np0005464214 podman[220800]: 2025-10-01 13:25:01.433537387 +0000 UTC m=+0.022777580 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 09:25:01 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:25:01 np0005464214 podman[220800]: 2025-10-01 13:25:01.56724107 +0000 UTC m=+0.156481283 container init ea8d4e0d6eba30cd6cb1a11d2db298e86ac924d5eda1f7e1cf440481c58a6f8e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_shamir, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct  1 09:25:01 np0005464214 podman[220800]: 2025-10-01 13:25:01.576642172 +0000 UTC m=+0.165882375 container start ea8d4e0d6eba30cd6cb1a11d2db298e86ac924d5eda1f7e1cf440481c58a6f8e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_shamir, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct  1 09:25:01 np0005464214 flamboyant_shamir[220844]: 167 167
Oct  1 09:25:01 np0005464214 systemd[1]: libpod-ea8d4e0d6eba30cd6cb1a11d2db298e86ac924d5eda1f7e1cf440481c58a6f8e.scope: Deactivated successfully.
Oct  1 09:25:01 np0005464214 podman[220800]: 2025-10-01 13:25:01.589323048 +0000 UTC m=+0.178563241 container attach ea8d4e0d6eba30cd6cb1a11d2db298e86ac924d5eda1f7e1cf440481c58a6f8e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_shamir, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Oct  1 09:25:01 np0005464214 podman[220800]: 2025-10-01 13:25:01.590018059 +0000 UTC m=+0.179258272 container died ea8d4e0d6eba30cd6cb1a11d2db298e86ac924d5eda1f7e1cf440481c58a6f8e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_shamir, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 09:25:01 np0005464214 systemd[1]: var-lib-containers-storage-overlay-52eeabc56561c559152fcc704647706c67815a872f4e2d5dc577305f67b8efe0-merged.mount: Deactivated successfully.
Oct  1 09:25:01 np0005464214 python3.9[220846]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 09:25:01 np0005464214 podman[220800]: 2025-10-01 13:25:01.745965395 +0000 UTC m=+0.335205588 container remove ea8d4e0d6eba30cd6cb1a11d2db298e86ac924d5eda1f7e1cf440481c58a6f8e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_shamir, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 09:25:01 np0005464214 systemd[1]: libpod-conmon-ea8d4e0d6eba30cd6cb1a11d2db298e86ac924d5eda1f7e1cf440481c58a6f8e.scope: Deactivated successfully.
Oct  1 09:25:01 np0005464214 podman[220897]: 2025-10-01 13:25:01.931565614 +0000 UTC m=+0.056633255 container create 5158c3f9a39a129786e1c1728316fd88a06318bcd512eac4e464e44e2b229297 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_dirac, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 09:25:01 np0005464214 systemd[1]: Started libpod-conmon-5158c3f9a39a129786e1c1728316fd88a06318bcd512eac4e464e44e2b229297.scope.
Oct  1 09:25:02 np0005464214 podman[220897]: 2025-10-01 13:25:01.908623509 +0000 UTC m=+0.033691160 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 09:25:02 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:25:02 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b3f26ebafb0645b209b006fbcdf79dcb27d84768e8d2b19ab441f430d2dd6c24/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 09:25:02 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b3f26ebafb0645b209b006fbcdf79dcb27d84768e8d2b19ab441f430d2dd6c24/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 09:25:02 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b3f26ebafb0645b209b006fbcdf79dcb27d84768e8d2b19ab441f430d2dd6c24/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 09:25:02 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b3f26ebafb0645b209b006fbcdf79dcb27d84768e8d2b19ab441f430d2dd6c24/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 09:25:02 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b3f26ebafb0645b209b006fbcdf79dcb27d84768e8d2b19ab441f430d2dd6c24/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  1 09:25:02 np0005464214 podman[220897]: 2025-10-01 13:25:02.04547194 +0000 UTC m=+0.170539601 container init 5158c3f9a39a129786e1c1728316fd88a06318bcd512eac4e464e44e2b229297 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_dirac, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 09:25:02 np0005464214 podman[220897]: 2025-10-01 13:25:02.065805503 +0000 UTC m=+0.190873124 container start 5158c3f9a39a129786e1c1728316fd88a06318bcd512eac4e464e44e2b229297 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_dirac, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 09:25:02 np0005464214 podman[220897]: 2025-10-01 13:25:02.069973992 +0000 UTC m=+0.195041653 container attach 5158c3f9a39a129786e1c1728316fd88a06318bcd512eac4e464e44e2b229297 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_dirac, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 09:25:02 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v612: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:25:02 np0005464214 python3.9[221043]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 09:25:03 np0005464214 python3.9[221168]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759325101.9024787-1385-123753754337344/.source.target follow=False _original_basename=edpm_libvirt.target checksum=13035a1aa0f414c677b14be9a5a363b6623d393c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 09:25:03 np0005464214 cranky_dirac[220963]: --> passed data devices: 0 physical, 3 LVM
Oct  1 09:25:03 np0005464214 cranky_dirac[220963]: --> relative data size: 1.0
Oct  1 09:25:03 np0005464214 cranky_dirac[220963]: --> All data devices are unavailable
Oct  1 09:25:03 np0005464214 systemd[1]: libpod-5158c3f9a39a129786e1c1728316fd88a06318bcd512eac4e464e44e2b229297.scope: Deactivated successfully.
Oct  1 09:25:03 np0005464214 systemd[1]: libpod-5158c3f9a39a129786e1c1728316fd88a06318bcd512eac4e464e44e2b229297.scope: Consumed 1.172s CPU time.
Oct  1 09:25:03 np0005464214 podman[220897]: 2025-10-01 13:25:03.29551163 +0000 UTC m=+1.420579261 container died 5158c3f9a39a129786e1c1728316fd88a06318bcd512eac4e464e44e2b229297 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_dirac, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 09:25:03 np0005464214 systemd[1]: var-lib-containers-storage-overlay-b3f26ebafb0645b209b006fbcdf79dcb27d84768e8d2b19ab441f430d2dd6c24-merged.mount: Deactivated successfully.
Oct  1 09:25:03 np0005464214 podman[220897]: 2025-10-01 13:25:03.393231983 +0000 UTC m=+1.518299624 container remove 5158c3f9a39a129786e1c1728316fd88a06318bcd512eac4e464e44e2b229297 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_dirac, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  1 09:25:03 np0005464214 systemd[1]: libpod-conmon-5158c3f9a39a129786e1c1728316fd88a06318bcd512eac4e464e44e2b229297.scope: Deactivated successfully.
Oct  1 09:25:03 np0005464214 python3.9[221403]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt_guests.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 09:25:04 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v613: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:25:04 np0005464214 podman[221576]: 2025-10-01 13:25:04.154557898 +0000 UTC m=+0.078397412 container create 21fcf947ac9df5189fcd903515ab8711157edab4c26cff0dab2604957f329069 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_mendeleev, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 09:25:04 np0005464214 systemd[1]: Started libpod-conmon-21fcf947ac9df5189fcd903515ab8711157edab4c26cff0dab2604957f329069.scope.
Oct  1 09:25:04 np0005464214 podman[221576]: 2025-10-01 13:25:04.116186632 +0000 UTC m=+0.040026166 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 09:25:04 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:25:04 np0005464214 podman[221576]: 2025-10-01 13:25:04.288335173 +0000 UTC m=+0.212174767 container init 21fcf947ac9df5189fcd903515ab8711157edab4c26cff0dab2604957f329069 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_mendeleev, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 09:25:04 np0005464214 podman[221576]: 2025-10-01 13:25:04.302019358 +0000 UTC m=+0.225858892 container start 21fcf947ac9df5189fcd903515ab8711157edab4c26cff0dab2604957f329069 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_mendeleev, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 09:25:04 np0005464214 friendly_mendeleev[221635]: 167 167
Oct  1 09:25:04 np0005464214 podman[221576]: 2025-10-01 13:25:04.310249285 +0000 UTC m=+0.234088829 container attach 21fcf947ac9df5189fcd903515ab8711157edab4c26cff0dab2604957f329069 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_mendeleev, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True)
Oct  1 09:25:04 np0005464214 systemd[1]: libpod-21fcf947ac9df5189fcd903515ab8711157edab4c26cff0dab2604957f329069.scope: Deactivated successfully.
Oct  1 09:25:04 np0005464214 podman[221576]: 2025-10-01 13:25:04.311353419 +0000 UTC m=+0.235192953 container died 21fcf947ac9df5189fcd903515ab8711157edab4c26cff0dab2604957f329069 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_mendeleev, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Oct  1 09:25:04 np0005464214 systemd[1]: var-lib-containers-storage-overlay-09e57012ea0df3ddfb328834bf7d0b09e6541bf6e4f588f79af1fdd89e626850-merged.mount: Deactivated successfully.
Oct  1 09:25:04 np0005464214 podman[221576]: 2025-10-01 13:25:04.379005655 +0000 UTC m=+0.302845199 container remove 21fcf947ac9df5189fcd903515ab8711157edab4c26cff0dab2604957f329069 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_mendeleev, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 09:25:04 np0005464214 python3.9[221634]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt_guests.service mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759325103.2456582-1400-223056748277787/.source.service follow=False _original_basename=edpm_libvirt_guests.service checksum=db83430a42fc2ccfd6ed8b56ebf04f3dff9cd0cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 09:25:04 np0005464214 systemd[1]: libpod-conmon-21fcf947ac9df5189fcd903515ab8711157edab4c26cff0dab2604957f329069.scope: Deactivated successfully.
Oct  1 09:25:04 np0005464214 podman[221686]: 2025-10-01 13:25:04.696204931 +0000 UTC m=+0.120818353 container create 31a6bec20cb6922eedbb3d5dd46f58341daf3e2ca346b4e310681033f10f12be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_nash, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Oct  1 09:25:04 np0005464214 podman[221686]: 2025-10-01 13:25:04.62067423 +0000 UTC m=+0.045287702 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 09:25:04 np0005464214 systemd[1]: Started libpod-conmon-31a6bec20cb6922eedbb3d5dd46f58341daf3e2ca346b4e310681033f10f12be.scope.
Oct  1 09:25:04 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:25:04 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f6fdf59351adc23771d510da21f55145ddad56b30d66904253173a47bf1d333/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 09:25:04 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f6fdf59351adc23771d510da21f55145ddad56b30d66904253173a47bf1d333/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 09:25:04 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f6fdf59351adc23771d510da21f55145ddad56b30d66904253173a47bf1d333/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 09:25:04 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f6fdf59351adc23771d510da21f55145ddad56b30d66904253173a47bf1d333/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 09:25:04 np0005464214 podman[221686]: 2025-10-01 13:25:04.942908973 +0000 UTC m=+0.367522435 container init 31a6bec20cb6922eedbb3d5dd46f58341daf3e2ca346b4e310681033f10f12be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_nash, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 09:25:04 np0005464214 podman[221686]: 2025-10-01 13:25:04.959976454 +0000 UTC m=+0.384589836 container start 31a6bec20cb6922eedbb3d5dd46f58341daf3e2ca346b4e310681033f10f12be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_nash, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default)
Oct  1 09:25:04 np0005464214 podman[221686]: 2025-10-01 13:25:04.964185425 +0000 UTC m=+0.388798907 container attach 31a6bec20cb6922eedbb3d5dd46f58341daf3e2ca346b4e310681033f10f12be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_nash, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default)
Oct  1 09:25:05 np0005464214 python3.9[221834]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virt-guest-shutdown.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 09:25:05 np0005464214 confident_nash[221800]: {
Oct  1 09:25:05 np0005464214 confident_nash[221800]:    "0": [
Oct  1 09:25:05 np0005464214 confident_nash[221800]:        {
Oct  1 09:25:05 np0005464214 confident_nash[221800]:            "devices": [
Oct  1 09:25:05 np0005464214 confident_nash[221800]:                "/dev/loop3"
Oct  1 09:25:05 np0005464214 confident_nash[221800]:            ],
Oct  1 09:25:05 np0005464214 confident_nash[221800]:            "lv_name": "ceph_lv0",
Oct  1 09:25:05 np0005464214 confident_nash[221800]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  1 09:25:05 np0005464214 confident_nash[221800]:            "lv_size": "21470642176",
Oct  1 09:25:05 np0005464214 confident_nash[221800]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 09:25:05 np0005464214 confident_nash[221800]:            "lv_uuid": "wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS",
Oct  1 09:25:05 np0005464214 confident_nash[221800]:            "name": "ceph_lv0",
Oct  1 09:25:05 np0005464214 confident_nash[221800]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  1 09:25:05 np0005464214 confident_nash[221800]:            "tags": {
Oct  1 09:25:05 np0005464214 confident_nash[221800]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  1 09:25:05 np0005464214 confident_nash[221800]:                "ceph.block_uuid": "wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS",
Oct  1 09:25:05 np0005464214 confident_nash[221800]:                "ceph.cephx_lockbox_secret": "",
Oct  1 09:25:05 np0005464214 confident_nash[221800]:                "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 09:25:05 np0005464214 confident_nash[221800]:                "ceph.cluster_name": "ceph",
Oct  1 09:25:05 np0005464214 confident_nash[221800]:                "ceph.crush_device_class": "",
Oct  1 09:25:05 np0005464214 confident_nash[221800]:                "ceph.encrypted": "0",
Oct  1 09:25:05 np0005464214 confident_nash[221800]:                "ceph.osd_fsid": "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982",
Oct  1 09:25:05 np0005464214 confident_nash[221800]:                "ceph.osd_id": "0",
Oct  1 09:25:05 np0005464214 confident_nash[221800]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 09:25:05 np0005464214 confident_nash[221800]:                "ceph.type": "block",
Oct  1 09:25:05 np0005464214 confident_nash[221800]:                "ceph.vdo": "0"
Oct  1 09:25:05 np0005464214 confident_nash[221800]:            },
Oct  1 09:25:05 np0005464214 confident_nash[221800]:            "type": "block",
Oct  1 09:25:05 np0005464214 confident_nash[221800]:            "vg_name": "ceph_vg0"
Oct  1 09:25:05 np0005464214 confident_nash[221800]:        }
Oct  1 09:25:05 np0005464214 confident_nash[221800]:    ],
Oct  1 09:25:05 np0005464214 confident_nash[221800]:    "1": [
Oct  1 09:25:05 np0005464214 confident_nash[221800]:        {
Oct  1 09:25:05 np0005464214 confident_nash[221800]:            "devices": [
Oct  1 09:25:05 np0005464214 confident_nash[221800]:                "/dev/loop4"
Oct  1 09:25:05 np0005464214 confident_nash[221800]:            ],
Oct  1 09:25:05 np0005464214 confident_nash[221800]:            "lv_name": "ceph_lv1",
Oct  1 09:25:05 np0005464214 confident_nash[221800]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  1 09:25:05 np0005464214 confident_nash[221800]:            "lv_size": "21470642176",
Oct  1 09:25:05 np0005464214 confident_nash[221800]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f5852bc7-e830-489a-b8a9-42dfbbe71426,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 09:25:05 np0005464214 confident_nash[221800]:            "lv_uuid": "BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY",
Oct  1 09:25:05 np0005464214 confident_nash[221800]:            "name": "ceph_lv1",
Oct  1 09:25:05 np0005464214 confident_nash[221800]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  1 09:25:05 np0005464214 confident_nash[221800]:            "tags": {
Oct  1 09:25:05 np0005464214 confident_nash[221800]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  1 09:25:05 np0005464214 confident_nash[221800]:                "ceph.block_uuid": "BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY",
Oct  1 09:25:05 np0005464214 confident_nash[221800]:                "ceph.cephx_lockbox_secret": "",
Oct  1 09:25:05 np0005464214 confident_nash[221800]:                "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 09:25:05 np0005464214 confident_nash[221800]:                "ceph.cluster_name": "ceph",
Oct  1 09:25:05 np0005464214 confident_nash[221800]:                "ceph.crush_device_class": "",
Oct  1 09:25:05 np0005464214 confident_nash[221800]:                "ceph.encrypted": "0",
Oct  1 09:25:05 np0005464214 confident_nash[221800]:                "ceph.osd_fsid": "f5852bc7-e830-489a-b8a9-42dfbbe71426",
Oct  1 09:25:05 np0005464214 confident_nash[221800]:                "ceph.osd_id": "1",
Oct  1 09:25:05 np0005464214 confident_nash[221800]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 09:25:05 np0005464214 confident_nash[221800]:                "ceph.type": "block",
Oct  1 09:25:05 np0005464214 confident_nash[221800]:                "ceph.vdo": "0"
Oct  1 09:25:05 np0005464214 confident_nash[221800]:            },
Oct  1 09:25:05 np0005464214 confident_nash[221800]:            "type": "block",
Oct  1 09:25:05 np0005464214 confident_nash[221800]:            "vg_name": "ceph_vg1"
Oct  1 09:25:05 np0005464214 confident_nash[221800]:        }
Oct  1 09:25:05 np0005464214 confident_nash[221800]:    ],
Oct  1 09:25:05 np0005464214 confident_nash[221800]:    "2": [
Oct  1 09:25:05 np0005464214 confident_nash[221800]:        {
Oct  1 09:25:05 np0005464214 confident_nash[221800]:            "devices": [
Oct  1 09:25:05 np0005464214 confident_nash[221800]:                "/dev/loop5"
Oct  1 09:25:05 np0005464214 confident_nash[221800]:            ],
Oct  1 09:25:05 np0005464214 confident_nash[221800]:            "lv_name": "ceph_lv2",
Oct  1 09:25:05 np0005464214 confident_nash[221800]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  1 09:25:05 np0005464214 confident_nash[221800]:            "lv_size": "21470642176",
Oct  1 09:25:05 np0005464214 confident_nash[221800]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c4c937e2-a8a8-47c3-af37-fdedb6fff1f9,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 09:25:05 np0005464214 confident_nash[221800]:            "lv_uuid": "mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK",
Oct  1 09:25:05 np0005464214 confident_nash[221800]:            "name": "ceph_lv2",
Oct  1 09:25:05 np0005464214 confident_nash[221800]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  1 09:25:05 np0005464214 confident_nash[221800]:            "tags": {
Oct  1 09:25:05 np0005464214 confident_nash[221800]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  1 09:25:05 np0005464214 confident_nash[221800]:                "ceph.block_uuid": "mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK",
Oct  1 09:25:05 np0005464214 confident_nash[221800]:                "ceph.cephx_lockbox_secret": "",
Oct  1 09:25:05 np0005464214 confident_nash[221800]:                "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 09:25:05 np0005464214 confident_nash[221800]:                "ceph.cluster_name": "ceph",
Oct  1 09:25:05 np0005464214 confident_nash[221800]:                "ceph.crush_device_class": "",
Oct  1 09:25:05 np0005464214 confident_nash[221800]:                "ceph.encrypted": "0",
Oct  1 09:25:05 np0005464214 confident_nash[221800]:                "ceph.osd_fsid": "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9",
Oct  1 09:25:05 np0005464214 confident_nash[221800]:                "ceph.osd_id": "2",
Oct  1 09:25:05 np0005464214 confident_nash[221800]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 09:25:05 np0005464214 confident_nash[221800]:                "ceph.type": "block",
Oct  1 09:25:05 np0005464214 confident_nash[221800]:                "ceph.vdo": "0"
Oct  1 09:25:05 np0005464214 confident_nash[221800]:            },
Oct  1 09:25:05 np0005464214 confident_nash[221800]:            "type": "block",
Oct  1 09:25:05 np0005464214 confident_nash[221800]:            "vg_name": "ceph_vg2"
Oct  1 09:25:05 np0005464214 confident_nash[221800]:        }
Oct  1 09:25:05 np0005464214 confident_nash[221800]:    ]
Oct  1 09:25:05 np0005464214 confident_nash[221800]: }
Oct  1 09:25:05 np0005464214 systemd[1]: libpod-31a6bec20cb6922eedbb3d5dd46f58341daf3e2ca346b4e310681033f10f12be.scope: Deactivated successfully.
Oct  1 09:25:05 np0005464214 podman[221686]: 2025-10-01 13:25:05.760670144 +0000 UTC m=+1.185283536 container died 31a6bec20cb6922eedbb3d5dd46f58341daf3e2ca346b4e310681033f10f12be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_nash, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 09:25:05 np0005464214 systemd[1]: var-lib-containers-storage-overlay-5f6fdf59351adc23771d510da21f55145ddad56b30d66904253173a47bf1d333-merged.mount: Deactivated successfully.
Oct  1 09:25:05 np0005464214 python3.9[221958]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virt-guest-shutdown.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759325104.6157143-1415-157791203764164/.source.target follow=False _original_basename=virt-guest-shutdown.target checksum=49ca149619c596cbba877418629d2cf8f7b0f5cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 09:25:05 np0005464214 podman[221686]: 2025-10-01 13:25:05.849871611 +0000 UTC m=+1.274485003 container remove 31a6bec20cb6922eedbb3d5dd46f58341daf3e2ca346b4e310681033f10f12be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_nash, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 09:25:05 np0005464214 systemd[1]: libpod-conmon-31a6bec20cb6922eedbb3d5dd46f58341daf3e2ca346b4e310681033f10f12be.scope: Deactivated successfully.
Oct  1 09:25:06 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v614: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:25:06 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:25:06 np0005464214 podman[222269]: 2025-10-01 13:25:06.634846502 +0000 UTC m=+0.093541774 container create 198561a3c1c23ff6d860d514d58f1dd84ab5b1aa689d31dd3d226869570f2392 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_satoshi, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct  1 09:25:06 np0005464214 podman[222269]: 2025-10-01 13:25:06.572365887 +0000 UTC m=+0.031061189 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 09:25:06 np0005464214 systemd[1]: Started libpod-conmon-198561a3c1c23ff6d860d514d58f1dd84ab5b1aa689d31dd3d226869570f2392.scope.
Oct  1 09:25:06 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:25:06 np0005464214 python3.9[222241]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt.target state=restarted daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  1 09:25:06 np0005464214 podman[222269]: 2025-10-01 13:25:06.75008341 +0000 UTC m=+0.208778792 container init 198561a3c1c23ff6d860d514d58f1dd84ab5b1aa689d31dd3d226869570f2392 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_satoshi, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct  1 09:25:06 np0005464214 podman[222269]: 2025-10-01 13:25:06.76135858 +0000 UTC m=+0.220053882 container start 198561a3c1c23ff6d860d514d58f1dd84ab5b1aa689d31dd3d226869570f2392 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_satoshi, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef)
Oct  1 09:25:06 np0005464214 systemd[1]: Reloading.
Oct  1 09:25:06 np0005464214 inspiring_satoshi[222285]: 167 167
Oct  1 09:25:06 np0005464214 podman[222269]: 2025-10-01 13:25:06.784357107 +0000 UTC m=+0.243052389 container attach 198561a3c1c23ff6d860d514d58f1dd84ab5b1aa689d31dd3d226869570f2392 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_satoshi, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Oct  1 09:25:06 np0005464214 podman[222269]: 2025-10-01 13:25:06.786122312 +0000 UTC m=+0.244817574 container died 198561a3c1c23ff6d860d514d58f1dd84ab5b1aa689d31dd3d226869570f2392 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_satoshi, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Oct  1 09:25:06 np0005464214 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  1 09:25:06 np0005464214 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  1 09:25:07 np0005464214 systemd[1]: libpod-198561a3c1c23ff6d860d514d58f1dd84ab5b1aa689d31dd3d226869570f2392.scope: Deactivated successfully.
Oct  1 09:25:07 np0005464214 systemd[1]: var-lib-containers-storage-overlay-31fd494f8f7426187340c67ae2f8b34000f993f3dd2e092a6a5f2b929ae5f32a-merged.mount: Deactivated successfully.
Oct  1 09:25:07 np0005464214 systemd[1]: Reached target edpm_libvirt.target.
Oct  1 09:25:07 np0005464214 podman[222269]: 2025-10-01 13:25:07.262062931 +0000 UTC m=+0.720758223 container remove 198561a3c1c23ff6d860d514d58f1dd84ab5b1aa689d31dd3d226869570f2392 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_satoshi, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 09:25:07 np0005464214 systemd[1]: libpod-conmon-198561a3c1c23ff6d860d514d58f1dd84ab5b1aa689d31dd3d226869570f2392.scope: Deactivated successfully.
Oct  1 09:25:07 np0005464214 podman[222405]: 2025-10-01 13:25:07.49713299 +0000 UTC m=+0.048961856 container create bee4c41a32eb946243966542b5b57e644627696c33b8764087853bc811fc53df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_villani, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True)
Oct  1 09:25:07 np0005464214 systemd[1]: Started libpod-conmon-bee4c41a32eb946243966542b5b57e644627696c33b8764087853bc811fc53df.scope.
Oct  1 09:25:07 np0005464214 podman[222405]: 2025-10-01 13:25:07.47690168 +0000 UTC m=+0.028730596 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 09:25:07 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:25:07 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e1f093f463a2fd5f07dade0837774f2e12c6d31475aead7629fb5af719e9a766/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 09:25:07 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e1f093f463a2fd5f07dade0837774f2e12c6d31475aead7629fb5af719e9a766/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 09:25:07 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e1f093f463a2fd5f07dade0837774f2e12c6d31475aead7629fb5af719e9a766/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 09:25:07 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e1f093f463a2fd5f07dade0837774f2e12c6d31475aead7629fb5af719e9a766/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 09:25:07 np0005464214 podman[222405]: 2025-10-01 13:25:07.642341341 +0000 UTC m=+0.194170227 container init bee4c41a32eb946243966542b5b57e644627696c33b8764087853bc811fc53df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_villani, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct  1 09:25:07 np0005464214 podman[222405]: 2025-10-01 13:25:07.656587514 +0000 UTC m=+0.208416410 container start bee4c41a32eb946243966542b5b57e644627696c33b8764087853bc811fc53df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_villani, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 09:25:07 np0005464214 podman[222405]: 2025-10-01 13:25:07.674162261 +0000 UTC m=+0.225991147 container attach bee4c41a32eb946243966542b5b57e644627696c33b8764087853bc811fc53df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_villani, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 09:25:08 np0005464214 python3.9[222527]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt_guests daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Oct  1 09:25:08 np0005464214 systemd[1]: Reloading.
Oct  1 09:25:08 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v615: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:25:08 np0005464214 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  1 09:25:08 np0005464214 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  1 09:25:08 np0005464214 systemd[1]: Reloading.
Oct  1 09:25:08 np0005464214 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  1 09:25:08 np0005464214 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  1 09:25:08 np0005464214 kind_villani[222470]: {
Oct  1 09:25:08 np0005464214 kind_villani[222470]:    "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982": {
Oct  1 09:25:08 np0005464214 kind_villani[222470]:        "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 09:25:08 np0005464214 kind_villani[222470]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  1 09:25:08 np0005464214 kind_villani[222470]:        "osd_id": 0,
Oct  1 09:25:08 np0005464214 kind_villani[222470]:        "osd_uuid": "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982",
Oct  1 09:25:08 np0005464214 kind_villani[222470]:        "type": "bluestore"
Oct  1 09:25:08 np0005464214 kind_villani[222470]:    },
Oct  1 09:25:08 np0005464214 kind_villani[222470]:    "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9": {
Oct  1 09:25:08 np0005464214 kind_villani[222470]:        "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 09:25:08 np0005464214 kind_villani[222470]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  1 09:25:08 np0005464214 kind_villani[222470]:        "osd_id": 2,
Oct  1 09:25:08 np0005464214 kind_villani[222470]:        "osd_uuid": "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9",
Oct  1 09:25:08 np0005464214 kind_villani[222470]:        "type": "bluestore"
Oct  1 09:25:08 np0005464214 kind_villani[222470]:    },
Oct  1 09:25:08 np0005464214 kind_villani[222470]:    "f5852bc7-e830-489a-b8a9-42dfbbe71426": {
Oct  1 09:25:08 np0005464214 kind_villani[222470]:        "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 09:25:08 np0005464214 kind_villani[222470]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  1 09:25:08 np0005464214 kind_villani[222470]:        "osd_id": 1,
Oct  1 09:25:08 np0005464214 kind_villani[222470]:        "osd_uuid": "f5852bc7-e830-489a-b8a9-42dfbbe71426",
Oct  1 09:25:08 np0005464214 kind_villani[222470]:        "type": "bluestore"
Oct  1 09:25:08 np0005464214 kind_villani[222470]:    }
Oct  1 09:25:08 np0005464214 kind_villani[222470]: }
Oct  1 09:25:08 np0005464214 podman[222405]: 2025-10-01 13:25:08.682547968 +0000 UTC m=+1.234376844 container died bee4c41a32eb946243966542b5b57e644627696c33b8764087853bc811fc53df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_villani, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 09:25:08 np0005464214 systemd[1]: libpod-bee4c41a32eb946243966542b5b57e644627696c33b8764087853bc811fc53df.scope: Deactivated successfully.
Oct  1 09:25:08 np0005464214 systemd[1]: libpod-bee4c41a32eb946243966542b5b57e644627696c33b8764087853bc811fc53df.scope: Consumed 1.015s CPU time.
Oct  1 09:25:08 np0005464214 systemd[1]: var-lib-containers-storage-overlay-e1f093f463a2fd5f07dade0837774f2e12c6d31475aead7629fb5af719e9a766-merged.mount: Deactivated successfully.
Oct  1 09:25:08 np0005464214 podman[222405]: 2025-10-01 13:25:08.880208102 +0000 UTC m=+1.432036968 container remove bee4c41a32eb946243966542b5b57e644627696c33b8764087853bc811fc53df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_villani, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 09:25:08 np0005464214 systemd[1]: libpod-conmon-bee4c41a32eb946243966542b5b57e644627696c33b8764087853bc811fc53df.scope: Deactivated successfully.
Oct  1 09:25:08 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  1 09:25:08 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:25:08 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  1 09:25:08 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:25:08 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev cbd72fd2-90a0-4200-9192-73c4ab65af6c does not exist
Oct  1 09:25:08 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev 3567beec-00f1-428b-bee5-f5b74134a67e does not exist
Oct  1 09:25:09 np0005464214 systemd[1]: session-49.scope: Deactivated successfully.
Oct  1 09:25:09 np0005464214 systemd[1]: session-49.scope: Consumed 3min 44.937s CPU time.
Oct  1 09:25:09 np0005464214 systemd-logind[818]: Session 49 logged out. Waiting for processes to exit.
Oct  1 09:25:09 np0005464214 systemd-logind[818]: Removed session 49.
Oct  1 09:25:09 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:25:09 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:25:10 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v616: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:25:11 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:25:11 np0005464214 podman[222717]: 2025-10-01 13:25:11.567882333 +0000 UTC m=+0.114605169 container health_status 583cde33fa111e3f81ff9a7adf51b7bee43cd123d54d60383b2d474b3b5066ad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_id=ovn_controller, org.label-schema.build-date=20250923, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct  1 09:25:12 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v617: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:25:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:25:12.290 161890 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 09:25:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:25:12.291 161890 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 09:25:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:25:12.292 161890 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 09:25:14 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v618: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:25:14 np0005464214 systemd-logind[818]: New session 50 of user zuul.
Oct  1 09:25:14 np0005464214 systemd[1]: Started Session 50 of User zuul.
Oct  1 09:25:15 np0005464214 podman[222800]: 2025-10-01 13:25:15.540647628 +0000 UTC m=+0.089527158 container health_status dfa509645741d50db256c392f2c7e50ef8a1653f296eb853aefbe3d2ba4746c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20250923, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3)
Oct  1 09:25:16 np0005464214 python3.9[222916]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  1 09:25:16 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v619: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:25:16 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:25:17 np0005464214 python3.9[223072]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/iscsi setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  1 09:25:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 09:25:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 09:25:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 09:25:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 09:25:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 09:25:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 09:25:17 np0005464214 python3.9[223224]: ansible-ansible.builtin.file Invoked with path=/etc/target setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  1 09:25:18 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v620: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:25:18 np0005464214 python3.9[223376]: ansible-ansible.builtin.file Invoked with path=/var/lib/iscsi setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  1 09:25:19 np0005464214 python3.9[223528]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/config-data selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Oct  1 09:25:19 np0005464214 python3.9[223680]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/config-data/ansible-generated/iscsid setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  1 09:25:20 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v621: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:25:20 np0005464214 python3.9[223832]: ansible-ansible.builtin.stat Invoked with path=/lib/systemd/system/iscsid.socket follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  1 09:25:21 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:25:22 np0005464214 python3.9[223986]: ansible-ansible.builtin.systemd Invoked with enabled=False name=iscsid.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  1 09:25:22 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v622: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:25:22 np0005464214 systemd[1]: Reloading.
Oct  1 09:25:22 np0005464214 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  1 09:25:22 np0005464214 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  1 09:25:23 np0005464214 python3.9[224176]: ansible-ansible.builtin.service_facts Invoked
Oct  1 09:25:23 np0005464214 network[224193]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct  1 09:25:23 np0005464214 network[224194]: 'network-scripts' will be removed from distribution in near future.
Oct  1 09:25:23 np0005464214 network[224195]: It is advised to switch to 'NetworkManager' instead for network management.
Oct  1 09:25:24 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v623: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:25:26 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v624: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:25:26 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:25:28 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v625: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:25:28 np0005464214 python3.9[224469]: ansible-ansible.builtin.systemd Invoked with enabled=False name=iscsi-starter.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  1 09:25:29 np0005464214 systemd[1]: Reloading.
Oct  1 09:25:29 np0005464214 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  1 09:25:29 np0005464214 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  1 09:25:30 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v626: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:25:30 np0005464214 python3.9[224656]: ansible-ansible.builtin.stat Invoked with path=/etc/iscsi/.initiator_reset follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  1 09:25:31 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:25:31 np0005464214 python3.9[224808]: ansible-containers.podman.podman_container Invoked with command=/usr/sbin/iscsi-iname detach=False image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22 name=iscsid_config rm=True tty=True executable=podman state=started debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Oct  1 09:25:32 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v627: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:25:32 np0005464214 rsyslogd[1009]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct  1 09:25:33 np0005464214 podman[224820]: 2025-10-01 13:25:33.146246145 +0000 UTC m=+1.631618122 image pull 4c2cf735485aec82560a51e8042a9e65bbe194a07c6812512d6a5e2ed955852b quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22
Oct  1 09:25:33 np0005464214 podman[224880]: 2025-10-01 13:25:33.311555882 +0000 UTC m=+0.050247165 container create 330e51b603aaa4841f4c5c5e54f1132df57ac245b8a5bf7dd93e4bf86281d8a4 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22, name=iscsid_config, org.label-schema.build-date=20250923, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=36bccb96575468ec919301205d8daa2c, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct  1 09:25:33 np0005464214 NetworkManager[45411]: <info>  [1759325133.3680] manager: (podman0): new Bridge device (/org/freedesktop/NetworkManager/Devices/21)
Oct  1 09:25:33 np0005464214 podman[224880]: 2025-10-01 13:25:33.288908987 +0000 UTC m=+0.027600310 image pull 4c2cf735485aec82560a51e8042a9e65bbe194a07c6812512d6a5e2ed955852b quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22
Oct  1 09:25:33 np0005464214 kernel: podman0: port 1(veth0) entered blocking state
Oct  1 09:25:33 np0005464214 kernel: podman0: port 1(veth0) entered disabled state
Oct  1 09:25:33 np0005464214 kernel: veth0: entered allmulticast mode
Oct  1 09:25:33 np0005464214 kernel: veth0: entered promiscuous mode
Oct  1 09:25:33 np0005464214 kernel: podman0: port 1(veth0) entered blocking state
Oct  1 09:25:33 np0005464214 kernel: podman0: port 1(veth0) entered forwarding state
Oct  1 09:25:33 np0005464214 NetworkManager[45411]: <info>  [1759325133.3972] device (veth0): carrier: link connected
Oct  1 09:25:33 np0005464214 NetworkManager[45411]: <info>  [1759325133.3982] manager: (veth0): new Veth device (/org/freedesktop/NetworkManager/Devices/22)
Oct  1 09:25:33 np0005464214 NetworkManager[45411]: <info>  [1759325133.4007] device (podman0): carrier: link connected
Oct  1 09:25:33 np0005464214 systemd-udevd[224904]: Network interface NamePolicy= disabled on kernel command line.
Oct  1 09:25:33 np0005464214 systemd-udevd[224906]: Network interface NamePolicy= disabled on kernel command line.
Oct  1 09:25:33 np0005464214 NetworkManager[45411]: <info>  [1759325133.4316] device (podman0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  1 09:25:33 np0005464214 NetworkManager[45411]: <info>  [1759325133.4333] device (podman0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Oct  1 09:25:33 np0005464214 NetworkManager[45411]: <info>  [1759325133.4347] device (podman0): Activation: starting connection 'podman0' (1ea7e56b-b21f-4308-a7e0-4e8eb0d4c775)
Oct  1 09:25:33 np0005464214 NetworkManager[45411]: <info>  [1759325133.4350] device (podman0): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Oct  1 09:25:33 np0005464214 NetworkManager[45411]: <info>  [1759325133.4355] device (podman0): state change: prepare -> config (reason 'none', managed-type: 'external')
Oct  1 09:25:33 np0005464214 NetworkManager[45411]: <info>  [1759325133.4359] device (podman0): state change: config -> ip-config (reason 'none', managed-type: 'external')
Oct  1 09:25:33 np0005464214 NetworkManager[45411]: <info>  [1759325133.4363] device (podman0): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Oct  1 09:25:33 np0005464214 systemd[1]: Starting Network Manager Script Dispatcher Service...
Oct  1 09:25:33 np0005464214 systemd[1]: Started Network Manager Script Dispatcher Service.
Oct  1 09:25:33 np0005464214 NetworkManager[45411]: <info>  [1759325133.4801] device (podman0): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Oct  1 09:25:33 np0005464214 NetworkManager[45411]: <info>  [1759325133.4804] device (podman0): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Oct  1 09:25:33 np0005464214 NetworkManager[45411]: <info>  [1759325133.4815] device (podman0): Activation: successful, device activated.
Oct  1 09:25:33 np0005464214 systemd[1]: iscsi.service: Unit cannot be reloaded because it is inactive.
Oct  1 09:25:33 np0005464214 systemd[1]: Started libpod-conmon-330e51b603aaa4841f4c5c5e54f1132df57ac245b8a5bf7dd93e4bf86281d8a4.scope.
Oct  1 09:25:33 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:25:33 np0005464214 podman[224880]: 2025-10-01 13:25:33.781308478 +0000 UTC m=+0.519999781 container init 330e51b603aaa4841f4c5c5e54f1132df57ac245b8a5bf7dd93e4bf86281d8a4 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22, name=iscsid_config, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250923, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=36bccb96575468ec919301205d8daa2c, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Oct  1 09:25:33 np0005464214 podman[224880]: 2025-10-01 13:25:33.794812148 +0000 UTC m=+0.533503421 container start 330e51b603aaa4841f4c5c5e54f1132df57ac245b8a5bf7dd93e4bf86281d8a4 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22, name=iscsid_config, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250923, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=36bccb96575468ec919301205d8daa2c, org.label-schema.vendor=CentOS)
Oct  1 09:25:33 np0005464214 podman[224880]: 2025-10-01 13:25:33.798559075 +0000 UTC m=+0.537250378 container attach 330e51b603aaa4841f4c5c5e54f1132df57ac245b8a5bf7dd93e4bf86281d8a4 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22, name=iscsid_config, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=36bccb96575468ec919301205d8daa2c, org.label-schema.build-date=20250923, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 09:25:33 np0005464214 iscsid_config[225037]: iqn.1994-05.com.redhat:d708ef469d6#015
Oct  1 09:25:33 np0005464214 systemd[1]: libpod-330e51b603aaa4841f4c5c5e54f1132df57ac245b8a5bf7dd93e4bf86281d8a4.scope: Deactivated successfully.
Oct  1 09:25:33 np0005464214 podman[224880]: 2025-10-01 13:25:33.801979062 +0000 UTC m=+0.540670375 container died 330e51b603aaa4841f4c5c5e54f1132df57ac245b8a5bf7dd93e4bf86281d8a4 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22, name=iscsid_config, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=36bccb96575468ec919301205d8daa2c, org.label-schema.build-date=20250923)
Oct  1 09:25:33 np0005464214 kernel: podman0: port 1(veth0) entered disabled state
Oct  1 09:25:33 np0005464214 kernel: veth0 (unregistering): left allmulticast mode
Oct  1 09:25:33 np0005464214 kernel: veth0 (unregistering): left promiscuous mode
Oct  1 09:25:33 np0005464214 kernel: podman0: port 1(veth0) entered disabled state
Oct  1 09:25:33 np0005464214 NetworkManager[45411]: <info>  [1759325133.8622] device (podman0): state change: activated -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  1 09:25:34 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v628: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:25:34 np0005464214 systemd[1]: run-netns-netns\x2dfb9a2bda\x2d3d38\x2da405\x2d3108\x2d02651c9856ff.mount: Deactivated successfully.
Oct  1 09:25:34 np0005464214 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-330e51b603aaa4841f4c5c5e54f1132df57ac245b8a5bf7dd93e4bf86281d8a4-userdata-shm.mount: Deactivated successfully.
Oct  1 09:25:34 np0005464214 systemd[1]: var-lib-containers-storage-overlay-f4f1d6217c96bdaf4ecb291d4656f9ef5700e94ff84ce84497a5769e84e77ff1-merged.mount: Deactivated successfully.
Oct  1 09:25:34 np0005464214 podman[224880]: 2025-10-01 13:25:34.340648943 +0000 UTC m=+1.079340226 container remove 330e51b603aaa4841f4c5c5e54f1132df57ac245b8a5bf7dd93e4bf86281d8a4 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22, name=iscsid_config, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=36bccb96575468ec919301205d8daa2c, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20250923, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3)
Oct  1 09:25:34 np0005464214 python3.9[224808]: ansible-containers.podman.podman_container PODMAN-CONTAINER-DEBUG: podman run --name iscsid_config --detach=False --rm --tty=True quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22 /usr/sbin/iscsi-iname
Oct  1 09:25:34 np0005464214 systemd[1]: libpod-conmon-330e51b603aaa4841f4c5c5e54f1132df57ac245b8a5bf7dd93e4bf86281d8a4.scope: Deactivated successfully.
Oct  1 09:25:34 np0005464214 python3.9[224808]: ansible-containers.podman.podman_container PODMAN-CONTAINER-DEBUG: Error generating systemd: #012DEPRECATED command:#012It is recommended to use Quadlets for running containers and pods under systemd.#012#012Please refer to podman-systemd.unit(5) for details.#012Error: iscsid_config does not refer to a container or pod: no pod with name or ID iscsid_config found: no such pod: no container with name or ID "iscsid_config" found: no such container
Oct  1 09:25:35 np0005464214 python3.9[225280]: ansible-ansible.legacy.stat Invoked with path=/etc/iscsi/initiatorname.iscsi follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 09:25:36 np0005464214 python3.9[225403]: ansible-ansible.legacy.copy Invoked with dest=/etc/iscsi/initiatorname.iscsi mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759325134.6887205-119-248016027975968/.source.iscsi _original_basename=.z437gr7x follow=False checksum=cf00cb9257c28bd43e3d04701f5a37e8933c1dfb backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 09:25:36 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v629: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:25:36 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:25:36 np0005464214 python3.9[225555]: ansible-ansible.builtin.file Invoked with mode=0600 path=/etc/iscsi/.initiator_reset state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 09:25:37 np0005464214 python3.9[225705]: ansible-ansible.builtin.stat Invoked with path=/etc/iscsi/iscsid.conf follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  1 09:25:38 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v630: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:25:38 np0005464214 python3.9[225859]: ansible-ansible.builtin.lineinfile Invoked with insertafter=^#node.session.auth.chap.algs line=node.session.auth.chap_algs = SHA3-256,SHA256,SHA1,MD5 path=/etc/iscsi/iscsid.conf regexp=^node.session.auth.chap_algs state=present backrefs=False create=False backup=False firstmatch=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 09:25:39 np0005464214 python3.9[226011]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  1 09:25:40 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v631: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:25:40 np0005464214 python3.9[226163]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 09:25:40 np0005464214 python3.9[226241]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  1 09:25:41 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:25:41 np0005464214 python3.9[226393]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 09:25:41 np0005464214 podman[226443]: 2025-10-01 13:25:41.903351161 +0000 UTC m=+0.096751544 container health_status 583cde33fa111e3f81ff9a7adf51b7bee43cd123d54d60383b2d474b3b5066ad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250923, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct  1 09:25:42 np0005464214 python3.9[226489]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  1 09:25:42 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v632: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:25:42 np0005464214 python3.9[226649]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 09:25:43 np0005464214 python3.9[226801]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 09:25:43 np0005464214 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Oct  1 09:25:44 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v633: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:25:44 np0005464214 python3.9[226881]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 09:25:45 np0005464214 python3.9[227033]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 09:25:45 np0005464214 python3.9[227111]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 09:25:46 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v634: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:25:46 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:25:46 np0005464214 podman[227235]: 2025-10-01 13:25:46.318647403 +0000 UTC m=+0.075952416 container health_status dfa509645741d50db256c392f2c7e50ef8a1653f296eb853aefbe3d2ba4746c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20250923, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct  1 09:25:46 np0005464214 python3.9[227284]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  1 09:25:46 np0005464214 systemd[1]: Reloading.
Oct  1 09:25:46 np0005464214 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  1 09:25:46 np0005464214 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  1 09:25:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] Optimize plan auto_2025-10-01_13:25:47
Oct  1 09:25:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  1 09:25:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] do_upmap
Oct  1 09:25:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'backups', '.rgw.root', 'default.rgw.meta', '.mgr', 'cephfs.cephfs.data', 'volumes', 'vms', 'default.rgw.control', 'images', 'default.rgw.log']
Oct  1 09:25:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] prepared 0/10 changes
Oct  1 09:25:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 09:25:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 09:25:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 09:25:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 09:25:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 09:25:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 09:25:47 np0005464214 python3.9[227473]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 09:25:47 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  1 09:25:47 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  1 09:25:47 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  1 09:25:47 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  1 09:25:47 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  1 09:25:47 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  1 09:25:47 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  1 09:25:47 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  1 09:25:47 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  1 09:25:47 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  1 09:25:48 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v635: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:25:48 np0005464214 python3.9[227551]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 09:25:49 np0005464214 python3.9[227703]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 09:25:49 np0005464214 python3.9[227781]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 09:25:50 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v636: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:25:50 np0005464214 python3.9[227933]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  1 09:25:50 np0005464214 systemd[1]: Reloading.
Oct  1 09:25:50 np0005464214 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  1 09:25:50 np0005464214 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  1 09:25:51 np0005464214 systemd[1]: Starting Create netns directory...
Oct  1 09:25:51 np0005464214 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Oct  1 09:25:51 np0005464214 systemd[1]: netns-placeholder.service: Deactivated successfully.
Oct  1 09:25:51 np0005464214 systemd[1]: Finished Create netns directory.
Oct  1 09:25:51 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:25:52 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v637: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:25:52 np0005464214 python3.9[228126]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  1 09:25:53 np0005464214 python3.9[228278]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/iscsid/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 09:25:53 np0005464214 python3.9[228401]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/iscsid/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759325152.485664-273-96854921709342/.source _original_basename=healthcheck follow=False checksum=2e1237e7fe015c809b173c52e24cfb87132f4344 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct  1 09:25:54 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v638: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:25:54 np0005464214 python3.9[228553]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  1 09:25:55 np0005464214 python3.9[228705]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/iscsid.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 09:25:56 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v639: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:25:56 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:25:56 np0005464214 python3.9[228828]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/iscsid.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1759325154.963235-298-221307292650473/.source.json _original_basename=.uya_gf8w follow=False checksum=80e4f97460718c7e5c66b21ef8b846eba0e0dbc8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 09:25:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] _maybe_adjust
Oct  1 09:25:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:25:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  1 09:25:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:25:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 09:25:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:25:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 09:25:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:25:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 09:25:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:25:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 09:25:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:25:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct  1 09:25:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:25:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 09:25:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:25:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  1 09:25:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:25:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  1 09:25:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:25:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 09:25:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:25:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  1 09:25:57 np0005464214 python3.9[228980]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/iscsid state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 09:25:58 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v640: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:25:59 np0005464214 python3.9[229409]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/iscsid config_pattern=*.json debug=False
Oct  1 09:26:00 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v641: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:26:00 np0005464214 python3.9[229561]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Oct  1 09:26:01 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:26:01 np0005464214 ceph-mon[74802]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #30. Immutable memtables: 0.
Oct  1 09:26:01 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:26:01.318792) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  1 09:26:01 np0005464214 ceph-mon[74802]: rocksdb: [db/flush_job.cc:856] [default] [JOB 11] Flushing memtable with next log file: 30
Oct  1 09:26:01 np0005464214 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759325161318848, "job": 11, "event": "flush_started", "num_memtables": 1, "num_entries": 1762, "num_deletes": 250, "total_data_size": 2993291, "memory_usage": 3037544, "flush_reason": "Manual Compaction"}
Oct  1 09:26:01 np0005464214 ceph-mon[74802]: rocksdb: [db/flush_job.cc:885] [default] [JOB 11] Level-0 flush table #31: started
Oct  1 09:26:01 np0005464214 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759325161357403, "cf_name": "default", "job": 11, "event": "table_file_creation", "file_number": 31, "file_size": 1683894, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 11837, "largest_seqno": 13598, "table_properties": {"data_size": 1678153, "index_size": 2880, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1797, "raw_key_size": 14308, "raw_average_key_size": 20, "raw_value_size": 1665473, "raw_average_value_size": 2335, "num_data_blocks": 133, "num_entries": 713, "num_filter_entries": 713, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759324958, "oldest_key_time": 1759324958, "file_creation_time": 1759325161, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e1fb0bd7-c6bf-40f2-8bfb-ffd039289f42", "db_session_id": "NJZTWL88H5HSB4Q4NEC9", "orig_file_number": 31, "seqno_to_time_mapping": "N/A"}}
Oct  1 09:26:01 np0005464214 ceph-mon[74802]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 11] Flush lasted 38658 microseconds, and 5984 cpu microseconds.
Oct  1 09:26:01 np0005464214 ceph-mon[74802]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  1 09:26:01 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:26:01.357453) [db/flush_job.cc:967] [default] [JOB 11] Level-0 flush table #31: 1683894 bytes OK
Oct  1 09:26:01 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:26:01.357472) [db/memtable_list.cc:519] [default] Level-0 commit table #31 started
Oct  1 09:26:01 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:26:01.369643) [db/memtable_list.cc:722] [default] Level-0 commit table #31: memtable #1 done
Oct  1 09:26:01 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:26:01.369667) EVENT_LOG_v1 {"time_micros": 1759325161369660, "job": 11, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  1 09:26:01 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:26:01.369692) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  1 09:26:01 np0005464214 ceph-mon[74802]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 11] Try to delete WAL files size 2985819, prev total WAL file size 2985819, number of live WAL files 2.
Oct  1 09:26:01 np0005464214 ceph-mon[74802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000027.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  1 09:26:01 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:26:01.370678) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D67727374617400323532' seq:72057594037927935, type:22 .. '6D67727374617400353033' seq:0, type:0; will stop at (end)
Oct  1 09:26:01 np0005464214 ceph-mon[74802]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 12] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  1 09:26:01 np0005464214 ceph-mon[74802]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 11 Base level 0, inputs: [31(1644KB)], [29(7836KB)]
Oct  1 09:26:01 np0005464214 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759325161370765, "job": 12, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [31], "files_L6": [29], "score": -1, "input_data_size": 9708428, "oldest_snapshot_seqno": -1}
Oct  1 09:26:01 np0005464214 ceph-mon[74802]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 12] Generated table #32: 4031 keys, 7670468 bytes, temperature: kUnknown
Oct  1 09:26:01 np0005464214 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759325161436474, "cf_name": "default", "job": 12, "event": "table_file_creation", "file_number": 32, "file_size": 7670468, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7641362, "index_size": 17924, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10117, "raw_key_size": 95923, "raw_average_key_size": 23, "raw_value_size": 7566543, "raw_average_value_size": 1877, "num_data_blocks": 779, "num_entries": 4031, "num_filter_entries": 4031, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759324077, "oldest_key_time": 0, "file_creation_time": 1759325161, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e1fb0bd7-c6bf-40f2-8bfb-ffd039289f42", "db_session_id": "NJZTWL88H5HSB4Q4NEC9", "orig_file_number": 32, "seqno_to_time_mapping": "N/A"}}
Oct  1 09:26:01 np0005464214 ceph-mon[74802]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  1 09:26:01 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:26:01.436936) [db/compaction/compaction_job.cc:1663] [default] [JOB 12] Compacted 1@0 + 1@6 files to L6 => 7670468 bytes
Oct  1 09:26:01 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:26:01.440850) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 147.4 rd, 116.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.6, 7.7 +0.0 blob) out(7.3 +0.0 blob), read-write-amplify(10.3) write-amplify(4.6) OK, records in: 4447, records dropped: 416 output_compression: NoCompression
Oct  1 09:26:01 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:26:01.440893) EVENT_LOG_v1 {"time_micros": 1759325161440874, "job": 12, "event": "compaction_finished", "compaction_time_micros": 65876, "compaction_time_cpu_micros": 19986, "output_level": 6, "num_output_files": 1, "total_output_size": 7670468, "num_input_records": 4447, "num_output_records": 4031, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  1 09:26:01 np0005464214 ceph-mon[74802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000031.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  1 09:26:01 np0005464214 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759325161441662, "job": 12, "event": "table_file_deletion", "file_number": 31}
Oct  1 09:26:01 np0005464214 ceph-mon[74802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000029.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  1 09:26:01 np0005464214 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759325161444628, "job": 12, "event": "table_file_deletion", "file_number": 29}
Oct  1 09:26:01 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:26:01.370599) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 09:26:01 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:26:01.444764) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 09:26:01 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:26:01.444771) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 09:26:01 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:26:01.444773) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 09:26:01 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:26:01.444775) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 09:26:01 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:26:01.444778) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 09:26:02 np0005464214 python3.9[229715]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Oct  1 09:26:02 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v642: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:26:03 np0005464214 python3[229894]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/iscsid config_id=iscsid config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Oct  1 09:26:04 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v643: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:26:04 np0005464214 podman[229932]: 2025-10-01 13:26:04.145610503 +0000 UTC m=+0.024142693 image pull 4c2cf735485aec82560a51e8042a9e65bbe194a07c6812512d6a5e2ed955852b quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22
Oct  1 09:26:04 np0005464214 podman[229932]: 2025-10-01 13:26:04.297534644 +0000 UTC m=+0.176066854 container create c13c85560319b246b9406aed1b6b3015b2c424d3ffff37d0301b6634f5976b0d (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22, name=iscsid, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, config_id=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=iscsid, tcib_build_tag=36bccb96575468ec919301205d8daa2c, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20250923)
Oct  1 09:26:04 np0005464214 python3[229894]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name iscsid --conmon-pidfile /run/iscsid.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --healthcheck-command /openstack/healthcheck --label config_id=iscsid --label container_name=iscsid --label managed_by=edpm_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro --volume /dev:/dev --volume /run:/run --volume /sys:/sys --volume /lib/modules:/lib/modules:ro --volume /etc/iscsi:/etc/iscsi:z --volume /etc/target:/etc/target:z --volume /var/lib/iscsi:/var/lib/iscsi:z --volume /var/lib/openstack/healthchecks/iscsid:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22
Oct  1 09:26:05 np0005464214 python3.9[230123]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  1 09:26:06 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v644: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:26:06 np0005464214 python3.9[230277]: ansible-file Invoked with path=/etc/systemd/system/edpm_iscsid.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 09:26:06 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:26:06 np0005464214 python3.9[230353]: ansible-stat Invoked with path=/etc/systemd/system/edpm_iscsid_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  1 09:26:07 np0005464214 python3.9[230504]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759325166.78334-386-229266066832100/source dest=/etc/systemd/system/edpm_iscsid.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 09:26:08 np0005464214 python3.9[230580]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct  1 09:26:08 np0005464214 systemd[1]: Reloading.
Oct  1 09:26:08 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v645: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:26:08 np0005464214 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  1 09:26:08 np0005464214 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  1 09:26:09 np0005464214 python3.9[230691]: ansible-systemd Invoked with state=restarted name=edpm_iscsid.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  1 09:26:09 np0005464214 systemd[1]: Reloading.
Oct  1 09:26:09 np0005464214 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  1 09:26:09 np0005464214 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  1 09:26:09 np0005464214 systemd[1]: Starting iscsid container...
Oct  1 09:26:09 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:26:09 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d5ef4560b82efa07bc6c8fef785be160e88f15c11a0780685435a1bc6e40f6db/merged/etc/target supports timestamps until 2038 (0x7fffffff)
Oct  1 09:26:09 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d5ef4560b82efa07bc6c8fef785be160e88f15c11a0780685435a1bc6e40f6db/merged/etc/iscsi supports timestamps until 2038 (0x7fffffff)
Oct  1 09:26:09 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d5ef4560b82efa07bc6c8fef785be160e88f15c11a0780685435a1bc6e40f6db/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Oct  1 09:26:09 np0005464214 systemd[1]: Started /usr/bin/podman healthcheck run c13c85560319b246b9406aed1b6b3015b2c424d3ffff37d0301b6634f5976b0d.
Oct  1 09:26:09 np0005464214 podman[230756]: 2025-10-01 13:26:09.689265237 +0000 UTC m=+0.163721179 container init c13c85560319b246b9406aed1b6b3015b2c424d3ffff37d0301b6634f5976b0d (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22, name=iscsid, org.label-schema.vendor=CentOS, container_name=iscsid, io.buildah.version=1.41.3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20250923, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team)
Oct  1 09:26:09 np0005464214 iscsid[230799]: + sudo -E kolla_set_configs
Oct  1 09:26:09 np0005464214 podman[230756]: 2025-10-01 13:26:09.718185807 +0000 UTC m=+0.192641749 container start c13c85560319b246b9406aed1b6b3015b2c424d3ffff37d0301b6634f5976b0d (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22, name=iscsid, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250923, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=iscsid, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team)
Oct  1 09:26:09 np0005464214 podman[230756]: iscsid
Oct  1 09:26:09 np0005464214 systemd[1]: Started iscsid container.
Oct  1 09:26:09 np0005464214 systemd[1]: Created slice User Slice of UID 0.
Oct  1 09:26:09 np0005464214 systemd[1]: Starting User Runtime Directory /run/user/0...
Oct  1 09:26:09 np0005464214 systemd[1]: Finished User Runtime Directory /run/user/0.
Oct  1 09:26:09 np0005464214 systemd[1]: Starting User Manager for UID 0...
Oct  1 09:26:09 np0005464214 podman[230834]: 2025-10-01 13:26:09.852889301 +0000 UTC m=+0.119247894 container health_status c13c85560319b246b9406aed1b6b3015b2c424d3ffff37d0301b6634f5976b0d (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22, name=iscsid, health_status=starting, health_failing_streak=1, health_log=, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, org.label-schema.build-date=20250923, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true, container_name=iscsid, org.label-schema.schema-version=1.0)
Oct  1 09:26:09 np0005464214 systemd[1]: c13c85560319b246b9406aed1b6b3015b2c424d3ffff37d0301b6634f5976b0d-64b353af6c8de46d.service: Main process exited, code=exited, status=1/FAILURE
Oct  1 09:26:09 np0005464214 systemd[1]: c13c85560319b246b9406aed1b6b3015b2c424d3ffff37d0301b6634f5976b0d-64b353af6c8de46d.service: Failed with result 'exit-code'.
Oct  1 09:26:09 np0005464214 systemd[230898]: Queued start job for default target Main User Target.
Oct  1 09:26:10 np0005464214 systemd[230898]: Created slice User Application Slice.
Oct  1 09:26:10 np0005464214 systemd[230898]: Mark boot as successful after the user session has run 2 minutes was skipped because of an unmet condition check (ConditionUser=!@system).
Oct  1 09:26:10 np0005464214 systemd[230898]: Started Daily Cleanup of User's Temporary Directories.
Oct  1 09:26:10 np0005464214 systemd[230898]: Reached target Paths.
Oct  1 09:26:10 np0005464214 systemd[230898]: Reached target Timers.
Oct  1 09:26:10 np0005464214 systemd[230898]: Starting D-Bus User Message Bus Socket...
Oct  1 09:26:10 np0005464214 systemd[230898]: Starting Create User's Volatile Files and Directories...
Oct  1 09:26:10 np0005464214 systemd[230898]: Finished Create User's Volatile Files and Directories.
Oct  1 09:26:10 np0005464214 systemd[230898]: Listening on D-Bus User Message Bus Socket.
Oct  1 09:26:10 np0005464214 systemd[230898]: Reached target Sockets.
Oct  1 09:26:10 np0005464214 systemd[230898]: Reached target Basic System.
Oct  1 09:26:10 np0005464214 systemd[230898]: Reached target Main User Target.
Oct  1 09:26:10 np0005464214 systemd[230898]: Startup finished in 167ms.
Oct  1 09:26:10 np0005464214 systemd[1]: Started User Manager for UID 0.
Oct  1 09:26:10 np0005464214 systemd[1]: Started Session c3 of User root.
Oct  1 09:26:10 np0005464214 iscsid[230799]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Oct  1 09:26:10 np0005464214 iscsid[230799]: INFO:__main__:Validating config file
Oct  1 09:26:10 np0005464214 iscsid[230799]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Oct  1 09:26:10 np0005464214 iscsid[230799]: INFO:__main__:Writing out command to execute
Oct  1 09:26:10 np0005464214 systemd[1]: session-c3.scope: Deactivated successfully.
Oct  1 09:26:10 np0005464214 iscsid[230799]: ++ cat /run_command
Oct  1 09:26:10 np0005464214 iscsid[230799]: + CMD='/usr/sbin/iscsid -f'
Oct  1 09:26:10 np0005464214 iscsid[230799]: + ARGS=
Oct  1 09:26:10 np0005464214 iscsid[230799]: + sudo kolla_copy_cacerts
Oct  1 09:26:10 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v646: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:26:10 np0005464214 systemd[1]: Started Session c4 of User root.
Oct  1 09:26:10 np0005464214 systemd[1]: session-c4.scope: Deactivated successfully.
Oct  1 09:26:10 np0005464214 iscsid[230799]: + [[ ! -n '' ]]
Oct  1 09:26:10 np0005464214 iscsid[230799]: + . kolla_extend_start
Oct  1 09:26:10 np0005464214 iscsid[230799]: ++ [[ ! -f /etc/iscsi/initiatorname.iscsi ]]
Oct  1 09:26:10 np0005464214 iscsid[230799]: + echo 'Running command: '\''/usr/sbin/iscsid -f'\'''
Oct  1 09:26:10 np0005464214 iscsid[230799]: Running command: '/usr/sbin/iscsid -f'
Oct  1 09:26:10 np0005464214 iscsid[230799]: + umask 0022
Oct  1 09:26:10 np0005464214 iscsid[230799]: + exec /usr/sbin/iscsid -f
Oct  1 09:26:10 np0005464214 kernel: Loading iSCSI transport class v2.0-870.
Oct  1 09:26:10 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  1 09:26:10 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  1 09:26:10 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  1 09:26:10 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  1 09:26:10 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  1 09:26:10 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:26:10 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev 9b841195-ffb1-47f6-a587-2d92df367a3c does not exist
Oct  1 09:26:10 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev 92ff68c5-e5da-4017-b376-fe70843f9204 does not exist
Oct  1 09:26:10 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev 8724994f-f4f1-4803-9e1b-44e8bd6fb9b1 does not exist
Oct  1 09:26:10 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  1 09:26:10 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  1 09:26:10 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  1 09:26:10 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  1 09:26:10 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  1 09:26:10 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  1 09:26:10 np0005464214 python3.9[231083]: ansible-ansible.builtin.stat Invoked with path=/etc/iscsi/.iscsid_restart_required follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  1 09:26:10 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  1 09:26:10 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:26:10 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  1 09:26:11 np0005464214 podman[231372]: 2025-10-01 13:26:11.181122006 +0000 UTC m=+0.071832947 container create 7192f6c17b5d667b976caa6b88e4029ca4c891f05750f97cc268752ddc86aacb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_boyd, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Oct  1 09:26:11 np0005464214 systemd[1]: Started libpod-conmon-7192f6c17b5d667b976caa6b88e4029ca4c891f05750f97cc268752ddc86aacb.scope.
Oct  1 09:26:11 np0005464214 podman[231372]: 2025-10-01 13:26:11.151402851 +0000 UTC m=+0.042113802 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 09:26:11 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:26:11 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:26:11 np0005464214 python3.9[231385]: ansible-ansible.builtin.file Invoked with path=/etc/iscsi/.iscsid_restart_required state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 09:26:11 np0005464214 podman[231372]: 2025-10-01 13:26:11.433626888 +0000 UTC m=+0.324337839 container init 7192f6c17b5d667b976caa6b88e4029ca4c891f05750f97cc268752ddc86aacb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_boyd, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 09:26:11 np0005464214 podman[231372]: 2025-10-01 13:26:11.44459613 +0000 UTC m=+0.335307061 container start 7192f6c17b5d667b976caa6b88e4029ca4c891f05750f97cc268752ddc86aacb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_boyd, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Oct  1 09:26:11 np0005464214 inspiring_boyd[231394]: 167 167
Oct  1 09:26:11 np0005464214 systemd[1]: libpod-7192f6c17b5d667b976caa6b88e4029ca4c891f05750f97cc268752ddc86aacb.scope: Deactivated successfully.
Oct  1 09:26:11 np0005464214 podman[231372]: 2025-10-01 13:26:11.52457478 +0000 UTC m=+0.415285741 container attach 7192f6c17b5d667b976caa6b88e4029ca4c891f05750f97cc268752ddc86aacb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_boyd, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Oct  1 09:26:11 np0005464214 podman[231372]: 2025-10-01 13:26:11.525272902 +0000 UTC m=+0.415983833 container died 7192f6c17b5d667b976caa6b88e4029ca4c891f05750f97cc268752ddc86aacb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_boyd, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default)
Oct  1 09:26:11 np0005464214 systemd[1]: var-lib-containers-storage-overlay-a247247b4c1421999aaefe6332a1e47e5e647520daed3d49e37d6dbc55c705e7-merged.mount: Deactivated successfully.
Oct  1 09:26:12 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v647: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:26:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:26:12.292 161890 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 09:26:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:26:12.293 161890 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 09:26:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:26:12.293 161890 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 09:26:12 np0005464214 python3.9[231574]: ansible-ansible.builtin.service_facts Invoked
Oct  1 09:26:12 np0005464214 podman[231372]: 2025-10-01 13:26:12.487383807 +0000 UTC m=+1.378094758 container remove 7192f6c17b5d667b976caa6b88e4029ca4c891f05750f97cc268752ddc86aacb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_boyd, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Oct  1 09:26:12 np0005464214 network[231592]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct  1 09:26:12 np0005464214 network[231594]: 'network-scripts' will be removed from distribution in near future.
Oct  1 09:26:12 np0005464214 network[231595]: It is advised to switch to 'NetworkManager' instead for network management.
Oct  1 09:26:12 np0005464214 systemd[1]: libpod-conmon-7192f6c17b5d667b976caa6b88e4029ca4c891f05750f97cc268752ddc86aacb.scope: Deactivated successfully.
Oct  1 09:26:12 np0005464214 podman[231510]: 2025-10-01 13:26:12.619981356 +0000 UTC m=+0.652460886 container health_status 583cde33fa111e3f81ff9a7adf51b7bee43cd123d54d60383b2d474b3b5066ad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20250923, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct  1 09:26:12 np0005464214 podman[231619]: 2025-10-01 13:26:12.683164013 +0000 UTC m=+0.031650756 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 09:26:12 np0005464214 podman[231619]: 2025-10-01 13:26:12.82888852 +0000 UTC m=+0.177375283 container create 7d161ceb457d7b2a2965cc30e968027a26993b582c8aaac55a730042f93f771e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_kalam, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default)
Oct  1 09:26:13 np0005464214 systemd[1]: Started libpod-conmon-7d161ceb457d7b2a2965cc30e968027a26993b582c8aaac55a730042f93f771e.scope.
Oct  1 09:26:13 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:26:13 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ef4461db196969a292a4cb0a79b07185073fcb904e2c5327347bcf42cafe612/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 09:26:13 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ef4461db196969a292a4cb0a79b07185073fcb904e2c5327347bcf42cafe612/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 09:26:13 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ef4461db196969a292a4cb0a79b07185073fcb904e2c5327347bcf42cafe612/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 09:26:13 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ef4461db196969a292a4cb0a79b07185073fcb904e2c5327347bcf42cafe612/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 09:26:13 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ef4461db196969a292a4cb0a79b07185073fcb904e2c5327347bcf42cafe612/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  1 09:26:13 np0005464214 podman[231619]: 2025-10-01 13:26:13.527558154 +0000 UTC m=+0.876044917 container init 7d161ceb457d7b2a2965cc30e968027a26993b582c8aaac55a730042f93f771e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_kalam, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct  1 09:26:13 np0005464214 podman[231619]: 2025-10-01 13:26:13.539484394 +0000 UTC m=+0.887971127 container start 7d161ceb457d7b2a2965cc30e968027a26993b582c8aaac55a730042f93f771e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_kalam, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct  1 09:26:13 np0005464214 podman[231619]: 2025-10-01 13:26:13.564564676 +0000 UTC m=+0.913051449 container attach 7d161ceb457d7b2a2965cc30e968027a26993b582c8aaac55a730042f93f771e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_kalam, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct  1 09:26:14 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v648: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:26:14 np0005464214 sharp_kalam[231637]: --> passed data devices: 0 physical, 3 LVM
Oct  1 09:26:14 np0005464214 sharp_kalam[231637]: --> relative data size: 1.0
Oct  1 09:26:14 np0005464214 sharp_kalam[231637]: --> All data devices are unavailable
Oct  1 09:26:14 np0005464214 podman[231619]: 2025-10-01 13:26:14.789481974 +0000 UTC m=+2.137968707 container died 7d161ceb457d7b2a2965cc30e968027a26993b582c8aaac55a730042f93f771e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_kalam, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Oct  1 09:26:14 np0005464214 systemd[1]: libpod-7d161ceb457d7b2a2965cc30e968027a26993b582c8aaac55a730042f93f771e.scope: Deactivated successfully.
Oct  1 09:26:14 np0005464214 systemd[1]: libpod-7d161ceb457d7b2a2965cc30e968027a26993b582c8aaac55a730042f93f771e.scope: Consumed 1.187s CPU time.
Oct  1 09:26:14 np0005464214 systemd[1]: var-lib-containers-storage-overlay-7ef4461db196969a292a4cb0a79b07185073fcb904e2c5327347bcf42cafe612-merged.mount: Deactivated successfully.
Oct  1 09:26:15 np0005464214 podman[231619]: 2025-10-01 13:26:15.067863842 +0000 UTC m=+2.416350575 container remove 7d161ceb457d7b2a2965cc30e968027a26993b582c8aaac55a730042f93f771e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_kalam, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 09:26:15 np0005464214 systemd[1]: libpod-conmon-7d161ceb457d7b2a2965cc30e968027a26993b582c8aaac55a730042f93f771e.scope: Deactivated successfully.
Oct  1 09:26:15 np0005464214 podman[231882]: 2025-10-01 13:26:15.842847891 +0000 UTC m=+0.097372813 container create 52536ebcdfdf4bb05eadce262ae85390ae9dcbb463d4b552c9e8f24c7f6a7e9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_saha, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Oct  1 09:26:15 np0005464214 podman[231882]: 2025-10-01 13:26:15.767558177 +0000 UTC m=+0.022083109 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 09:26:16 np0005464214 systemd[1]: Started libpod-conmon-52536ebcdfdf4bb05eadce262ae85390ae9dcbb463d4b552c9e8f24c7f6a7e9e.scope.
Oct  1 09:26:16 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:26:16 np0005464214 podman[231882]: 2025-10-01 13:26:16.129207586 +0000 UTC m=+0.383732578 container init 52536ebcdfdf4bb05eadce262ae85390ae9dcbb463d4b552c9e8f24c7f6a7e9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_saha, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 09:26:16 np0005464214 podman[231882]: 2025-10-01 13:26:16.140315582 +0000 UTC m=+0.394840524 container start 52536ebcdfdf4bb05eadce262ae85390ae9dcbb463d4b552c9e8f24c7f6a7e9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_saha, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 09:26:16 np0005464214 relaxed_saha[231898]: 167 167
Oct  1 09:26:16 np0005464214 systemd[1]: libpod-52536ebcdfdf4bb05eadce262ae85390ae9dcbb463d4b552c9e8f24c7f6a7e9e.scope: Deactivated successfully.
Oct  1 09:26:16 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v649: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:26:16 np0005464214 podman[231882]: 2025-10-01 13:26:16.211664474 +0000 UTC m=+0.466189416 container attach 52536ebcdfdf4bb05eadce262ae85390ae9dcbb463d4b552c9e8f24c7f6a7e9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_saha, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Oct  1 09:26:16 np0005464214 podman[231882]: 2025-10-01 13:26:16.212158439 +0000 UTC m=+0.466683351 container died 52536ebcdfdf4bb05eadce262ae85390ae9dcbb463d4b552c9e8f24c7f6a7e9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_saha, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 09:26:16 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:26:16 np0005464214 systemd[1]: var-lib-containers-storage-overlay-2ecbdf2071b0a683fa0c362aa8cdedc084cd7d5c0556e6b7b5b420486708ed14-merged.mount: Deactivated successfully.
Oct  1 09:26:16 np0005464214 podman[231882]: 2025-10-01 13:26:16.681386499 +0000 UTC m=+0.935911421 container remove 52536ebcdfdf4bb05eadce262ae85390ae9dcbb463d4b552c9e8f24c7f6a7e9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_saha, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct  1 09:26:16 np0005464214 systemd[1]: libpod-conmon-52536ebcdfdf4bb05eadce262ae85390ae9dcbb463d4b552c9e8f24c7f6a7e9e.scope: Deactivated successfully.
Oct  1 09:26:16 np0005464214 podman[231920]: 2025-10-01 13:26:16.786103309 +0000 UTC m=+0.418356687 container health_status dfa509645741d50db256c392f2c7e50ef8a1653f296eb853aefbe3d2ba4746c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=36bccb96575468ec919301205d8daa2c, io.buildah.version=1.41.3, org.label-schema.build-date=20250923, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 09:26:16 np0005464214 podman[231966]: 2025-10-01 13:26:16.902168532 +0000 UTC m=+0.039874201 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 09:26:17 np0005464214 podman[231966]: 2025-10-01 13:26:17.022237631 +0000 UTC m=+0.159943240 container create 693d0a89da7dee0c729fc58ad0cf7ef5c5f156c4271dd9f99f46c5ee17849b1f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_mcclintock, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct  1 09:26:17 np0005464214 systemd[1]: Started libpod-conmon-693d0a89da7dee0c729fc58ad0cf7ef5c5f156c4271dd9f99f46c5ee17849b1f.scope.
Oct  1 09:26:17 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:26:17 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c2e87fb828dfeb4f3a15924e22ce5cbf38564ff393199633400d8b1e7313391/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 09:26:17 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c2e87fb828dfeb4f3a15924e22ce5cbf38564ff393199633400d8b1e7313391/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 09:26:17 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c2e87fb828dfeb4f3a15924e22ce5cbf38564ff393199633400d8b1e7313391/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 09:26:17 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c2e87fb828dfeb4f3a15924e22ce5cbf38564ff393199633400d8b1e7313391/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 09:26:17 np0005464214 podman[231966]: 2025-10-01 13:26:17.192801562 +0000 UTC m=+0.330507221 container init 693d0a89da7dee0c729fc58ad0cf7ef5c5f156c4271dd9f99f46c5ee17849b1f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_mcclintock, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct  1 09:26:17 np0005464214 podman[231966]: 2025-10-01 13:26:17.203871297 +0000 UTC m=+0.341576876 container start 693d0a89da7dee0c729fc58ad0cf7ef5c5f156c4271dd9f99f46c5ee17849b1f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_mcclintock, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 09:26:17 np0005464214 podman[231966]: 2025-10-01 13:26:17.223084605 +0000 UTC m=+0.360790204 container attach 693d0a89da7dee0c729fc58ad0cf7ef5c5f156c4271dd9f99f46c5ee17849b1f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_mcclintock, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  1 09:26:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 09:26:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 09:26:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 09:26:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 09:26:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 09:26:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 09:26:18 np0005464214 vigorous_mcclintock[231993]: {
Oct  1 09:26:18 np0005464214 vigorous_mcclintock[231993]:    "0": [
Oct  1 09:26:18 np0005464214 vigorous_mcclintock[231993]:        {
Oct  1 09:26:18 np0005464214 vigorous_mcclintock[231993]:            "devices": [
Oct  1 09:26:18 np0005464214 vigorous_mcclintock[231993]:                "/dev/loop3"
Oct  1 09:26:18 np0005464214 vigorous_mcclintock[231993]:            ],
Oct  1 09:26:18 np0005464214 vigorous_mcclintock[231993]:            "lv_name": "ceph_lv0",
Oct  1 09:26:18 np0005464214 vigorous_mcclintock[231993]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  1 09:26:18 np0005464214 vigorous_mcclintock[231993]:            "lv_size": "21470642176",
Oct  1 09:26:18 np0005464214 vigorous_mcclintock[231993]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 09:26:18 np0005464214 vigorous_mcclintock[231993]:            "lv_uuid": "wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS",
Oct  1 09:26:18 np0005464214 vigorous_mcclintock[231993]:            "name": "ceph_lv0",
Oct  1 09:26:18 np0005464214 vigorous_mcclintock[231993]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  1 09:26:18 np0005464214 vigorous_mcclintock[231993]:            "tags": {
Oct  1 09:26:18 np0005464214 vigorous_mcclintock[231993]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  1 09:26:18 np0005464214 vigorous_mcclintock[231993]:                "ceph.block_uuid": "wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS",
Oct  1 09:26:18 np0005464214 vigorous_mcclintock[231993]:                "ceph.cephx_lockbox_secret": "",
Oct  1 09:26:18 np0005464214 vigorous_mcclintock[231993]:                "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 09:26:18 np0005464214 vigorous_mcclintock[231993]:                "ceph.cluster_name": "ceph",
Oct  1 09:26:18 np0005464214 vigorous_mcclintock[231993]:                "ceph.crush_device_class": "",
Oct  1 09:26:18 np0005464214 vigorous_mcclintock[231993]:                "ceph.encrypted": "0",
Oct  1 09:26:18 np0005464214 vigorous_mcclintock[231993]:                "ceph.osd_fsid": "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982",
Oct  1 09:26:18 np0005464214 vigorous_mcclintock[231993]:                "ceph.osd_id": "0",
Oct  1 09:26:18 np0005464214 vigorous_mcclintock[231993]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 09:26:18 np0005464214 vigorous_mcclintock[231993]:                "ceph.type": "block",
Oct  1 09:26:18 np0005464214 vigorous_mcclintock[231993]:                "ceph.vdo": "0"
Oct  1 09:26:18 np0005464214 vigorous_mcclintock[231993]:            },
Oct  1 09:26:18 np0005464214 vigorous_mcclintock[231993]:            "type": "block",
Oct  1 09:26:18 np0005464214 vigorous_mcclintock[231993]:            "vg_name": "ceph_vg0"
Oct  1 09:26:18 np0005464214 vigorous_mcclintock[231993]:        }
Oct  1 09:26:18 np0005464214 vigorous_mcclintock[231993]:    ],
Oct  1 09:26:18 np0005464214 vigorous_mcclintock[231993]:    "1": [
Oct  1 09:26:18 np0005464214 vigorous_mcclintock[231993]:        {
Oct  1 09:26:18 np0005464214 vigorous_mcclintock[231993]:            "devices": [
Oct  1 09:26:18 np0005464214 vigorous_mcclintock[231993]:                "/dev/loop4"
Oct  1 09:26:18 np0005464214 vigorous_mcclintock[231993]:            ],
Oct  1 09:26:18 np0005464214 vigorous_mcclintock[231993]:            "lv_name": "ceph_lv1",
Oct  1 09:26:18 np0005464214 vigorous_mcclintock[231993]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  1 09:26:18 np0005464214 vigorous_mcclintock[231993]:            "lv_size": "21470642176",
Oct  1 09:26:18 np0005464214 vigorous_mcclintock[231993]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f5852bc7-e830-489a-b8a9-42dfbbe71426,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 09:26:18 np0005464214 vigorous_mcclintock[231993]:            "lv_uuid": "BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY",
Oct  1 09:26:18 np0005464214 vigorous_mcclintock[231993]:            "name": "ceph_lv1",
Oct  1 09:26:18 np0005464214 vigorous_mcclintock[231993]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  1 09:26:18 np0005464214 vigorous_mcclintock[231993]:            "tags": {
Oct  1 09:26:18 np0005464214 vigorous_mcclintock[231993]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  1 09:26:18 np0005464214 vigorous_mcclintock[231993]:                "ceph.block_uuid": "BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY",
Oct  1 09:26:18 np0005464214 vigorous_mcclintock[231993]:                "ceph.cephx_lockbox_secret": "",
Oct  1 09:26:18 np0005464214 vigorous_mcclintock[231993]:                "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 09:26:18 np0005464214 vigorous_mcclintock[231993]:                "ceph.cluster_name": "ceph",
Oct  1 09:26:18 np0005464214 vigorous_mcclintock[231993]:                "ceph.crush_device_class": "",
Oct  1 09:26:18 np0005464214 vigorous_mcclintock[231993]:                "ceph.encrypted": "0",
Oct  1 09:26:18 np0005464214 vigorous_mcclintock[231993]:                "ceph.osd_fsid": "f5852bc7-e830-489a-b8a9-42dfbbe71426",
Oct  1 09:26:18 np0005464214 vigorous_mcclintock[231993]:                "ceph.osd_id": "1",
Oct  1 09:26:18 np0005464214 vigorous_mcclintock[231993]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 09:26:18 np0005464214 vigorous_mcclintock[231993]:                "ceph.type": "block",
Oct  1 09:26:18 np0005464214 vigorous_mcclintock[231993]:                "ceph.vdo": "0"
Oct  1 09:26:18 np0005464214 vigorous_mcclintock[231993]:            },
Oct  1 09:26:18 np0005464214 vigorous_mcclintock[231993]:            "type": "block",
Oct  1 09:26:18 np0005464214 vigorous_mcclintock[231993]:            "vg_name": "ceph_vg1"
Oct  1 09:26:18 np0005464214 vigorous_mcclintock[231993]:        }
Oct  1 09:26:18 np0005464214 vigorous_mcclintock[231993]:    ],
Oct  1 09:26:18 np0005464214 vigorous_mcclintock[231993]:    "2": [
Oct  1 09:26:18 np0005464214 vigorous_mcclintock[231993]:        {
Oct  1 09:26:18 np0005464214 vigorous_mcclintock[231993]:            "devices": [
Oct  1 09:26:18 np0005464214 vigorous_mcclintock[231993]:                "/dev/loop5"
Oct  1 09:26:18 np0005464214 vigorous_mcclintock[231993]:            ],
Oct  1 09:26:18 np0005464214 vigorous_mcclintock[231993]:            "lv_name": "ceph_lv2",
Oct  1 09:26:18 np0005464214 vigorous_mcclintock[231993]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  1 09:26:18 np0005464214 vigorous_mcclintock[231993]:            "lv_size": "21470642176",
Oct  1 09:26:18 np0005464214 vigorous_mcclintock[231993]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c4c937e2-a8a8-47c3-af37-fdedb6fff1f9,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 09:26:18 np0005464214 vigorous_mcclintock[231993]:            "lv_uuid": "mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK",
Oct  1 09:26:18 np0005464214 vigorous_mcclintock[231993]:            "name": "ceph_lv2",
Oct  1 09:26:18 np0005464214 vigorous_mcclintock[231993]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  1 09:26:18 np0005464214 vigorous_mcclintock[231993]:            "tags": {
Oct  1 09:26:18 np0005464214 vigorous_mcclintock[231993]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  1 09:26:18 np0005464214 vigorous_mcclintock[231993]:                "ceph.block_uuid": "mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK",
Oct  1 09:26:18 np0005464214 vigorous_mcclintock[231993]:                "ceph.cephx_lockbox_secret": "",
Oct  1 09:26:18 np0005464214 vigorous_mcclintock[231993]:                "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 09:26:18 np0005464214 vigorous_mcclintock[231993]:                "ceph.cluster_name": "ceph",
Oct  1 09:26:18 np0005464214 vigorous_mcclintock[231993]:                "ceph.crush_device_class": "",
Oct  1 09:26:18 np0005464214 vigorous_mcclintock[231993]:                "ceph.encrypted": "0",
Oct  1 09:26:18 np0005464214 vigorous_mcclintock[231993]:                "ceph.osd_fsid": "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9",
Oct  1 09:26:18 np0005464214 vigorous_mcclintock[231993]:                "ceph.osd_id": "2",
Oct  1 09:26:18 np0005464214 vigorous_mcclintock[231993]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 09:26:18 np0005464214 vigorous_mcclintock[231993]:                "ceph.type": "block",
Oct  1 09:26:18 np0005464214 vigorous_mcclintock[231993]:                "ceph.vdo": "0"
Oct  1 09:26:18 np0005464214 vigorous_mcclintock[231993]:            },
Oct  1 09:26:18 np0005464214 vigorous_mcclintock[231993]:            "type": "block",
Oct  1 09:26:18 np0005464214 vigorous_mcclintock[231993]:            "vg_name": "ceph_vg2"
Oct  1 09:26:18 np0005464214 vigorous_mcclintock[231993]:        }
Oct  1 09:26:18 np0005464214 vigorous_mcclintock[231993]:    ]
Oct  1 09:26:18 np0005464214 vigorous_mcclintock[231993]: }
Oct  1 09:26:18 np0005464214 systemd[1]: libpod-693d0a89da7dee0c729fc58ad0cf7ef5c5f156c4271dd9f99f46c5ee17849b1f.scope: Deactivated successfully.
Oct  1 09:26:18 np0005464214 podman[231966]: 2025-10-01 13:26:18.058609749 +0000 UTC m=+1.196315348 container died 693d0a89da7dee0c729fc58ad0cf7ef5c5f156c4271dd9f99f46c5ee17849b1f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_mcclintock, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 09:26:18 np0005464214 systemd[1]: var-lib-containers-storage-overlay-5c2e87fb828dfeb4f3a15924e22ce5cbf38564ff393199633400d8b1e7313391-merged.mount: Deactivated successfully.
Oct  1 09:26:18 np0005464214 podman[231966]: 2025-10-01 13:26:18.137534206 +0000 UTC m=+1.275239785 container remove 693d0a89da7dee0c729fc58ad0cf7ef5c5f156c4271dd9f99f46c5ee17849b1f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_mcclintock, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct  1 09:26:18 np0005464214 systemd[1]: libpod-conmon-693d0a89da7dee0c729fc58ad0cf7ef5c5f156c4271dd9f99f46c5ee17849b1f.scope: Deactivated successfully.
Oct  1 09:26:18 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v650: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:26:18 np0005464214 python3.9[232256]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Oct  1 09:26:19 np0005464214 podman[232376]: 2025-10-01 13:26:19.07871634 +0000 UTC m=+0.126572972 container create c2319e57467130c62d0358eb20ae3406effb9123d8b607d3a01df73e84181e9f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_haslett, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 09:26:19 np0005464214 podman[232376]: 2025-10-01 13:26:18.992677282 +0000 UTC m=+0.040534004 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 09:26:19 np0005464214 systemd[1]: Started libpod-conmon-c2319e57467130c62d0358eb20ae3406effb9123d8b607d3a01df73e84181e9f.scope.
Oct  1 09:26:19 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:26:19 np0005464214 podman[232376]: 2025-10-01 13:26:19.266286281 +0000 UTC m=+0.314142933 container init c2319e57467130c62d0358eb20ae3406effb9123d8b607d3a01df73e84181e9f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_haslett, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Oct  1 09:26:19 np0005464214 podman[232376]: 2025-10-01 13:26:19.276105756 +0000 UTC m=+0.323962378 container start c2319e57467130c62d0358eb20ae3406effb9123d8b607d3a01df73e84181e9f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_haslett, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 09:26:19 np0005464214 crazy_haslett[232422]: 167 167
Oct  1 09:26:19 np0005464214 systemd[1]: libpod-c2319e57467130c62d0358eb20ae3406effb9123d8b607d3a01df73e84181e9f.scope: Deactivated successfully.
Oct  1 09:26:19 np0005464214 podman[232376]: 2025-10-01 13:26:19.383263093 +0000 UTC m=+0.431119765 container attach c2319e57467130c62d0358eb20ae3406effb9123d8b607d3a01df73e84181e9f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_haslett, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Oct  1 09:26:19 np0005464214 podman[232376]: 2025-10-01 13:26:19.38380888 +0000 UTC m=+0.431665552 container died c2319e57467130c62d0358eb20ae3406effb9123d8b607d3a01df73e84181e9f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_haslett, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Oct  1 09:26:19 np0005464214 systemd[1]: var-lib-containers-storage-overlay-9f0302a3ea695d8c915b6ccd923f165dcb8f1fb3735308c6874a7ea158ffbfea-merged.mount: Deactivated successfully.
Oct  1 09:26:19 np0005464214 podman[232376]: 2025-10-01 13:26:19.562357969 +0000 UTC m=+0.610214631 container remove c2319e57467130c62d0358eb20ae3406effb9123d8b607d3a01df73e84181e9f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_haslett, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 09:26:19 np0005464214 systemd[1]: libpod-conmon-c2319e57467130c62d0358eb20ae3406effb9123d8b607d3a01df73e84181e9f.scope: Deactivated successfully.
Oct  1 09:26:19 np0005464214 python3.9[232513]: ansible-community.general.modprobe Invoked with name=dm-multipath state=present params= persistent=disabled
Oct  1 09:26:19 np0005464214 podman[232522]: 2025-10-01 13:26:19.801484574 +0000 UTC m=+0.070557978 container create 2812f2a67a5c8bc9662635300febd5acda7b014bbfe2bbdfc4ecbdc65f3f2966 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_almeida, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Oct  1 09:26:19 np0005464214 podman[232522]: 2025-10-01 13:26:19.77150351 +0000 UTC m=+0.040576954 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 09:26:19 np0005464214 systemd[1]: Started libpod-conmon-2812f2a67a5c8bc9662635300febd5acda7b014bbfe2bbdfc4ecbdc65f3f2966.scope.
Oct  1 09:26:19 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:26:19 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf5241d048e8542226a80fcd6fbfbdc77ad29b0143d89fc685caddfa9b785280/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 09:26:19 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf5241d048e8542226a80fcd6fbfbdc77ad29b0143d89fc685caddfa9b785280/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 09:26:19 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf5241d048e8542226a80fcd6fbfbdc77ad29b0143d89fc685caddfa9b785280/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 09:26:19 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf5241d048e8542226a80fcd6fbfbdc77ad29b0143d89fc685caddfa9b785280/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 09:26:19 np0005464214 podman[232522]: 2025-10-01 13:26:19.941183594 +0000 UTC m=+0.210256968 container init 2812f2a67a5c8bc9662635300febd5acda7b014bbfe2bbdfc4ecbdc65f3f2966 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_almeida, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 09:26:19 np0005464214 podman[232522]: 2025-10-01 13:26:19.955616953 +0000 UTC m=+0.224690317 container start 2812f2a67a5c8bc9662635300febd5acda7b014bbfe2bbdfc4ecbdc65f3f2966 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_almeida, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 09:26:19 np0005464214 podman[232522]: 2025-10-01 13:26:19.95938592 +0000 UTC m=+0.228459284 container attach 2812f2a67a5c8bc9662635300febd5acda7b014bbfe2bbdfc4ecbdc65f3f2966 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_almeida, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 09:26:20 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v651: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:26:20 np0005464214 systemd[1]: Stopping User Manager for UID 0...
Oct  1 09:26:20 np0005464214 systemd[230898]: Activating special unit Exit the Session...
Oct  1 09:26:20 np0005464214 systemd[230898]: Stopped target Main User Target.
Oct  1 09:26:20 np0005464214 systemd[230898]: Stopped target Basic System.
Oct  1 09:26:20 np0005464214 systemd[230898]: Stopped target Paths.
Oct  1 09:26:20 np0005464214 systemd[230898]: Stopped target Sockets.
Oct  1 09:26:20 np0005464214 systemd[230898]: Stopped target Timers.
Oct  1 09:26:20 np0005464214 systemd[230898]: Stopped Daily Cleanup of User's Temporary Directories.
Oct  1 09:26:20 np0005464214 systemd[230898]: Closed D-Bus User Message Bus Socket.
Oct  1 09:26:20 np0005464214 systemd[230898]: Stopped Create User's Volatile Files and Directories.
Oct  1 09:26:20 np0005464214 systemd[230898]: Removed slice User Application Slice.
Oct  1 09:26:20 np0005464214 systemd[230898]: Reached target Shutdown.
Oct  1 09:26:20 np0005464214 systemd[230898]: Finished Exit the Session.
Oct  1 09:26:20 np0005464214 systemd[230898]: Reached target Exit the Session.
Oct  1 09:26:20 np0005464214 systemd[1]: user@0.service: Deactivated successfully.
Oct  1 09:26:20 np0005464214 systemd[1]: Stopped User Manager for UID 0.
Oct  1 09:26:20 np0005464214 systemd[1]: Stopping User Runtime Directory /run/user/0...
Oct  1 09:26:20 np0005464214 systemd[1]: run-user-0.mount: Deactivated successfully.
Oct  1 09:26:20 np0005464214 systemd[1]: user-runtime-dir@0.service: Deactivated successfully.
Oct  1 09:26:20 np0005464214 systemd[1]: Stopped User Runtime Directory /run/user/0.
Oct  1 09:26:20 np0005464214 systemd[1]: Removed slice User Slice of UID 0.
Oct  1 09:26:20 np0005464214 python3.9[232699]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/dm-multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 09:26:20 np0005464214 fervent_almeida[232548]: {
Oct  1 09:26:20 np0005464214 fervent_almeida[232548]:    "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982": {
Oct  1 09:26:20 np0005464214 fervent_almeida[232548]:        "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 09:26:20 np0005464214 fervent_almeida[232548]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  1 09:26:20 np0005464214 fervent_almeida[232548]:        "osd_id": 0,
Oct  1 09:26:20 np0005464214 fervent_almeida[232548]:        "osd_uuid": "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982",
Oct  1 09:26:20 np0005464214 fervent_almeida[232548]:        "type": "bluestore"
Oct  1 09:26:20 np0005464214 fervent_almeida[232548]:    },
Oct  1 09:26:20 np0005464214 fervent_almeida[232548]:    "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9": {
Oct  1 09:26:20 np0005464214 fervent_almeida[232548]:        "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 09:26:20 np0005464214 fervent_almeida[232548]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  1 09:26:20 np0005464214 fervent_almeida[232548]:        "osd_id": 2,
Oct  1 09:26:20 np0005464214 fervent_almeida[232548]:        "osd_uuid": "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9",
Oct  1 09:26:20 np0005464214 fervent_almeida[232548]:        "type": "bluestore"
Oct  1 09:26:20 np0005464214 fervent_almeida[232548]:    },
Oct  1 09:26:20 np0005464214 fervent_almeida[232548]:    "f5852bc7-e830-489a-b8a9-42dfbbe71426": {
Oct  1 09:26:20 np0005464214 fervent_almeida[232548]:        "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 09:26:20 np0005464214 fervent_almeida[232548]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  1 09:26:20 np0005464214 fervent_almeida[232548]:        "osd_id": 1,
Oct  1 09:26:20 np0005464214 fervent_almeida[232548]:        "osd_uuid": "f5852bc7-e830-489a-b8a9-42dfbbe71426",
Oct  1 09:26:20 np0005464214 fervent_almeida[232548]:        "type": "bluestore"
Oct  1 09:26:20 np0005464214 fervent_almeida[232548]:    }
Oct  1 09:26:20 np0005464214 fervent_almeida[232548]: }
Oct  1 09:26:21 np0005464214 systemd[1]: libpod-2812f2a67a5c8bc9662635300febd5acda7b014bbfe2bbdfc4ecbdc65f3f2966.scope: Deactivated successfully.
Oct  1 09:26:21 np0005464214 systemd[1]: libpod-2812f2a67a5c8bc9662635300febd5acda7b014bbfe2bbdfc4ecbdc65f3f2966.scope: Consumed 1.075s CPU time.
Oct  1 09:26:21 np0005464214 podman[232522]: 2025-10-01 13:26:21.033819834 +0000 UTC m=+1.302893238 container died 2812f2a67a5c8bc9662635300febd5acda7b014bbfe2bbdfc4ecbdc65f3f2966 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_almeida, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 09:26:21 np0005464214 systemd[1]: var-lib-containers-storage-overlay-cf5241d048e8542226a80fcd6fbfbdc77ad29b0143d89fc685caddfa9b785280-merged.mount: Deactivated successfully.
Oct  1 09:26:21 np0005464214 podman[232522]: 2025-10-01 13:26:21.124566319 +0000 UTC m=+1.393639693 container remove 2812f2a67a5c8bc9662635300febd5acda7b014bbfe2bbdfc4ecbdc65f3f2966 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_almeida, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 09:26:21 np0005464214 systemd[1]: libpod-conmon-2812f2a67a5c8bc9662635300febd5acda7b014bbfe2bbdfc4ecbdc65f3f2966.scope: Deactivated successfully.
Oct  1 09:26:21 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  1 09:26:21 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:26:21 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  1 09:26:21 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:26:21 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev b1771bd7-3b7d-4976-a959-9a04e3a68dcf does not exist
Oct  1 09:26:21 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev e7e689cf-262c-45fc-9b4d-4df34bf075db does not exist
Oct  1 09:26:21 np0005464214 python3.9[232851]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/dm-multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759325180.0600505-460-46736324303189/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=065061c60917e4f67cecc70d12ce55e42f9d0b3f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 09:26:21 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:26:21 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:26:21 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:26:22 np0005464214 python3.9[233065]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=dm-multipath  mode=0644 state=present path=/etc/modules backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 09:26:22 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v652: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:26:22 np0005464214 python3.9[233217]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct  1 09:26:23 np0005464214 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Oct  1 09:26:23 np0005464214 systemd[1]: Stopped Load Kernel Modules.
Oct  1 09:26:23 np0005464214 systemd[1]: Stopping Load Kernel Modules...
Oct  1 09:26:23 np0005464214 systemd[1]: Starting Load Kernel Modules...
Oct  1 09:26:23 np0005464214 systemd[1]: Finished Load Kernel Modules.
Oct  1 09:26:23 np0005464214 python3.9[233373]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  1 09:26:24 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v653: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:26:24 np0005464214 python3.9[233525]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  1 09:26:25 np0005464214 python3.9[233677]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  1 09:26:25 np0005464214 systemd[1]: virtnodedevd.service: Deactivated successfully.
Oct  1 09:26:26 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v654: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:26:26 np0005464214 python3.9[233830]: ansible-ansible.legacy.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 09:26:26 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:26:26 np0005464214 python3.9[233953]: ansible-ansible.legacy.copy Invoked with dest=/etc/multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759325185.686633-518-161029049689613/.source.conf _original_basename=multipath.conf follow=False checksum=bf02ab264d3d648048a81f3bacec8bc58db93162 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 09:26:27 np0005464214 systemd[1]: virtproxyd.service: Deactivated successfully.
Oct  1 09:26:27 np0005464214 python3.9[234106]: ansible-ansible.legacy.command Invoked with _raw_params=grep -q '^blacklist\s*{' /etc/multipath.conf _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  1 09:26:28 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v655: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:26:28 np0005464214 python3.9[234259]: ansible-ansible.builtin.lineinfile Invoked with line=blacklist { path=/etc/multipath.conf state=present backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 09:26:29 np0005464214 python3.9[234411]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^(blacklist {) replace=\1\n} backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 09:26:30 np0005464214 python3.9[234563]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^blacklist\s*{\n[\s]+devnode \"\.\*\" replace=blacklist { backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 09:26:30 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v656: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:26:31 np0005464214 python3.9[234715]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        find_multipaths yes path=/etc/multipath.conf regexp=^\s+find_multipaths state=present backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 09:26:31 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:26:31 np0005464214 python3.9[234867]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        recheck_wwid yes path=/etc/multipath.conf regexp=^\s+recheck_wwid state=present backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 09:26:32 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v657: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:26:32 np0005464214 python3.9[235019]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        skip_kpartx yes path=/etc/multipath.conf regexp=^\s+skip_kpartx state=present backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 09:26:33 np0005464214 python3.9[235171]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        user_friendly_names no path=/etc/multipath.conf regexp=^\s+user_friendly_names state=present backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 09:26:34 np0005464214 python3.9[235323]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  1 09:26:34 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v658: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:26:34 np0005464214 python3.9[235477]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/multipath/.multipath_restart_required state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 09:26:35 np0005464214 python3.9[235629]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  1 09:26:36 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v659: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:26:36 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:26:36 np0005464214 python3.9[235781]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 09:26:37 np0005464214 python3.9[235859]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  1 09:26:38 np0005464214 python3.9[236011]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 09:26:38 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v660: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:26:38 np0005464214 python3.9[236089]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  1 09:26:39 np0005464214 python3.9[236241]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 09:26:39 np0005464214 systemd[1]: virtqemud.service: Deactivated successfully.
Oct  1 09:26:39 np0005464214 systemd[1]: virtsecretd.service: Deactivated successfully.
Oct  1 09:26:40 np0005464214 podman[236367]: 2025-10-01 13:26:40.140142857 +0000 UTC m=+0.112930818 container health_status c13c85560319b246b9406aed1b6b3015b2c424d3ffff37d0301b6634f5976b0d (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250923, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_id=iscsid, container_name=iscsid, managed_by=edpm_ansible, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true)
Oct  1 09:26:40 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v661: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:26:40 np0005464214 python3.9[236414]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 09:26:40 np0005464214 python3.9[236493]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 09:26:41 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:26:41 np0005464214 python3.9[236645]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 09:26:42 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v662: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:26:42 np0005464214 python3.9[236723]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 09:26:43 np0005464214 python3.9[236875]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  1 09:26:43 np0005464214 systemd[1]: Reloading.
Oct  1 09:26:43 np0005464214 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  1 09:26:43 np0005464214 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  1 09:26:44 np0005464214 podman[236912]: 2025-10-01 13:26:44.075403862 +0000 UTC m=+0.145113609 container health_status 583cde33fa111e3f81ff9a7adf51b7bee43cd123d54d60383b2d474b3b5066ad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20250923, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_id=ovn_controller)
Oct  1 09:26:44 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v663: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:26:44 np0005464214 python3.9[237090]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 09:26:45 np0005464214 python3.9[237168]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 09:26:46 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v664: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:26:46 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:26:46 np0005464214 python3.9[237320]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 09:26:46 np0005464214 podman[237398]: 2025-10-01 13:26:46.943531952 +0000 UTC m=+0.096214778 container health_status dfa509645741d50db256c392f2c7e50ef8a1653f296eb853aefbe3d2ba4746c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20250923, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Oct  1 09:26:47 np0005464214 python3.9[237399]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 09:26:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] Optimize plan auto_2025-10-01_13:26:47
Oct  1 09:26:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  1 09:26:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] do_upmap
Oct  1 09:26:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] pools ['.rgw.root', 'images', 'volumes', 'backups', 'default.rgw.control', '.mgr', 'cephfs.cephfs.meta', 'vms', 'default.rgw.meta', 'cephfs.cephfs.data', 'default.rgw.log']
Oct  1 09:26:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] prepared 0/10 changes
Oct  1 09:26:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 09:26:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 09:26:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 09:26:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 09:26:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 09:26:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 09:26:47 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  1 09:26:47 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  1 09:26:47 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  1 09:26:47 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  1 09:26:47 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  1 09:26:47 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  1 09:26:47 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  1 09:26:47 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  1 09:26:47 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  1 09:26:47 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  1 09:26:48 np0005464214 python3.9[237572]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  1 09:26:48 np0005464214 systemd[1]: Reloading.
Oct  1 09:26:48 np0005464214 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  1 09:26:48 np0005464214 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  1 09:26:48 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v665: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:26:49 np0005464214 systemd[1]: Starting Create netns directory...
Oct  1 09:26:49 np0005464214 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Oct  1 09:26:49 np0005464214 systemd[1]: netns-placeholder.service: Deactivated successfully.
Oct  1 09:26:49 np0005464214 systemd[1]: Finished Create netns directory.
Oct  1 09:26:50 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v666: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:26:50 np0005464214 python3.9[237764]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  1 09:26:51 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:26:51 np0005464214 python3.9[237916]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/multipathd/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 09:26:52 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v667: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:26:52 np0005464214 python3.9[238039]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/multipathd/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759325211.0331264-725-6136777622655/.source _original_basename=healthcheck follow=False checksum=af9d0c1c8f3cb0e30ce9609be9d5b01924d0d23f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct  1 09:26:53 np0005464214 python3.9[238191]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  1 09:26:54 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v668: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:26:54 np0005464214 python3.9[238343]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/multipathd.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 09:26:55 np0005464214 python3.9[238466]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/multipathd.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1759325213.7152958-750-140931777289309/.source.json _original_basename=.yqgouhp8 follow=False checksum=3f7959ee8ac9757398adcc451c3b416c957d7c14 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 09:26:55 np0005464214 python3.9[238618]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/multipathd state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 09:26:56 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v669: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:26:56 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:26:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] _maybe_adjust
Oct  1 09:26:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:26:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  1 09:26:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:26:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 09:26:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:26:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 09:26:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:26:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 09:26:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:26:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 09:26:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:26:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct  1 09:26:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:26:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 09:26:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:26:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  1 09:26:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:26:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  1 09:26:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:26:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 09:26:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:26:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  1 09:26:58 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v670: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:26:58 np0005464214 python3.9[239045]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/multipathd config_pattern=*.json debug=False
Oct  1 09:26:59 np0005464214 python3.9[239197]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Oct  1 09:27:00 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v671: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:27:00 np0005464214 python3.9[239349]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Oct  1 09:27:01 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:27:02 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v672: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:27:02 np0005464214 python3[239528]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/multipathd config_id=multipathd config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Oct  1 09:27:04 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v673: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:27:06 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v674: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:27:06 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:27:07 np0005464214 podman[239542]: 2025-10-01 13:27:07.396198318 +0000 UTC m=+4.594907541 image pull 80aeb93432d60c5f52c5325081f51dbf5658fe1615083ed284852e8f6df43250 quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22
Oct  1 09:27:07 np0005464214 podman[239600]: 2025-10-01 13:27:07.608432445 +0000 UTC m=+0.045941990 image pull 80aeb93432d60c5f52c5325081f51dbf5658fe1615083ed284852e8f6df43250 quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22
Oct  1 09:27:08 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v675: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:27:08 np0005464214 podman[239600]: 2025-10-01 13:27:08.505912387 +0000 UTC m=+0.943421902 container create a1dc94033b3cdaf0c1939938a446bc6911aa0e2bf0fa0c22c3e618390e8d96e1 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22, name=multipathd, managed_by=edpm_ansible, io.buildah.version=1.41.3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=36bccb96575468ec919301205d8daa2c, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250923, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd)
Oct  1 09:27:08 np0005464214 python3[239528]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name multipathd --conmon-pidfile /run/multipathd.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --healthcheck-command /openstack/healthcheck --label config_id=multipathd --label container_name=multipathd --label managed_by=edpm_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro --volume /dev:/dev --volume /run/udev:/run/udev --volume /sys:/sys --volume /lib/modules:/lib/modules:ro --volume /etc/iscsi:/etc/iscsi:ro --volume /var/lib/iscsi:/var/lib/iscsi:z --volume /etc/multipath:/etc/multipath:z --volume /etc/multipath.conf:/etc/multipath.conf:ro --volume /var/lib/openstack/healthchecks/multipathd:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22
Oct  1 09:27:09 np0005464214 python3.9[239791]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  1 09:27:10 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v676: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:27:10 np0005464214 podman[239917]: 2025-10-01 13:27:10.525083885 +0000 UTC m=+0.088168503 container health_status c13c85560319b246b9406aed1b6b3015b2c424d3ffff37d0301b6634f5976b0d (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20250923, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true)
Oct  1 09:27:10 np0005464214 python3.9[239965]: ansible-file Invoked with path=/etc/systemd/system/edpm_multipathd.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 09:27:11 np0005464214 python3.9[240042]: ansible-stat Invoked with path=/etc/systemd/system/edpm_multipathd_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  1 09:27:11 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:27:12 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v677: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:27:12 np0005464214 python3.9[240195]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759325231.5211542-838-266239700217891/source dest=/etc/systemd/system/edpm_multipathd.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 09:27:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:27:12.293 161890 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  1 09:27:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:27:12.295 161890 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  1 09:27:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:27:12.295 161890 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  1 09:27:13 np0005464214 python3.9[240271]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct  1 09:27:13 np0005464214 systemd[1]: Reloading.
Oct  1 09:27:13 np0005464214 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  1 09:27:13 np0005464214 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  1 09:27:14 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v678: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:27:14 np0005464214 python3.9[240381]: ansible-systemd Invoked with state=restarted name=edpm_multipathd.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  1 09:27:14 np0005464214 systemd[1]: Reloading.
Oct  1 09:27:14 np0005464214 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  1 09:27:14 np0005464214 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  1 09:27:14 np0005464214 podman[240384]: 2025-10-01 13:27:14.476915483 +0000 UTC m=+0.190484422 container health_status 583cde33fa111e3f81ff9a7adf51b7bee43cd123d54d60383b2d474b3b5066ad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250923, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_id=ovn_controller)
Oct  1 09:27:14 np0005464214 systemd[1]: Starting multipathd container...
Oct  1 09:27:16 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v679: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:27:16 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:27:16 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f98b8d3ca354493903085781d259891c9a516ea186720f0f87b701a39b7916ed/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Oct  1 09:27:16 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f98b8d3ca354493903085781d259891c9a516ea186720f0f87b701a39b7916ed/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Oct  1 09:27:16 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:27:17 np0005464214 systemd[1]: Started /usr/bin/podman healthcheck run a1dc94033b3cdaf0c1939938a446bc6911aa0e2bf0fa0c22c3e618390e8d96e1.
Oct  1 09:27:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 09:27:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 09:27:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 09:27:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 09:27:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 09:27:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 09:27:18 np0005464214 podman[240448]: 2025-10-01 13:27:18.188185207 +0000 UTC m=+3.337724178 container init a1dc94033b3cdaf0c1939938a446bc6911aa0e2bf0fa0c22c3e618390e8d96e1 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22, name=multipathd, org.label-schema.build-date=20250923, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team)
Oct  1 09:27:18 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v680: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:27:18 np0005464214 multipathd[240463]: + sudo -E kolla_set_configs
Oct  1 09:27:18 np0005464214 podman[240448]: 2025-10-01 13:27:18.230900125 +0000 UTC m=+3.380439036 container start a1dc94033b3cdaf0c1939938a446bc6911aa0e2bf0fa0c22c3e618390e8d96e1 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22, name=multipathd, org.label-schema.build-date=20250923, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team)
Oct  1 09:27:18 np0005464214 multipathd[240463]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Oct  1 09:27:18 np0005464214 multipathd[240463]: INFO:__main__:Validating config file
Oct  1 09:27:18 np0005464214 multipathd[240463]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Oct  1 09:27:18 np0005464214 multipathd[240463]: INFO:__main__:Writing out command to execute
Oct  1 09:27:18 np0005464214 multipathd[240463]: ++ cat /run_command
Oct  1 09:27:18 np0005464214 multipathd[240463]: + CMD='/usr/sbin/multipathd -d'
Oct  1 09:27:18 np0005464214 multipathd[240463]: + ARGS=
Oct  1 09:27:18 np0005464214 multipathd[240463]: + sudo kolla_copy_cacerts
Oct  1 09:27:18 np0005464214 multipathd[240463]: + [[ ! -n '' ]]
Oct  1 09:27:18 np0005464214 multipathd[240463]: + . kolla_extend_start
Oct  1 09:27:18 np0005464214 multipathd[240463]: Running command: '/usr/sbin/multipathd -d'
Oct  1 09:27:18 np0005464214 multipathd[240463]: + echo 'Running command: '\''/usr/sbin/multipathd -d'\'''
Oct  1 09:27:18 np0005464214 multipathd[240463]: + umask 0022
Oct  1 09:27:18 np0005464214 multipathd[240463]: + exec /usr/sbin/multipathd -d
Oct  1 09:27:18 np0005464214 multipathd[240463]: 7799.120582 | --------start up--------
Oct  1 09:27:18 np0005464214 multipathd[240463]: 7799.120609 | read /etc/multipath.conf
Oct  1 09:27:18 np0005464214 multipathd[240463]: 7799.130860 | path checkers start up
Oct  1 09:27:19 np0005464214 podman[240448]: multipathd
Oct  1 09:27:19 np0005464214 systemd[1]: Started multipathd container.
Oct  1 09:27:19 np0005464214 podman[240480]: 2025-10-01 13:27:19.166685506 +0000 UTC m=+0.916715260 container health_status a1dc94033b3cdaf0c1939938a446bc6911aa0e2bf0fa0c22c3e618390e8d96e1 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20250923, org.label-schema.schema-version=1.0, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct  1 09:27:19 np0005464214 podman[240466]: 2025-10-01 13:27:19.203863438 +0000 UTC m=+2.109071445 container health_status dfa509645741d50db256c392f2c7e50ef8a1653f296eb853aefbe3d2ba4746c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=36bccb96575468ec919301205d8daa2c, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20250923, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 09:27:20 np0005464214 python3.9[240671]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath/.multipath_restart_required follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  1 09:27:20 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v681: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:27:20 np0005464214 python3.9[240825]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps --filter volume=/etc/multipath.conf --format {{.Names}} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  1 09:27:21 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:27:22 np0005464214 python3.9[241038]: ansible-ansible.builtin.systemd Invoked with name=edpm_multipathd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct  1 09:27:22 np0005464214 systemd[1]: Stopping multipathd container...
Oct  1 09:27:22 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v682: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:27:22 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  1 09:27:22 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  1 09:27:22 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  1 09:27:22 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  1 09:27:22 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  1 09:27:22 np0005464214 multipathd[240463]: 7803.717939 | exit (signal)
Oct  1 09:27:22 np0005464214 multipathd[240463]: 7803.718037 | --------shut down-------
Oct  1 09:27:23 np0005464214 systemd[1]: libpod-a1dc94033b3cdaf0c1939938a446bc6911aa0e2bf0fa0c22c3e618390e8d96e1.scope: Deactivated successfully.
Oct  1 09:27:23 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:27:23 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev 96fd9f62-7e58-4662-92c6-b76ee4c603ac does not exist
Oct  1 09:27:23 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev 2cf771a6-62c0-4b4e-bba2-793946cc3209 does not exist
Oct  1 09:27:23 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev 5c4e4c87-d69b-4bcf-b76c-c081f9eba861 does not exist
Oct  1 09:27:23 np0005464214 podman[241112]: 2025-10-01 13:27:23.029801412 +0000 UTC m=+0.810365804 container died a1dc94033b3cdaf0c1939938a446bc6911aa0e2bf0fa0c22c3e618390e8d96e1 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22, name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20250923, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct  1 09:27:23 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  1 09:27:23 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  1 09:27:23 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  1 09:27:23 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  1 09:27:23 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  1 09:27:23 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  1 09:27:23 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  1 09:27:23 np0005464214 systemd[1]: a1dc94033b3cdaf0c1939938a446bc6911aa0e2bf0fa0c22c3e618390e8d96e1-79bbfdc24022dde.timer: Deactivated successfully.
Oct  1 09:27:23 np0005464214 systemd[1]: Stopped /usr/bin/podman healthcheck run a1dc94033b3cdaf0c1939938a446bc6911aa0e2bf0fa0c22c3e618390e8d96e1.
Oct  1 09:27:23 np0005464214 systemd[1]: var-lib-containers-storage-overlay-f98b8d3ca354493903085781d259891c9a516ea186720f0f87b701a39b7916ed-merged.mount: Deactivated successfully.
Oct  1 09:27:23 np0005464214 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a1dc94033b3cdaf0c1939938a446bc6911aa0e2bf0fa0c22c3e618390e8d96e1-userdata-shm.mount: Deactivated successfully.
Oct  1 09:27:24 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v683: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:27:24 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:27:24 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  1 09:27:24 np0005464214 podman[241112]: 2025-10-01 13:27:24.334392229 +0000 UTC m=+2.114956661 container cleanup a1dc94033b3cdaf0c1939938a446bc6911aa0e2bf0fa0c22c3e618390e8d96e1 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22, name=multipathd, org.label-schema.build-date=20250923, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd)
Oct  1 09:27:24 np0005464214 podman[241112]: multipathd
Oct  1 09:27:24 np0005464214 podman[241282]: multipathd
Oct  1 09:27:24 np0005464214 systemd[1]: edpm_multipathd.service: Deactivated successfully.
Oct  1 09:27:24 np0005464214 systemd[1]: Stopped multipathd container.
Oct  1 09:27:24 np0005464214 systemd[1]: Starting multipathd container...
Oct  1 09:27:24 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:27:24 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f98b8d3ca354493903085781d259891c9a516ea186720f0f87b701a39b7916ed/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Oct  1 09:27:24 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f98b8d3ca354493903085781d259891c9a516ea186720f0f87b701a39b7916ed/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Oct  1 09:27:24 np0005464214 podman[241317]: 2025-10-01 13:27:24.740794484 +0000 UTC m=+0.214221221 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 09:27:24 np0005464214 podman[241317]: 2025-10-01 13:27:24.867723779 +0000 UTC m=+0.341150456 container create 252ecce36cea1ebba698875fe70fe83a9e86befaf7ddde370e495040b11e4b35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_kowalevski, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 09:27:24 np0005464214 systemd[1]: Started libpod-conmon-252ecce36cea1ebba698875fe70fe83a9e86befaf7ddde370e495040b11e4b35.scope.
Oct  1 09:27:24 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:27:25 np0005464214 systemd[1]: Started /usr/bin/podman healthcheck run a1dc94033b3cdaf0c1939938a446bc6911aa0e2bf0fa0c22c3e618390e8d96e1.
Oct  1 09:27:25 np0005464214 podman[241317]: 2025-10-01 13:27:25.140297151 +0000 UTC m=+0.613723798 container init 252ecce36cea1ebba698875fe70fe83a9e86befaf7ddde370e495040b11e4b35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_kowalevski, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 09:27:25 np0005464214 podman[241317]: 2025-10-01 13:27:25.149672877 +0000 UTC m=+0.623099524 container start 252ecce36cea1ebba698875fe70fe83a9e86befaf7ddde370e495040b11e4b35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_kowalevski, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 09:27:25 np0005464214 systemd[1]: libpod-252ecce36cea1ebba698875fe70fe83a9e86befaf7ddde370e495040b11e4b35.scope: Deactivated successfully.
Oct  1 09:27:25 np0005464214 upbeat_kowalevski[241344]: 167 167
Oct  1 09:27:25 np0005464214 conmon[241344]: conmon 252ecce36cea1ebba698 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-252ecce36cea1ebba698875fe70fe83a9e86befaf7ddde370e495040b11e4b35.scope/container/memory.events
Oct  1 09:27:25 np0005464214 podman[241317]: 2025-10-01 13:27:25.246905265 +0000 UTC m=+0.720332002 container attach 252ecce36cea1ebba698875fe70fe83a9e86befaf7ddde370e495040b11e4b35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_kowalevski, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 09:27:25 np0005464214 podman[241317]: 2025-10-01 13:27:25.247656279 +0000 UTC m=+0.721082966 container died 252ecce36cea1ebba698875fe70fe83a9e86befaf7ddde370e495040b11e4b35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_kowalevski, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct  1 09:27:25 np0005464214 systemd[1]: var-lib-containers-storage-overlay-f0db6d8eb8494bc85ed162e6f996377c018e9de3936776670b59378a0339a79c-merged.mount: Deactivated successfully.
Oct  1 09:27:26 np0005464214 podman[241317]: 2025-10-01 13:27:26.168677134 +0000 UTC m=+1.642103791 container remove 252ecce36cea1ebba698875fe70fe83a9e86befaf7ddde370e495040b11e4b35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_kowalevski, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct  1 09:27:26 np0005464214 systemd[1]: libpod-conmon-252ecce36cea1ebba698875fe70fe83a9e86befaf7ddde370e495040b11e4b35.scope: Deactivated successfully.
Oct  1 09:27:26 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v684: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:27:26 np0005464214 podman[241296]: 2025-10-01 13:27:26.513545556 +0000 UTC m=+2.044709955 container init a1dc94033b3cdaf0c1939938a446bc6911aa0e2bf0fa0c22c3e618390e8d96e1 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22, name=multipathd, org.label-schema.license=GPLv2, tcib_build_tag=36bccb96575468ec919301205d8daa2c, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250923, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3)
Oct  1 09:27:26 np0005464214 multipathd[241336]: + sudo -E kolla_set_configs
Oct  1 09:27:26 np0005464214 podman[241296]: 2025-10-01 13:27:26.558582067 +0000 UTC m=+2.089746416 container start a1dc94033b3cdaf0c1939938a446bc6911aa0e2bf0fa0c22c3e618390e8d96e1 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22, name=multipathd, org.label-schema.build-date=20250923, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=36bccb96575468ec919301205d8daa2c, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct  1 09:27:26 np0005464214 multipathd[241336]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Oct  1 09:27:26 np0005464214 multipathd[241336]: INFO:__main__:Validating config file
Oct  1 09:27:26 np0005464214 multipathd[241336]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Oct  1 09:27:26 np0005464214 multipathd[241336]: INFO:__main__:Writing out command to execute
Oct  1 09:27:26 np0005464214 multipathd[241336]: ++ cat /run_command
Oct  1 09:27:26 np0005464214 multipathd[241336]: + CMD='/usr/sbin/multipathd -d'
Oct  1 09:27:26 np0005464214 multipathd[241336]: + ARGS=
Oct  1 09:27:26 np0005464214 multipathd[241336]: + sudo kolla_copy_cacerts
Oct  1 09:27:26 np0005464214 podman[241369]: 2025-10-01 13:27:26.566659042 +0000 UTC m=+0.209957407 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 09:27:26 np0005464214 multipathd[241336]: + [[ ! -n '' ]]
Oct  1 09:27:26 np0005464214 multipathd[241336]: + . kolla_extend_start
Oct  1 09:27:26 np0005464214 multipathd[241336]: + echo 'Running command: '\''/usr/sbin/multipathd -d'\'''
Oct  1 09:27:26 np0005464214 multipathd[241336]: Running command: '/usr/sbin/multipathd -d'
Oct  1 09:27:26 np0005464214 multipathd[241336]: + umask 0022
Oct  1 09:27:26 np0005464214 multipathd[241336]: + exec /usr/sbin/multipathd -d
Oct  1 09:27:26 np0005464214 multipathd[241336]: 7807.418257 | --------start up--------
Oct  1 09:27:26 np0005464214 multipathd[241336]: 7807.418282 | read /etc/multipath.conf
Oct  1 09:27:26 np0005464214 multipathd[241336]: 7807.424693 | path checkers start up
Oct  1 09:27:26 np0005464214 podman[241296]: multipathd
Oct  1 09:27:26 np0005464214 systemd[1]: Started multipathd container.
Oct  1 09:27:26 np0005464214 podman[241369]: 2025-10-01 13:27:26.872787373 +0000 UTC m=+0.516085658 container create 1f9105534096ab21c810b799d552dd2d0ac28afe12185c57709718fa4efb25ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_merkle, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 09:27:26 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:27:27 np0005464214 systemd[1]: Started libpod-conmon-1f9105534096ab21c810b799d552dd2d0ac28afe12185c57709718fa4efb25ba.scope.
Oct  1 09:27:27 np0005464214 podman[241384]: 2025-10-01 13:27:27.022860678 +0000 UTC m=+0.450251159 container health_status a1dc94033b3cdaf0c1939938a446bc6911aa0e2bf0fa0c22c3e618390e8d96e1 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20250923, tcib_managed=true)
Oct  1 09:27:27 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:27:27 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b48c8fc53f1eaa0ac065e3fe6a536a09b05facb74555858205b75e0c8ef1a92/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 09:27:27 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b48c8fc53f1eaa0ac065e3fe6a536a09b05facb74555858205b75e0c8ef1a92/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 09:27:27 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b48c8fc53f1eaa0ac065e3fe6a536a09b05facb74555858205b75e0c8ef1a92/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 09:27:27 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b48c8fc53f1eaa0ac065e3fe6a536a09b05facb74555858205b75e0c8ef1a92/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 09:27:27 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b48c8fc53f1eaa0ac065e3fe6a536a09b05facb74555858205b75e0c8ef1a92/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  1 09:27:27 np0005464214 podman[241369]: 2025-10-01 13:27:27.182130995 +0000 UTC m=+0.825429310 container init 1f9105534096ab21c810b799d552dd2d0ac28afe12185c57709718fa4efb25ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_merkle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Oct  1 09:27:27 np0005464214 podman[241369]: 2025-10-01 13:27:27.19436512 +0000 UTC m=+0.837663395 container start 1f9105534096ab21c810b799d552dd2d0ac28afe12185c57709718fa4efb25ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_merkle, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 09:27:27 np0005464214 podman[241369]: 2025-10-01 13:27:27.314844362 +0000 UTC m=+0.958142677 container attach 1f9105534096ab21c810b799d552dd2d0ac28afe12185c57709718fa4efb25ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_merkle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct  1 09:27:27 np0005464214 python3.9[241576]: ansible-ansible.builtin.file Invoked with path=/etc/multipath/.multipath_restart_required state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 09:27:28 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v685: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:27:28 np0005464214 laughing_merkle[241496]: --> passed data devices: 0 physical, 3 LVM
Oct  1 09:27:28 np0005464214 laughing_merkle[241496]: --> relative data size: 1.0
Oct  1 09:27:28 np0005464214 laughing_merkle[241496]: --> All data devices are unavailable
Oct  1 09:27:28 np0005464214 systemd[1]: libpod-1f9105534096ab21c810b799d552dd2d0ac28afe12185c57709718fa4efb25ba.scope: Deactivated successfully.
Oct  1 09:27:28 np0005464214 systemd[1]: libpod-1f9105534096ab21c810b799d552dd2d0ac28afe12185c57709718fa4efb25ba.scope: Consumed 1.044s CPU time.
Oct  1 09:27:28 np0005464214 podman[241369]: 2025-10-01 13:27:28.297932726 +0000 UTC m=+1.941231041 container died 1f9105534096ab21c810b799d552dd2d0ac28afe12185c57709718fa4efb25ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_merkle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 09:27:28 np0005464214 python3.9[241747]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Oct  1 09:27:28 np0005464214 systemd[1]: var-lib-containers-storage-overlay-4b48c8fc53f1eaa0ac065e3fe6a536a09b05facb74555858205b75e0c8ef1a92-merged.mount: Deactivated successfully.
Oct  1 09:27:29 np0005464214 python3.9[241915]: ansible-community.general.modprobe Invoked with name=nvme-fabrics state=present params= persistent=disabled
Oct  1 09:27:29 np0005464214 kernel: Key type psk registered
Oct  1 09:27:29 np0005464214 podman[241369]: 2025-10-01 13:27:29.526944069 +0000 UTC m=+3.170242354 container remove 1f9105534096ab21c810b799d552dd2d0ac28afe12185c57709718fa4efb25ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_merkle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 09:27:29 np0005464214 systemd[1]: libpod-conmon-1f9105534096ab21c810b799d552dd2d0ac28afe12185c57709718fa4efb25ba.scope: Deactivated successfully.
Oct  1 09:27:30 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v686: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:27:30 np0005464214 python3.9[242189]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/nvme-fabrics.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 09:27:30 np0005464214 podman[242216]: 2025-10-01 13:27:30.375021731 +0000 UTC m=+0.121311749 container create 2abef67041cc18928b0eb360c7620f09f7b289ee830dadb476df5ff244bb1e11 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_faraday, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 09:27:30 np0005464214 podman[242216]: 2025-10-01 13:27:30.279026952 +0000 UTC m=+0.025316960 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 09:27:30 np0005464214 systemd[1]: Started libpod-conmon-2abef67041cc18928b0eb360c7620f09f7b289ee830dadb476df5ff244bb1e11.scope.
Oct  1 09:27:30 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:27:30 np0005464214 podman[242216]: 2025-10-01 13:27:30.661986978 +0000 UTC m=+0.408276976 container init 2abef67041cc18928b0eb360c7620f09f7b289ee830dadb476df5ff244bb1e11 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_faraday, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct  1 09:27:30 np0005464214 podman[242216]: 2025-10-01 13:27:30.67316588 +0000 UTC m=+0.419455888 container start 2abef67041cc18928b0eb360c7620f09f7b289ee830dadb476df5ff244bb1e11 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_faraday, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 09:27:30 np0005464214 determined_faraday[242268]: 167 167
Oct  1 09:27:30 np0005464214 systemd[1]: libpod-2abef67041cc18928b0eb360c7620f09f7b289ee830dadb476df5ff244bb1e11.scope: Deactivated successfully.
Oct  1 09:27:30 np0005464214 podman[242216]: 2025-10-01 13:27:30.820175559 +0000 UTC m=+0.566465537 container attach 2abef67041cc18928b0eb360c7620f09f7b289ee830dadb476df5ff244bb1e11 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_faraday, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Oct  1 09:27:30 np0005464214 podman[242216]: 2025-10-01 13:27:30.820545091 +0000 UTC m=+0.566835079 container died 2abef67041cc18928b0eb360c7620f09f7b289ee830dadb476df5ff244bb1e11 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_faraday, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 09:27:30 np0005464214 python3.9[242371]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/nvme-fabrics.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759325249.6917362-918-175546479693161/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=783c778f0c68cc414f35486f234cbb1cf3f9bbff backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 09:27:31 np0005464214 systemd[1]: var-lib-containers-storage-overlay-ae2eabcfbedb149c11cc71650d0014a38388fd139ec9a93f17721a485049e915-merged.mount: Deactivated successfully.
Oct  1 09:27:31 np0005464214 podman[242216]: 2025-10-01 13:27:31.816277352 +0000 UTC m=+1.562567330 container remove 2abef67041cc18928b0eb360c7620f09f7b289ee830dadb476df5ff244bb1e11 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_faraday, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Oct  1 09:27:31 np0005464214 systemd[1]: libpod-conmon-2abef67041cc18928b0eb360c7620f09f7b289ee830dadb476df5ff244bb1e11.scope: Deactivated successfully.
Oct  1 09:27:31 np0005464214 python3.9[242524]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=nvme-fabrics  mode=0644 state=present path=/etc/modules backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 09:27:31 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:27:32 np0005464214 podman[242556]: 2025-10-01 13:27:32.017564724 +0000 UTC m=+0.033272771 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 09:27:32 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v687: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:27:32 np0005464214 podman[242556]: 2025-10-01 13:27:32.227461528 +0000 UTC m=+0.243169555 container create b960dfc4921ff8b2d5b7c04d97b6ad7517656d56b4c8518aee7ac580148f5201 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_ramanujan, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Oct  1 09:27:32 np0005464214 systemd[1]: Started libpod-conmon-b960dfc4921ff8b2d5b7c04d97b6ad7517656d56b4c8518aee7ac580148f5201.scope.
Oct  1 09:27:32 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:27:32 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e4c4145ab8a282655ab34b4a44c853e581154a1116527ea892d98ae30b74a06e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 09:27:32 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e4c4145ab8a282655ab34b4a44c853e581154a1116527ea892d98ae30b74a06e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 09:27:32 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e4c4145ab8a282655ab34b4a44c853e581154a1116527ea892d98ae30b74a06e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 09:27:32 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e4c4145ab8a282655ab34b4a44c853e581154a1116527ea892d98ae30b74a06e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 09:27:32 np0005464214 podman[242556]: 2025-10-01 13:27:32.532466263 +0000 UTC m=+0.548174360 container init b960dfc4921ff8b2d5b7c04d97b6ad7517656d56b4c8518aee7ac580148f5201 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_ramanujan, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct  1 09:27:32 np0005464214 podman[242556]: 2025-10-01 13:27:32.543027497 +0000 UTC m=+0.558735554 container start b960dfc4921ff8b2d5b7c04d97b6ad7517656d56b4c8518aee7ac580148f5201 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_ramanujan, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 09:27:32 np0005464214 python3.9[242703]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct  1 09:27:32 np0005464214 podman[242556]: 2025-10-01 13:27:32.823995813 +0000 UTC m=+0.839703900 container attach b960dfc4921ff8b2d5b7c04d97b6ad7517656d56b4c8518aee7ac580148f5201 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_ramanujan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Oct  1 09:27:32 np0005464214 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Oct  1 09:27:32 np0005464214 systemd[1]: Stopped Load Kernel Modules.
Oct  1 09:27:32 np0005464214 systemd[1]: Stopping Load Kernel Modules...
Oct  1 09:27:32 np0005464214 systemd[1]: Starting Load Kernel Modules...
Oct  1 09:27:32 np0005464214 systemd[1]: Finished Load Kernel Modules.
Oct  1 09:27:33 np0005464214 nice_ramanujan[242652]: {
Oct  1 09:27:33 np0005464214 nice_ramanujan[242652]:    "0": [
Oct  1 09:27:33 np0005464214 nice_ramanujan[242652]:        {
Oct  1 09:27:33 np0005464214 nice_ramanujan[242652]:            "devices": [
Oct  1 09:27:33 np0005464214 nice_ramanujan[242652]:                "/dev/loop3"
Oct  1 09:27:33 np0005464214 nice_ramanujan[242652]:            ],
Oct  1 09:27:33 np0005464214 nice_ramanujan[242652]:            "lv_name": "ceph_lv0",
Oct  1 09:27:33 np0005464214 nice_ramanujan[242652]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  1 09:27:33 np0005464214 nice_ramanujan[242652]:            "lv_size": "21470642176",
Oct  1 09:27:33 np0005464214 nice_ramanujan[242652]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 09:27:33 np0005464214 nice_ramanujan[242652]:            "lv_uuid": "wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS",
Oct  1 09:27:33 np0005464214 nice_ramanujan[242652]:            "name": "ceph_lv0",
Oct  1 09:27:33 np0005464214 nice_ramanujan[242652]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  1 09:27:33 np0005464214 nice_ramanujan[242652]:            "tags": {
Oct  1 09:27:33 np0005464214 nice_ramanujan[242652]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  1 09:27:33 np0005464214 nice_ramanujan[242652]:                "ceph.block_uuid": "wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS",
Oct  1 09:27:33 np0005464214 nice_ramanujan[242652]:                "ceph.cephx_lockbox_secret": "",
Oct  1 09:27:33 np0005464214 nice_ramanujan[242652]:                "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 09:27:33 np0005464214 nice_ramanujan[242652]:                "ceph.cluster_name": "ceph",
Oct  1 09:27:33 np0005464214 nice_ramanujan[242652]:                "ceph.crush_device_class": "",
Oct  1 09:27:33 np0005464214 nice_ramanujan[242652]:                "ceph.encrypted": "0",
Oct  1 09:27:33 np0005464214 nice_ramanujan[242652]:                "ceph.osd_fsid": "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982",
Oct  1 09:27:33 np0005464214 nice_ramanujan[242652]:                "ceph.osd_id": "0",
Oct  1 09:27:33 np0005464214 nice_ramanujan[242652]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 09:27:33 np0005464214 nice_ramanujan[242652]:                "ceph.type": "block",
Oct  1 09:27:33 np0005464214 nice_ramanujan[242652]:                "ceph.vdo": "0"
Oct  1 09:27:33 np0005464214 nice_ramanujan[242652]:            },
Oct  1 09:27:33 np0005464214 nice_ramanujan[242652]:            "type": "block",
Oct  1 09:27:33 np0005464214 nice_ramanujan[242652]:            "vg_name": "ceph_vg0"
Oct  1 09:27:33 np0005464214 nice_ramanujan[242652]:        }
Oct  1 09:27:33 np0005464214 nice_ramanujan[242652]:    ],
Oct  1 09:27:33 np0005464214 nice_ramanujan[242652]:    "1": [
Oct  1 09:27:33 np0005464214 nice_ramanujan[242652]:        {
Oct  1 09:27:33 np0005464214 nice_ramanujan[242652]:            "devices": [
Oct  1 09:27:33 np0005464214 nice_ramanujan[242652]:                "/dev/loop4"
Oct  1 09:27:33 np0005464214 nice_ramanujan[242652]:            ],
Oct  1 09:27:33 np0005464214 nice_ramanujan[242652]:            "lv_name": "ceph_lv1",
Oct  1 09:27:33 np0005464214 nice_ramanujan[242652]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  1 09:27:33 np0005464214 nice_ramanujan[242652]:            "lv_size": "21470642176",
Oct  1 09:27:33 np0005464214 nice_ramanujan[242652]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f5852bc7-e830-489a-b8a9-42dfbbe71426,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 09:27:33 np0005464214 nice_ramanujan[242652]:            "lv_uuid": "BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY",
Oct  1 09:27:33 np0005464214 nice_ramanujan[242652]:            "name": "ceph_lv1",
Oct  1 09:27:33 np0005464214 nice_ramanujan[242652]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  1 09:27:33 np0005464214 nice_ramanujan[242652]:            "tags": {
Oct  1 09:27:33 np0005464214 nice_ramanujan[242652]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  1 09:27:33 np0005464214 nice_ramanujan[242652]:                "ceph.block_uuid": "BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY",
Oct  1 09:27:33 np0005464214 nice_ramanujan[242652]:                "ceph.cephx_lockbox_secret": "",
Oct  1 09:27:33 np0005464214 nice_ramanujan[242652]:                "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 09:27:33 np0005464214 nice_ramanujan[242652]:                "ceph.cluster_name": "ceph",
Oct  1 09:27:33 np0005464214 nice_ramanujan[242652]:                "ceph.crush_device_class": "",
Oct  1 09:27:33 np0005464214 nice_ramanujan[242652]:                "ceph.encrypted": "0",
Oct  1 09:27:33 np0005464214 nice_ramanujan[242652]:                "ceph.osd_fsid": "f5852bc7-e830-489a-b8a9-42dfbbe71426",
Oct  1 09:27:33 np0005464214 nice_ramanujan[242652]:                "ceph.osd_id": "1",
Oct  1 09:27:33 np0005464214 nice_ramanujan[242652]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 09:27:33 np0005464214 nice_ramanujan[242652]:                "ceph.type": "block",
Oct  1 09:27:33 np0005464214 nice_ramanujan[242652]:                "ceph.vdo": "0"
Oct  1 09:27:33 np0005464214 nice_ramanujan[242652]:            },
Oct  1 09:27:33 np0005464214 nice_ramanujan[242652]:            "type": "block",
Oct  1 09:27:33 np0005464214 nice_ramanujan[242652]:            "vg_name": "ceph_vg1"
Oct  1 09:27:33 np0005464214 nice_ramanujan[242652]:        }
Oct  1 09:27:33 np0005464214 nice_ramanujan[242652]:    ],
Oct  1 09:27:33 np0005464214 nice_ramanujan[242652]:    "2": [
Oct  1 09:27:33 np0005464214 nice_ramanujan[242652]:        {
Oct  1 09:27:33 np0005464214 nice_ramanujan[242652]:            "devices": [
Oct  1 09:27:33 np0005464214 nice_ramanujan[242652]:                "/dev/loop5"
Oct  1 09:27:33 np0005464214 nice_ramanujan[242652]:            ],
Oct  1 09:27:33 np0005464214 nice_ramanujan[242652]:            "lv_name": "ceph_lv2",
Oct  1 09:27:33 np0005464214 nice_ramanujan[242652]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  1 09:27:33 np0005464214 nice_ramanujan[242652]:            "lv_size": "21470642176",
Oct  1 09:27:33 np0005464214 nice_ramanujan[242652]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c4c937e2-a8a8-47c3-af37-fdedb6fff1f9,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 09:27:33 np0005464214 nice_ramanujan[242652]:            "lv_uuid": "mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK",
Oct  1 09:27:33 np0005464214 nice_ramanujan[242652]:            "name": "ceph_lv2",
Oct  1 09:27:33 np0005464214 nice_ramanujan[242652]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  1 09:27:33 np0005464214 nice_ramanujan[242652]:            "tags": {
Oct  1 09:27:33 np0005464214 nice_ramanujan[242652]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  1 09:27:33 np0005464214 nice_ramanujan[242652]:                "ceph.block_uuid": "mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK",
Oct  1 09:27:33 np0005464214 nice_ramanujan[242652]:                "ceph.cephx_lockbox_secret": "",
Oct  1 09:27:33 np0005464214 nice_ramanujan[242652]:                "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 09:27:33 np0005464214 nice_ramanujan[242652]:                "ceph.cluster_name": "ceph",
Oct  1 09:27:33 np0005464214 nice_ramanujan[242652]:                "ceph.crush_device_class": "",
Oct  1 09:27:33 np0005464214 nice_ramanujan[242652]:                "ceph.encrypted": "0",
Oct  1 09:27:33 np0005464214 nice_ramanujan[242652]:                "ceph.osd_fsid": "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9",
Oct  1 09:27:33 np0005464214 nice_ramanujan[242652]:                "ceph.osd_id": "2",
Oct  1 09:27:33 np0005464214 nice_ramanujan[242652]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 09:27:33 np0005464214 nice_ramanujan[242652]:                "ceph.type": "block",
Oct  1 09:27:33 np0005464214 nice_ramanujan[242652]:                "ceph.vdo": "0"
Oct  1 09:27:33 np0005464214 nice_ramanujan[242652]:            },
Oct  1 09:27:33 np0005464214 nice_ramanujan[242652]:            "type": "block",
Oct  1 09:27:33 np0005464214 nice_ramanujan[242652]:            "vg_name": "ceph_vg2"
Oct  1 09:27:33 np0005464214 nice_ramanujan[242652]:        }
Oct  1 09:27:33 np0005464214 nice_ramanujan[242652]:    ]
Oct  1 09:27:33 np0005464214 nice_ramanujan[242652]: }
Oct  1 09:27:33 np0005464214 systemd[1]: libpod-b960dfc4921ff8b2d5b7c04d97b6ad7517656d56b4c8518aee7ac580148f5201.scope: Deactivated successfully.
Oct  1 09:27:33 np0005464214 podman[242556]: 2025-10-01 13:27:33.424028968 +0000 UTC m=+1.439737075 container died b960dfc4921ff8b2d5b7c04d97b6ad7517656d56b4c8518aee7ac580148f5201 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_ramanujan, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 09:27:33 np0005464214 python3.9[242875]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct  1 09:27:34 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v688: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:27:34 np0005464214 systemd[1]: var-lib-containers-storage-overlay-e4c4145ab8a282655ab34b4a44c853e581154a1116527ea892d98ae30b74a06e-merged.mount: Deactivated successfully.
Oct  1 09:27:34 np0005464214 podman[242556]: 2025-10-01 13:27:34.848983065 +0000 UTC m=+2.864691092 container remove b960dfc4921ff8b2d5b7c04d97b6ad7517656d56b4c8518aee7ac580148f5201 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_ramanujan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct  1 09:27:34 np0005464214 systemd[1]: libpod-conmon-b960dfc4921ff8b2d5b7c04d97b6ad7517656d56b4c8518aee7ac580148f5201.scope: Deactivated successfully.
Oct  1 09:27:35 np0005464214 python3.9[242961]: ansible-ansible.legacy.dnf Invoked with name=['nvme-cli'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct  1 09:27:35 np0005464214 podman[243102]: 2025-10-01 13:27:35.75641194 +0000 UTC m=+0.041279513 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 09:27:35 np0005464214 podman[243102]: 2025-10-01 13:27:35.92340703 +0000 UTC m=+0.208274583 container create fb1760c0e97684e48950d4552da1c337e5d2a19f1edb3b89deee22d3d290ee48 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_jennings, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Oct  1 09:27:36 np0005464214 systemd[1]: Started libpod-conmon-fb1760c0e97684e48950d4552da1c337e5d2a19f1edb3b89deee22d3d290ee48.scope.
Oct  1 09:27:36 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:27:36 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v689: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:27:36 np0005464214 podman[243102]: 2025-10-01 13:27:36.321146381 +0000 UTC m=+0.606014024 container init fb1760c0e97684e48950d4552da1c337e5d2a19f1edb3b89deee22d3d290ee48 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_jennings, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 09:27:36 np0005464214 podman[243102]: 2025-10-01 13:27:36.335828785 +0000 UTC m=+0.620696378 container start fb1760c0e97684e48950d4552da1c337e5d2a19f1edb3b89deee22d3d290ee48 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_jennings, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 09:27:36 np0005464214 optimistic_jennings[243119]: 167 167
Oct  1 09:27:36 np0005464214 systemd[1]: libpod-fb1760c0e97684e48950d4552da1c337e5d2a19f1edb3b89deee22d3d290ee48.scope: Deactivated successfully.
Oct  1 09:27:36 np0005464214 podman[243102]: 2025-10-01 13:27:36.477724942 +0000 UTC m=+0.762592585 container attach fb1760c0e97684e48950d4552da1c337e5d2a19f1edb3b89deee22d3d290ee48 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_jennings, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 09:27:36 np0005464214 podman[243102]: 2025-10-01 13:27:36.478409574 +0000 UTC m=+0.763277157 container died fb1760c0e97684e48950d4552da1c337e5d2a19f1edb3b89deee22d3d290ee48 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_jennings, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 09:27:36 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:27:37 np0005464214 systemd[1]: var-lib-containers-storage-overlay-7059930d084f7d456ca38a3adfb1d49810e4a0bfe063d0818dc6a263ffb4d496-merged.mount: Deactivated successfully.
Oct  1 09:27:38 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v690: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:27:38 np0005464214 podman[243102]: 2025-10-01 13:27:38.792656874 +0000 UTC m=+3.077524477 container remove fb1760c0e97684e48950d4552da1c337e5d2a19f1edb3b89deee22d3d290ee48 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_jennings, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 09:27:38 np0005464214 systemd[1]: libpod-conmon-fb1760c0e97684e48950d4552da1c337e5d2a19f1edb3b89deee22d3d290ee48.scope: Deactivated successfully.
Oct  1 09:27:39 np0005464214 podman[243145]: 2025-10-01 13:27:38.995070602 +0000 UTC m=+0.034512411 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 09:27:39 np0005464214 podman[243145]: 2025-10-01 13:27:39.55811662 +0000 UTC m=+0.597558329 container create 7bdc62931a74d617f1b95e5b5583c76ad3799f7d61f208cc5fde84687ce38189 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_kirch, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 09:27:40 np0005464214 systemd[1]: Started libpod-conmon-7bdc62931a74d617f1b95e5b5583c76ad3799f7d61f208cc5fde84687ce38189.scope.
Oct  1 09:27:40 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:27:40 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86784d6e31b8b7cca36cfc5e62fed4a29633d7fb5c84147656b1317611448634/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 09:27:40 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86784d6e31b8b7cca36cfc5e62fed4a29633d7fb5c84147656b1317611448634/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 09:27:40 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86784d6e31b8b7cca36cfc5e62fed4a29633d7fb5c84147656b1317611448634/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 09:27:40 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86784d6e31b8b7cca36cfc5e62fed4a29633d7fb5c84147656b1317611448634/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 09:27:40 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v691: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:27:40 np0005464214 podman[243145]: 2025-10-01 13:27:40.248791375 +0000 UTC m=+1.288233124 container init 7bdc62931a74d617f1b95e5b5583c76ad3799f7d61f208cc5fde84687ce38189 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_kirch, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 09:27:40 np0005464214 podman[243145]: 2025-10-01 13:27:40.259446801 +0000 UTC m=+1.298888550 container start 7bdc62931a74d617f1b95e5b5583c76ad3799f7d61f208cc5fde84687ce38189 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_kirch, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 09:27:40 np0005464214 podman[243145]: 2025-10-01 13:27:40.434062381 +0000 UTC m=+1.473504110 container attach 7bdc62931a74d617f1b95e5b5583c76ad3799f7d61f208cc5fde84687ce38189 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_kirch, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Oct  1 09:27:41 np0005464214 hungry_kirch[243162]: {
Oct  1 09:27:41 np0005464214 hungry_kirch[243162]:    "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982": {
Oct  1 09:27:41 np0005464214 hungry_kirch[243162]:        "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 09:27:41 np0005464214 hungry_kirch[243162]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  1 09:27:41 np0005464214 hungry_kirch[243162]:        "osd_id": 0,
Oct  1 09:27:41 np0005464214 hungry_kirch[243162]:        "osd_uuid": "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982",
Oct  1 09:27:41 np0005464214 hungry_kirch[243162]:        "type": "bluestore"
Oct  1 09:27:41 np0005464214 hungry_kirch[243162]:    },
Oct  1 09:27:41 np0005464214 hungry_kirch[243162]:    "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9": {
Oct  1 09:27:41 np0005464214 hungry_kirch[243162]:        "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 09:27:41 np0005464214 hungry_kirch[243162]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  1 09:27:41 np0005464214 hungry_kirch[243162]:        "osd_id": 2,
Oct  1 09:27:41 np0005464214 hungry_kirch[243162]:        "osd_uuid": "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9",
Oct  1 09:27:41 np0005464214 hungry_kirch[243162]:        "type": "bluestore"
Oct  1 09:27:41 np0005464214 hungry_kirch[243162]:    },
Oct  1 09:27:41 np0005464214 hungry_kirch[243162]:    "f5852bc7-e830-489a-b8a9-42dfbbe71426": {
Oct  1 09:27:41 np0005464214 hungry_kirch[243162]:        "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 09:27:41 np0005464214 hungry_kirch[243162]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  1 09:27:41 np0005464214 hungry_kirch[243162]:        "osd_id": 1,
Oct  1 09:27:41 np0005464214 hungry_kirch[243162]:        "osd_uuid": "f5852bc7-e830-489a-b8a9-42dfbbe71426",
Oct  1 09:27:41 np0005464214 hungry_kirch[243162]:        "type": "bluestore"
Oct  1 09:27:41 np0005464214 hungry_kirch[243162]:    }
Oct  1 09:27:41 np0005464214 hungry_kirch[243162]: }
Oct  1 09:27:41 np0005464214 systemd[1]: libpod-7bdc62931a74d617f1b95e5b5583c76ad3799f7d61f208cc5fde84687ce38189.scope: Deactivated successfully.
Oct  1 09:27:41 np0005464214 podman[243145]: 2025-10-01 13:27:41.451748136 +0000 UTC m=+2.491189845 container died 7bdc62931a74d617f1b95e5b5583c76ad3799f7d61f208cc5fde84687ce38189 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_kirch, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 09:27:41 np0005464214 systemd[1]: libpod-7bdc62931a74d617f1b95e5b5583c76ad3799f7d61f208cc5fde84687ce38189.scope: Consumed 1.174s CPU time.
Oct  1 09:27:41 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:27:42 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v692: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:27:42 np0005464214 systemd[1]: var-lib-containers-storage-overlay-86784d6e31b8b7cca36cfc5e62fed4a29633d7fb5c84147656b1317611448634-merged.mount: Deactivated successfully.
Oct  1 09:27:44 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v693: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:27:45 np0005464214 podman[243145]: 2025-10-01 13:27:45.461884492 +0000 UTC m=+6.501326241 container remove 7bdc62931a74d617f1b95e5b5583c76ad3799f7d61f208cc5fde84687ce38189 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_kirch, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 09:27:45 np0005464214 podman[243195]: 2025-10-01 13:27:45.47100187 +0000 UTC m=+4.008228877 container health_status c13c85560319b246b9406aed1b6b3015b2c424d3ffff37d0301b6634f5976b0d (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, org.label-schema.build-date=20250923, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=iscsid, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true)
Oct  1 09:27:45 np0005464214 systemd[1]: libpod-conmon-7bdc62931a74d617f1b95e5b5583c76ad3799f7d61f208cc5fde84687ce38189.scope: Deactivated successfully.
Oct  1 09:27:45 np0005464214 podman[243233]: 2025-10-01 13:27:45.578628826 +0000 UTC m=+0.126641057 container health_status 583cde33fa111e3f81ff9a7adf51b7bee43cd123d54d60383b2d474b3b5066ad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250923, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=36bccb96575468ec919301205d8daa2c)
Oct  1 09:27:45 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  1 09:27:45 np0005464214 systemd[1]: Reloading.
Oct  1 09:27:45 np0005464214 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  1 09:27:45 np0005464214 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  1 09:27:46 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v694: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:27:46 np0005464214 systemd[1]: Reloading.
Oct  1 09:27:46 np0005464214 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  1 09:27:46 np0005464214 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  1 09:27:46 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:27:46 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  1 09:27:46 np0005464214 systemd-logind[818]: Watching system buttons on /dev/input/event0 (Power Button)
Oct  1 09:27:46 np0005464214 systemd-logind[818]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Oct  1 09:27:47 np0005464214 lvm[243368]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Oct  1 09:27:47 np0005464214 lvm[243367]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Oct  1 09:27:47 np0005464214 lvm[243367]: VG ceph_vg2 finished
Oct  1 09:27:47 np0005464214 lvm[243368]: VG ceph_vg1 finished
Oct  1 09:27:47 np0005464214 lvm[243369]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct  1 09:27:47 np0005464214 lvm[243369]: VG ceph_vg0 finished
Oct  1 09:27:47 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:27:47 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:27:47 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev 26b6f65d-c9ab-4f5e-a751-8c9f852fcf5a does not exist
Oct  1 09:27:47 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev b440f93d-9367-4f24-9f39-53f508b8887f does not exist
Oct  1 09:27:47 np0005464214 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct  1 09:27:47 np0005464214 systemd[1]: Starting man-db-cache-update.service...
Oct  1 09:27:47 np0005464214 systemd[1]: Reloading.
Oct  1 09:27:47 np0005464214 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  1 09:27:47 np0005464214 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  1 09:27:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] Optimize plan auto_2025-10-01_13:27:47
Oct  1 09:27:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  1 09:27:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] do_upmap
Oct  1 09:27:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] pools ['cephfs.cephfs.data', 'volumes', 'default.rgw.control', 'vms', 'backups', '.rgw.root', 'images', 'default.rgw.log', 'cephfs.cephfs.meta', '.mgr', 'default.rgw.meta']
Oct  1 09:27:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] prepared 0/10 changes
Oct  1 09:27:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 09:27:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 09:27:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 09:27:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 09:27:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 09:27:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 09:27:47 np0005464214 systemd[1]: Queuing reload/restart jobs for marked units…
Oct  1 09:27:47 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  1 09:27:47 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  1 09:27:47 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  1 09:27:47 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  1 09:27:47 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  1 09:27:47 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  1 09:27:47 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  1 09:27:47 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  1 09:27:47 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  1 09:27:47 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  1 09:27:48 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v695: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:27:49 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:27:49 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:27:49 np0005464214 podman[243571]: 2025-10-01 13:27:49.510931527 +0000 UTC m=+0.072266512 container health_status dfa509645741d50db256c392f2c7e50ef8a1653f296eb853aefbe3d2ba4746c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, container_name=ovn_metadata_agent, org.label-schema.build-date=20250923, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct  1 09:27:50 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v696: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:27:51 np0005464214 python3.9[244784]: ansible-ansible.builtin.file Invoked with mode=0600 path=/etc/iscsi/.iscsid_restart_required state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 09:27:52 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:27:52 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v697: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:27:52 np0005464214 python3.9[244934]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  1 09:27:53 np0005464214 python3.9[245092]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/ssh/ssh_known_hosts state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 09:27:54 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v698: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:27:55 np0005464214 python3.9[245244]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct  1 09:27:55 np0005464214 systemd[1]: Reloading.
Oct  1 09:27:55 np0005464214 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  1 09:27:55 np0005464214 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  1 09:27:56 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v699: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:27:56 np0005464214 python3.9[245429]: ansible-ansible.builtin.service_facts Invoked
Oct  1 09:27:56 np0005464214 network[245446]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct  1 09:27:56 np0005464214 network[245447]: 'network-scripts' will be removed from distribution in near future.
Oct  1 09:27:56 np0005464214 network[245448]: It is advised to switch to 'NetworkManager' instead for network management.
Oct  1 09:27:57 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:27:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] _maybe_adjust
Oct  1 09:27:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:27:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  1 09:27:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:27:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 09:27:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:27:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 09:27:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:27:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 09:27:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:27:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 09:27:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:27:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct  1 09:27:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:27:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 09:27:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:27:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  1 09:27:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:27:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  1 09:27:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:27:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 09:27:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:27:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  1 09:27:57 np0005464214 ceph-mon[74802]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #33. Immutable memtables: 0.
Oct  1 09:27:57 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:27:57.815192) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  1 09:27:57 np0005464214 ceph-mon[74802]: rocksdb: [db/flush_job.cc:856] [default] [JOB 13] Flushing memtable with next log file: 33
Oct  1 09:27:57 np0005464214 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759325277815244, "job": 13, "event": "flush_started", "num_memtables": 1, "num_entries": 1392, "num_deletes": 506, "total_data_size": 1747996, "memory_usage": 1774944, "flush_reason": "Manual Compaction"}
Oct  1 09:27:57 np0005464214 ceph-mon[74802]: rocksdb: [db/flush_job.cc:885] [default] [JOB 13] Level-0 flush table #34: started
Oct  1 09:27:57 np0005464214 podman[245454]: 2025-10-01 13:27:57.875328958 +0000 UTC m=+0.133890606 container health_status a1dc94033b3cdaf0c1939938a446bc6911aa0e2bf0fa0c22c3e618390e8d96e1 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c, org.label-schema.schema-version=1.0, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20250923, tcib_managed=true, container_name=multipathd)
Oct  1 09:27:58 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v700: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:27:58 np0005464214 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759325278431180, "cf_name": "default", "job": 13, "event": "table_file_creation", "file_number": 34, "file_size": 1720871, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13599, "largest_seqno": 14990, "table_properties": {"data_size": 1714771, "index_size": 2919, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2117, "raw_key_size": 15195, "raw_average_key_size": 18, "raw_value_size": 1700576, "raw_average_value_size": 2019, "num_data_blocks": 134, "num_entries": 842, "num_filter_entries": 842, "num_deletions": 506, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759325161, "oldest_key_time": 1759325161, "file_creation_time": 1759325277, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e1fb0bd7-c6bf-40f2-8bfb-ffd039289f42", "db_session_id": "NJZTWL88H5HSB4Q4NEC9", "orig_file_number": 34, "seqno_to_time_mapping": "N/A"}}
Oct  1 09:27:58 np0005464214 ceph-mon[74802]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 13] Flush lasted 616150 microseconds, and 8431 cpu microseconds.
Oct  1 09:27:58 np0005464214 ceph-mon[74802]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  1 09:27:58 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:27:58.431342) [db/flush_job.cc:967] [default] [JOB 13] Level-0 flush table #34: 1720871 bytes OK
Oct  1 09:27:58 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:27:58.431397) [db/memtable_list.cc:519] [default] Level-0 commit table #34 started
Oct  1 09:27:58 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:27:58.754389) [db/memtable_list.cc:722] [default] Level-0 commit table #34: memtable #1 done
Oct  1 09:27:58 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:27:58.754457) EVENT_LOG_v1 {"time_micros": 1759325278754440, "job": 13, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  1 09:27:58 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:27:58.754497) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  1 09:27:58 np0005464214 ceph-mon[74802]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 13] Try to delete WAL files size 1740722, prev total WAL file size 1741877, number of live WAL files 2.
Oct  1 09:27:58 np0005464214 ceph-mon[74802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000030.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  1 09:27:58 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:27:58.756600) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0030' seq:72057594037927935, type:22 .. '6C6F676D00323532' seq:0, type:0; will stop at (end)
Oct  1 09:27:58 np0005464214 ceph-mon[74802]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 14] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  1 09:27:58 np0005464214 ceph-mon[74802]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 13 Base level 0, inputs: [34(1680KB)], [32(7490KB)]
Oct  1 09:27:58 np0005464214 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759325278756664, "job": 14, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [34], "files_L6": [32], "score": -1, "input_data_size": 9391339, "oldest_snapshot_seqno": -1}
Oct  1 09:27:59 np0005464214 ceph-mon[74802]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 14] Generated table #35: 3848 keys, 7449934 bytes, temperature: kUnknown
Oct  1 09:27:59 np0005464214 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759325279120359, "cf_name": "default", "job": 14, "event": "table_file_creation", "file_number": 35, "file_size": 7449934, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7421909, "index_size": 17291, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 9669, "raw_key_size": 94173, "raw_average_key_size": 24, "raw_value_size": 7349979, "raw_average_value_size": 1910, "num_data_blocks": 732, "num_entries": 3848, "num_filter_entries": 3848, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759324077, "oldest_key_time": 0, "file_creation_time": 1759325278, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e1fb0bd7-c6bf-40f2-8bfb-ffd039289f42", "db_session_id": "NJZTWL88H5HSB4Q4NEC9", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Oct  1 09:27:59 np0005464214 ceph-mon[74802]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  1 09:27:59 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:27:59.120825) [db/compaction/compaction_job.cc:1663] [default] [JOB 14] Compacted 1@0 + 1@6 files to L6 => 7449934 bytes
Oct  1 09:27:59 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:27:59.938090) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 25.8 rd, 20.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.6, 7.3 +0.0 blob) out(7.1 +0.0 blob), read-write-amplify(9.8) write-amplify(4.3) OK, records in: 4873, records dropped: 1025 output_compression: NoCompression
Oct  1 09:27:59 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:27:59.938160) EVENT_LOG_v1 {"time_micros": 1759325279938133, "job": 14, "event": "compaction_finished", "compaction_time_micros": 363843, "compaction_time_cpu_micros": 36062, "output_level": 6, "num_output_files": 1, "total_output_size": 7449934, "num_input_records": 4873, "num_output_records": 3848, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  1 09:27:59 np0005464214 ceph-mon[74802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000034.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  1 09:27:59 np0005464214 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759325279939412, "job": 14, "event": "table_file_deletion", "file_number": 34}
Oct  1 09:27:59 np0005464214 ceph-mon[74802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000032.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  1 09:27:59 np0005464214 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759325279942644, "job": 14, "event": "table_file_deletion", "file_number": 32}
Oct  1 09:27:59 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:27:58.756431) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 09:27:59 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:27:59.942781) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 09:27:59 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:27:59.942790) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 09:27:59 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:27:59.942793) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 09:27:59 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:27:59.942796) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 09:27:59 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:27:59.942799) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 09:28:00 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v701: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:28:00 np0005464214 ceph-mon[74802]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  1 09:28:00 np0005464214 ceph-mon[74802]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1200.0 total, 600.0 interval#012Cumulative writes: 3309 writes, 14K keys, 3309 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s#012Cumulative WAL: 3308 writes, 3308 syncs, 1.00 writes per sync, written: 0.02 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1274 writes, 5793 keys, 1274 commit groups, 1.0 writes per commit group, ingest: 8.49 MB, 0.01 MB/s#012Interval WAL: 1273 writes, 1273 syncs, 1.00 writes per sync, written: 0.01 GB, 0.01 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     16.6      0.96              0.06         7    0.138       0      0       0.0       0.0#012  L6      1/0    7.10 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   2.6     65.6     54.1      0.78              0.15         6    0.129     24K   3202       0.0       0.0#012 Sum      1/0    7.10 MB   0.0      0.0     0.0      0.0       0.1      0.0       0.0   3.6     29.3     33.3      1.74              0.20        13    0.134     24K   3202       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   4.8     26.3     26.5      1.34              0.13         8    0.167     17K   2469       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0     65.6     54.1      0.78              0.15         6    0.129     24K   3202       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     16.7      0.95              0.06         6    0.159       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      4.6      0.01              0.00         1    0.011       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.0 total, 600.0 interval#012Flush(GB): cumulative 0.016, interval 0.007#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.06 GB write, 0.05 MB/s write, 0.05 GB read, 0.04 MB/s read, 1.7 seconds#012Interval compaction: 0.03 GB write, 0.06 MB/s write, 0.03 GB read, 0.06 MB/s read, 1.3 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55daa55431f0#2 capacity: 308.00 MB usage: 1.65 MB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 0 last_secs: 7e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(103,1.43 MB,0.463624%) FilterBlock(14,75.80 KB,0.0240326%) IndexBlock(14,153.55 KB,0.0486845%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Oct  1 09:28:01 np0005464214 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Oct  1 09:28:01 np0005464214 systemd[1]: Finished man-db-cache-update.service.
Oct  1 09:28:01 np0005464214 systemd[1]: man-db-cache-update.service: Consumed 2.357s CPU time.
Oct  1 09:28:01 np0005464214 systemd[1]: run-r2af6d32c1b43432bb467b78951e1f15e.service: Deactivated successfully.
Oct  1 09:28:02 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v702: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:28:02 np0005464214 python3.9[245746]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_compute.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  1 09:28:02 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:28:03 np0005464214 python3.9[245899]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_migration_target.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  1 09:28:04 np0005464214 python3.9[246052]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api_cron.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  1 09:28:04 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v703: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:28:04 np0005464214 python3.9[246205]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  1 09:28:05 np0005464214 python3.9[246358]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_conductor.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  1 09:28:06 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v704: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:28:06 np0005464214 python3.9[246511]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_metadata.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  1 09:28:07 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:28:07 np0005464214 python3.9[246664]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_scheduler.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  1 09:28:08 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v705: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:28:08 np0005464214 python3.9[246817]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_vnc_proxy.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  1 09:28:09 np0005464214 python3.9[246970]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 09:28:10 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v706: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:28:10 np0005464214 python3.9[247122]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 09:28:11 np0005464214 python3.9[247274]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 09:28:11 np0005464214 python3.9[247426]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 09:28:12 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v707: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:28:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:28:12.294 161890 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 09:28:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:28:12.296 161890 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 09:28:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:28:12.297 161890 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 09:28:12 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:28:12 np0005464214 python3.9[247578]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 09:28:13 np0005464214 python3.9[247730]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 09:28:13 np0005464214 python3.9[247882]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 09:28:14 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v708: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:28:14 np0005464214 python3.9[248034]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 09:28:15 np0005464214 python3.9[248186]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 09:28:16 np0005464214 podman[248311]: 2025-10-01 13:28:16.20811734 +0000 UTC m=+0.097353574 container health_status c13c85560319b246b9406aed1b6b3015b2c424d3ffff37d0301b6634f5976b0d (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250923, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true)
Oct  1 09:28:16 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v709: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:28:16 np0005464214 podman[248310]: 2025-10-01 13:28:16.237864679 +0000 UTC m=+0.135600151 container health_status 583cde33fa111e3f81ff9a7adf51b7bee43cd123d54d60383b2d474b3b5066ad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=36bccb96575468ec919301205d8daa2c, io.buildah.version=1.41.3, org.label-schema.build-date=20250923)
Oct  1 09:28:16 np0005464214 python3.9[248374]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 09:28:17 np0005464214 python3.9[248537]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 09:28:17 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:28:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 09:28:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 09:28:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 09:28:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 09:28:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 09:28:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 09:28:17 np0005464214 python3.9[248689]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 09:28:18 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v710: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:28:18 np0005464214 python3.9[248841]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 09:28:19 np0005464214 python3.9[248993]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 09:28:20 np0005464214 podman[249117]: 2025-10-01 13:28:20.027818956 +0000 UTC m=+0.074671797 container health_status dfa509645741d50db256c392f2c7e50ef8a1653f296eb853aefbe3d2ba4746c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20250923, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent)
Oct  1 09:28:20 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v711: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:28:20 np0005464214 python3.9[249164]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 09:28:20 np0005464214 python3.9[249317]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 09:28:21 np0005464214 python3.9[249469]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then#012  systemctl disable --now certmonger.service#012  test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service#012fi#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  1 09:28:22 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v712: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:28:22 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:28:22 np0005464214 python3.9[249621]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Oct  1 09:28:23 np0005464214 python3.9[249773]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct  1 09:28:23 np0005464214 systemd[1]: Reloading.
Oct  1 09:28:23 np0005464214 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  1 09:28:23 np0005464214 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  1 09:28:24 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v713: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:28:25 np0005464214 python3.9[249960]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_compute.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  1 09:28:26 np0005464214 python3.9[250113]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_migration_target.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  1 09:28:26 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v714: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:28:26 np0005464214 python3.9[250268]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api_cron.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  1 09:28:27 np0005464214 python3.9[250421]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  1 09:28:27 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:28:28 np0005464214 podman[250546]: 2025-10-01 13:28:28.070790213 +0000 UTC m=+0.094682359 container health_status a1dc94033b3cdaf0c1939938a446bc6911aa0e2bf0fa0c22c3e618390e8d96e1 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20250923, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct  1 09:28:28 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v715: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:28:28 np0005464214 python3.9[250594]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_conductor.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  1 09:28:29 np0005464214 python3.9[250748]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_metadata.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  1 09:28:29 np0005464214 python3.9[250903]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_scheduler.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  1 09:28:30 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v716: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:28:30 np0005464214 python3.9[251056]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_vnc_proxy.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  1 09:28:32 np0005464214 python3.9[251211]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  1 09:28:32 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v717: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:28:32 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:28:32 np0005464214 python3.9[251364]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/containers setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  1 09:28:33 np0005464214 python3.9[251516]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova_nvme_cleaner setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  1 09:28:34 np0005464214 python3.9[251668]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  1 09:28:34 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v718: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:28:34 np0005464214 python3.9[251820]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/_nova_secontext setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  1 09:28:35 np0005464214 python3.9[251972]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova/instances setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  1 09:28:36 np0005464214 python3.9[252124]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/etc/ceph setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  1 09:28:36 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v719: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:28:36 np0005464214 python3.9[252276]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct  1 09:28:37 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:28:37 np0005464214 python3.9[252428]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/iscsi setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct  1 09:28:38 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v720: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:28:38 np0005464214 python3.9[252580]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/var/lib/iscsi setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct  1 09:28:39 np0005464214 python3.9[252732]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/nvme setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct  1 09:28:40 np0005464214 python3.9[252884]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/run/openvswitch setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct  1 09:28:40 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v721: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:28:42 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v722: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:28:43 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:28:44 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v723: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:28:46 np0005464214 python3.9[253036]: ansible-ansible.builtin.getent Invoked with database=passwd key=nova fail_key=True service=None split=None
Oct  1 09:28:46 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v724: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:28:46 np0005464214 podman[253115]: 2025-10-01 13:28:46.503639862 +0000 UTC m=+0.062336549 container health_status c13c85560319b246b9406aed1b6b3015b2c424d3ffff37d0301b6634f5976b0d (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20250923, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=iscsid, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c)
Oct  1 09:28:46 np0005464214 podman[253114]: 2025-10-01 13:28:46.535198858 +0000 UTC m=+0.090354403 container health_status 583cde33fa111e3f81ff9a7adf51b7bee43cd123d54d60383b2d474b3b5066ad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250923, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct  1 09:28:46 np0005464214 python3.9[253234]: ansible-ansible.builtin.group Invoked with gid=42436 name=nova state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Oct  1 09:28:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] Optimize plan auto_2025-10-01_13:28:47
Oct  1 09:28:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  1 09:28:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] do_upmap
Oct  1 09:28:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] pools ['backups', '.rgw.root', 'default.rgw.log', 'images', 'vms', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'volumes', 'default.rgw.meta', '.mgr', 'default.rgw.control']
Oct  1 09:28:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] prepared 0/10 changes
Oct  1 09:28:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 09:28:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 09:28:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 09:28:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 09:28:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 09:28:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 09:28:47 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  1 09:28:47 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  1 09:28:47 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  1 09:28:47 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  1 09:28:47 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  1 09:28:47 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  1 09:28:47 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  1 09:28:47 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  1 09:28:47 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  1 09:28:47 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  1 09:28:48 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:28:48 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v725: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:28:48 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  1 09:28:48 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  1 09:28:48 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  1 09:28:48 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  1 09:28:48 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  1 09:28:48 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:28:48 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev f885fb8e-2889-4555-8a1b-3b5c6ef15cd0 does not exist
Oct  1 09:28:48 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev 769e54c4-3fbd-4576-b0c6-8108455212ed does not exist
Oct  1 09:28:48 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev e4e3553f-373a-4037-9450-28f808e9a779 does not exist
Oct  1 09:28:48 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  1 09:28:48 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  1 09:28:48 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  1 09:28:48 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  1 09:28:48 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  1 09:28:48 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  1 09:28:48 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  1 09:28:48 np0005464214 python3.9[253509]: ansible-ansible.builtin.user Invoked with comment=nova user group=nova groups=['libvirt'] name=nova shell=/bin/sh state=present uid=42436 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Oct  1 09:28:49 np0005464214 podman[253666]: 2025-10-01 13:28:49.042300354 +0000 UTC m=+0.038036551 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 09:28:49 np0005464214 podman[253666]: 2025-10-01 13:28:49.269524074 +0000 UTC m=+0.265260261 container create 185bf4459f7396c18eab3d12e691d4bb21c4703d5b85134a3e698c8e7263cbdf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_zhukovsky, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Oct  1 09:28:49 np0005464214 systemd[1]: Started libpod-conmon-185bf4459f7396c18eab3d12e691d4bb21c4703d5b85134a3e698c8e7263cbdf.scope.
Oct  1 09:28:49 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:28:49 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:28:49 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  1 09:28:49 np0005464214 podman[253666]: 2025-10-01 13:28:49.504365895 +0000 UTC m=+0.500102092 container init 185bf4459f7396c18eab3d12e691d4bb21c4703d5b85134a3e698c8e7263cbdf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_zhukovsky, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 09:28:49 np0005464214 podman[253666]: 2025-10-01 13:28:49.516212689 +0000 UTC m=+0.511948856 container start 185bf4459f7396c18eab3d12e691d4bb21c4703d5b85134a3e698c8e7263cbdf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_zhukovsky, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 09:28:49 np0005464214 nifty_zhukovsky[253688]: 167 167
Oct  1 09:28:49 np0005464214 systemd[1]: libpod-185bf4459f7396c18eab3d12e691d4bb21c4703d5b85134a3e698c8e7263cbdf.scope: Deactivated successfully.
Oct  1 09:28:49 np0005464214 podman[253666]: 2025-10-01 13:28:49.527742923 +0000 UTC m=+0.523479120 container attach 185bf4459f7396c18eab3d12e691d4bb21c4703d5b85134a3e698c8e7263cbdf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_zhukovsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 09:28:49 np0005464214 podman[253666]: 2025-10-01 13:28:49.52859693 +0000 UTC m=+0.524333137 container died 185bf4459f7396c18eab3d12e691d4bb21c4703d5b85134a3e698c8e7263cbdf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_zhukovsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct  1 09:28:49 np0005464214 systemd[1]: var-lib-containers-storage-overlay-1d73d573269a7ac64ec863e0c77b4908e0e11bef786f0dc0e01ebc2e6a8ad706-merged.mount: Deactivated successfully.
Oct  1 09:28:49 np0005464214 podman[253666]: 2025-10-01 13:28:49.906887837 +0000 UTC m=+0.902624054 container remove 185bf4459f7396c18eab3d12e691d4bb21c4703d5b85134a3e698c8e7263cbdf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_zhukovsky, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct  1 09:28:49 np0005464214 systemd[1]: libpod-conmon-185bf4459f7396c18eab3d12e691d4bb21c4703d5b85134a3e698c8e7263cbdf.scope: Deactivated successfully.
Oct  1 09:28:50 np0005464214 podman[253738]: 2025-10-01 13:28:50.182858506 +0000 UTC m=+0.110142427 container create 93c44d1347374ea1426de3743ee199475edf12ca3fc8c9dce09a8cef2ae7805c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_lichterman, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True)
Oct  1 09:28:50 np0005464214 podman[253738]: 2025-10-01 13:28:50.112038681 +0000 UTC m=+0.039322582 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 09:28:50 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v726: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:28:50 np0005464214 systemd[1]: Started libpod-conmon-93c44d1347374ea1426de3743ee199475edf12ca3fc8c9dce09a8cef2ae7805c.scope.
Oct  1 09:28:50 np0005464214 podman[253753]: 2025-10-01 13:28:50.295046417 +0000 UTC m=+0.065453517 container health_status dfa509645741d50db256c392f2c7e50ef8a1653f296eb853aefbe3d2ba4746c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.build-date=20250923, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, tcib_build_tag=36bccb96575468ec919301205d8daa2c, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct  1 09:28:50 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:28:50 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66a22eb5c9299826b2c7e189c276a56ac18c843663b67956b7825d1cfe46213a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 09:28:50 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66a22eb5c9299826b2c7e189c276a56ac18c843663b67956b7825d1cfe46213a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 09:28:50 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66a22eb5c9299826b2c7e189c276a56ac18c843663b67956b7825d1cfe46213a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 09:28:50 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66a22eb5c9299826b2c7e189c276a56ac18c843663b67956b7825d1cfe46213a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 09:28:50 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66a22eb5c9299826b2c7e189c276a56ac18c843663b67956b7825d1cfe46213a/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  1 09:28:50 np0005464214 systemd-logind[818]: New session 52 of user zuul.
Oct  1 09:28:50 np0005464214 systemd[1]: Started Session 52 of User zuul.
Oct  1 09:28:50 np0005464214 podman[253738]: 2025-10-01 13:28:50.37724152 +0000 UTC m=+0.304525501 container init 93c44d1347374ea1426de3743ee199475edf12ca3fc8c9dce09a8cef2ae7805c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_lichterman, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 09:28:50 np0005464214 podman[253738]: 2025-10-01 13:28:50.393166623 +0000 UTC m=+0.320450514 container start 93c44d1347374ea1426de3743ee199475edf12ca3fc8c9dce09a8cef2ae7805c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_lichterman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3)
Oct  1 09:28:50 np0005464214 podman[253738]: 2025-10-01 13:28:50.416081707 +0000 UTC m=+0.343365688 container attach 93c44d1347374ea1426de3743ee199475edf12ca3fc8c9dce09a8cef2ae7805c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_lichterman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct  1 09:28:50 np0005464214 systemd[1]: session-52.scope: Deactivated successfully.
Oct  1 09:28:50 np0005464214 systemd-logind[818]: Session 52 logged out. Waiting for processes to exit.
Oct  1 09:28:50 np0005464214 systemd-logind[818]: Removed session 52.
Oct  1 09:28:51 np0005464214 python3.9[253934]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/config.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 09:28:51 np0005464214 ecstatic_lichterman[253772]: --> passed data devices: 0 physical, 3 LVM
Oct  1 09:28:51 np0005464214 ecstatic_lichterman[253772]: --> relative data size: 1.0
Oct  1 09:28:51 np0005464214 ecstatic_lichterman[253772]: --> All data devices are unavailable
Oct  1 09:28:51 np0005464214 systemd[1]: libpod-93c44d1347374ea1426de3743ee199475edf12ca3fc8c9dce09a8cef2ae7805c.scope: Deactivated successfully.
Oct  1 09:28:51 np0005464214 systemd[1]: libpod-93c44d1347374ea1426de3743ee199475edf12ca3fc8c9dce09a8cef2ae7805c.scope: Consumed 1.094s CPU time.
Oct  1 09:28:51 np0005464214 podman[253738]: 2025-10-01 13:28:51.555275596 +0000 UTC m=+1.482559477 container died 93c44d1347374ea1426de3743ee199475edf12ca3fc8c9dce09a8cef2ae7805c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_lichterman, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Oct  1 09:28:51 np0005464214 systemd[1]: var-lib-containers-storage-overlay-66a22eb5c9299826b2c7e189c276a56ac18c843663b67956b7825d1cfe46213a-merged.mount: Deactivated successfully.
Oct  1 09:28:51 np0005464214 podman[253738]: 2025-10-01 13:28:51.711613169 +0000 UTC m=+1.638897080 container remove 93c44d1347374ea1426de3743ee199475edf12ca3fc8c9dce09a8cef2ae7805c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_lichterman, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct  1 09:28:51 np0005464214 systemd[1]: libpod-conmon-93c44d1347374ea1426de3743ee199475edf12ca3fc8c9dce09a8cef2ae7805c.scope: Deactivated successfully.
Oct  1 09:28:51 np0005464214 python3.9[254090]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/config.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759325330.701829-1555-93289603244759/.source.json follow=False _original_basename=config.json.j2 checksum=2c2474b5f24ef7c9ed37f49680082593e0d1100b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  1 09:28:52 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v727: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:28:52 np0005464214 podman[254307]: 2025-10-01 13:28:52.549105898 +0000 UTC m=+0.073988697 container create a8823a491bb992fd2e9fa68f0badcf30dadae6519e53552df98191578235b3c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_volhard, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 09:28:52 np0005464214 systemd[1]: Started libpod-conmon-a8823a491bb992fd2e9fa68f0badcf30dadae6519e53552df98191578235b3c2.scope.
Oct  1 09:28:52 np0005464214 podman[254307]: 2025-10-01 13:28:52.518040987 +0000 UTC m=+0.042923846 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 09:28:52 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:28:52 np0005464214 podman[254307]: 2025-10-01 13:28:52.671499289 +0000 UTC m=+0.196382208 container init a8823a491bb992fd2e9fa68f0badcf30dadae6519e53552df98191578235b3c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_volhard, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 09:28:52 np0005464214 podman[254307]: 2025-10-01 13:28:52.68767806 +0000 UTC m=+0.212560839 container start a8823a491bb992fd2e9fa68f0badcf30dadae6519e53552df98191578235b3c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_volhard, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True)
Oct  1 09:28:52 np0005464214 quirky_volhard[254354]: 167 167
Oct  1 09:28:52 np0005464214 systemd[1]: libpod-a8823a491bb992fd2e9fa68f0badcf30dadae6519e53552df98191578235b3c2.scope: Deactivated successfully.
Oct  1 09:28:52 np0005464214 podman[254307]: 2025-10-01 13:28:52.69656203 +0000 UTC m=+0.221444839 container attach a8823a491bb992fd2e9fa68f0badcf30dadae6519e53552df98191578235b3c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_volhard, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 09:28:52 np0005464214 conmon[254354]: conmon a8823a491bb992fd2e9f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a8823a491bb992fd2e9fa68f0badcf30dadae6519e53552df98191578235b3c2.scope/container/memory.events
Oct  1 09:28:52 np0005464214 podman[254307]: 2025-10-01 13:28:52.697759128 +0000 UTC m=+0.222641937 container died a8823a491bb992fd2e9fa68f0badcf30dadae6519e53552df98191578235b3c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_volhard, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Oct  1 09:28:52 np0005464214 systemd[1]: var-lib-containers-storage-overlay-c390bbfa38880c4b116c87110128ab80569ebbc619767fdc10ffdcb0523c246a-merged.mount: Deactivated successfully.
Oct  1 09:28:52 np0005464214 podman[254307]: 2025-10-01 13:28:52.804435163 +0000 UTC m=+0.329317932 container remove a8823a491bb992fd2e9fa68f0badcf30dadae6519e53552df98191578235b3c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_volhard, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Oct  1 09:28:52 np0005464214 systemd[1]: libpod-conmon-a8823a491bb992fd2e9fa68f0badcf30dadae6519e53552df98191578235b3c2.scope: Deactivated successfully.
Oct  1 09:28:52 np0005464214 python3.9[254409]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova-blank.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 09:28:53 np0005464214 podman[254423]: 2025-10-01 13:28:53.003582808 +0000 UTC m=+0.057748333 container create 8592ca819f949dc14655a831e1bce5a2e61f4d948c83377370013dd47725e406 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_jennings, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct  1 09:28:53 np0005464214 systemd[1]: Started libpod-conmon-8592ca819f949dc14655a831e1bce5a2e61f4d948c83377370013dd47725e406.scope.
Oct  1 09:28:53 np0005464214 podman[254423]: 2025-10-01 13:28:52.981182991 +0000 UTC m=+0.035348496 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 09:28:53 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:28:53 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/affdf466946453ae1a69ff71d0505fd6470dc6577aa37fb9e02ce8d11dccf571/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 09:28:53 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/affdf466946453ae1a69ff71d0505fd6470dc6577aa37fb9e02ce8d11dccf571/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 09:28:53 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/affdf466946453ae1a69ff71d0505fd6470dc6577aa37fb9e02ce8d11dccf571/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 09:28:53 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/affdf466946453ae1a69ff71d0505fd6470dc6577aa37fb9e02ce8d11dccf571/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 09:28:53 np0005464214 podman[254423]: 2025-10-01 13:28:53.142177172 +0000 UTC m=+0.196342707 container init 8592ca819f949dc14655a831e1bce5a2e61f4d948c83377370013dd47725e406 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_jennings, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Oct  1 09:28:53 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:28:53 np0005464214 podman[254423]: 2025-10-01 13:28:53.152081544 +0000 UTC m=+0.206247039 container start 8592ca819f949dc14655a831e1bce5a2e61f4d948c83377370013dd47725e406 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_jennings, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True)
Oct  1 09:28:53 np0005464214 podman[254423]: 2025-10-01 13:28:53.180032396 +0000 UTC m=+0.234197931 container attach 8592ca819f949dc14655a831e1bce5a2e61f4d948c83377370013dd47725e406 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_jennings, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct  1 09:28:53 np0005464214 python3.9[254519]: ansible-ansible.legacy.file Invoked with mode=0644 setype=container_file_t dest=/var/lib/openstack/config/nova/nova-blank.conf _original_basename=nova-blank.conf recurse=False state=file path=/var/lib/openstack/config/nova/nova-blank.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  1 09:28:53 np0005464214 upbeat_jennings[254464]: {
Oct  1 09:28:53 np0005464214 upbeat_jennings[254464]:    "0": [
Oct  1 09:28:53 np0005464214 upbeat_jennings[254464]:        {
Oct  1 09:28:53 np0005464214 upbeat_jennings[254464]:            "devices": [
Oct  1 09:28:53 np0005464214 upbeat_jennings[254464]:                "/dev/loop3"
Oct  1 09:28:53 np0005464214 upbeat_jennings[254464]:            ],
Oct  1 09:28:53 np0005464214 upbeat_jennings[254464]:            "lv_name": "ceph_lv0",
Oct  1 09:28:53 np0005464214 upbeat_jennings[254464]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  1 09:28:53 np0005464214 upbeat_jennings[254464]:            "lv_size": "21470642176",
Oct  1 09:28:53 np0005464214 upbeat_jennings[254464]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 09:28:53 np0005464214 upbeat_jennings[254464]:            "lv_uuid": "wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS",
Oct  1 09:28:53 np0005464214 upbeat_jennings[254464]:            "name": "ceph_lv0",
Oct  1 09:28:53 np0005464214 upbeat_jennings[254464]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  1 09:28:53 np0005464214 upbeat_jennings[254464]:            "tags": {
Oct  1 09:28:53 np0005464214 upbeat_jennings[254464]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  1 09:28:53 np0005464214 upbeat_jennings[254464]:                "ceph.block_uuid": "wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS",
Oct  1 09:28:53 np0005464214 upbeat_jennings[254464]:                "ceph.cephx_lockbox_secret": "",
Oct  1 09:28:53 np0005464214 upbeat_jennings[254464]:                "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 09:28:53 np0005464214 upbeat_jennings[254464]:                "ceph.cluster_name": "ceph",
Oct  1 09:28:53 np0005464214 upbeat_jennings[254464]:                "ceph.crush_device_class": "",
Oct  1 09:28:53 np0005464214 upbeat_jennings[254464]:                "ceph.encrypted": "0",
Oct  1 09:28:53 np0005464214 upbeat_jennings[254464]:                "ceph.osd_fsid": "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982",
Oct  1 09:28:53 np0005464214 upbeat_jennings[254464]:                "ceph.osd_id": "0",
Oct  1 09:28:53 np0005464214 upbeat_jennings[254464]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 09:28:53 np0005464214 upbeat_jennings[254464]:                "ceph.type": "block",
Oct  1 09:28:53 np0005464214 upbeat_jennings[254464]:                "ceph.vdo": "0"
Oct  1 09:28:53 np0005464214 upbeat_jennings[254464]:            },
Oct  1 09:28:53 np0005464214 upbeat_jennings[254464]:            "type": "block",
Oct  1 09:28:53 np0005464214 upbeat_jennings[254464]:            "vg_name": "ceph_vg0"
Oct  1 09:28:53 np0005464214 upbeat_jennings[254464]:        }
Oct  1 09:28:53 np0005464214 upbeat_jennings[254464]:    ],
Oct  1 09:28:53 np0005464214 upbeat_jennings[254464]:    "1": [
Oct  1 09:28:53 np0005464214 upbeat_jennings[254464]:        {
Oct  1 09:28:53 np0005464214 upbeat_jennings[254464]:            "devices": [
Oct  1 09:28:53 np0005464214 upbeat_jennings[254464]:                "/dev/loop4"
Oct  1 09:28:53 np0005464214 upbeat_jennings[254464]:            ],
Oct  1 09:28:53 np0005464214 upbeat_jennings[254464]:            "lv_name": "ceph_lv1",
Oct  1 09:28:53 np0005464214 upbeat_jennings[254464]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  1 09:28:53 np0005464214 upbeat_jennings[254464]:            "lv_size": "21470642176",
Oct  1 09:28:53 np0005464214 upbeat_jennings[254464]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f5852bc7-e830-489a-b8a9-42dfbbe71426,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 09:28:53 np0005464214 upbeat_jennings[254464]:            "lv_uuid": "BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY",
Oct  1 09:28:53 np0005464214 upbeat_jennings[254464]:            "name": "ceph_lv1",
Oct  1 09:28:53 np0005464214 upbeat_jennings[254464]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  1 09:28:53 np0005464214 upbeat_jennings[254464]:            "tags": {
Oct  1 09:28:53 np0005464214 upbeat_jennings[254464]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  1 09:28:53 np0005464214 upbeat_jennings[254464]:                "ceph.block_uuid": "BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY",
Oct  1 09:28:53 np0005464214 upbeat_jennings[254464]:                "ceph.cephx_lockbox_secret": "",
Oct  1 09:28:53 np0005464214 upbeat_jennings[254464]:                "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 09:28:53 np0005464214 upbeat_jennings[254464]:                "ceph.cluster_name": "ceph",
Oct  1 09:28:53 np0005464214 upbeat_jennings[254464]:                "ceph.crush_device_class": "",
Oct  1 09:28:53 np0005464214 upbeat_jennings[254464]:                "ceph.encrypted": "0",
Oct  1 09:28:53 np0005464214 upbeat_jennings[254464]:                "ceph.osd_fsid": "f5852bc7-e830-489a-b8a9-42dfbbe71426",
Oct  1 09:28:53 np0005464214 upbeat_jennings[254464]:                "ceph.osd_id": "1",
Oct  1 09:28:53 np0005464214 upbeat_jennings[254464]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 09:28:53 np0005464214 upbeat_jennings[254464]:                "ceph.type": "block",
Oct  1 09:28:53 np0005464214 upbeat_jennings[254464]:                "ceph.vdo": "0"
Oct  1 09:28:53 np0005464214 upbeat_jennings[254464]:            },
Oct  1 09:28:53 np0005464214 upbeat_jennings[254464]:            "type": "block",
Oct  1 09:28:53 np0005464214 upbeat_jennings[254464]:            "vg_name": "ceph_vg1"
Oct  1 09:28:53 np0005464214 upbeat_jennings[254464]:        }
Oct  1 09:28:53 np0005464214 upbeat_jennings[254464]:    ],
Oct  1 09:28:53 np0005464214 upbeat_jennings[254464]:    "2": [
Oct  1 09:28:53 np0005464214 upbeat_jennings[254464]:        {
Oct  1 09:28:53 np0005464214 upbeat_jennings[254464]:            "devices": [
Oct  1 09:28:53 np0005464214 upbeat_jennings[254464]:                "/dev/loop5"
Oct  1 09:28:53 np0005464214 upbeat_jennings[254464]:            ],
Oct  1 09:28:53 np0005464214 upbeat_jennings[254464]:            "lv_name": "ceph_lv2",
Oct  1 09:28:53 np0005464214 upbeat_jennings[254464]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  1 09:28:53 np0005464214 upbeat_jennings[254464]:            "lv_size": "21470642176",
Oct  1 09:28:53 np0005464214 upbeat_jennings[254464]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c4c937e2-a8a8-47c3-af37-fdedb6fff1f9,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 09:28:53 np0005464214 upbeat_jennings[254464]:            "lv_uuid": "mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK",
Oct  1 09:28:53 np0005464214 upbeat_jennings[254464]:            "name": "ceph_lv2",
Oct  1 09:28:53 np0005464214 upbeat_jennings[254464]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  1 09:28:53 np0005464214 upbeat_jennings[254464]:            "tags": {
Oct  1 09:28:53 np0005464214 upbeat_jennings[254464]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  1 09:28:53 np0005464214 upbeat_jennings[254464]:                "ceph.block_uuid": "mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK",
Oct  1 09:28:53 np0005464214 upbeat_jennings[254464]:                "ceph.cephx_lockbox_secret": "",
Oct  1 09:28:53 np0005464214 upbeat_jennings[254464]:                "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 09:28:53 np0005464214 upbeat_jennings[254464]:                "ceph.cluster_name": "ceph",
Oct  1 09:28:53 np0005464214 upbeat_jennings[254464]:                "ceph.crush_device_class": "",
Oct  1 09:28:53 np0005464214 upbeat_jennings[254464]:                "ceph.encrypted": "0",
Oct  1 09:28:53 np0005464214 upbeat_jennings[254464]:                "ceph.osd_fsid": "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9",
Oct  1 09:28:53 np0005464214 upbeat_jennings[254464]:                "ceph.osd_id": "2",
Oct  1 09:28:53 np0005464214 upbeat_jennings[254464]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 09:28:53 np0005464214 upbeat_jennings[254464]:                "ceph.type": "block",
Oct  1 09:28:53 np0005464214 upbeat_jennings[254464]:                "ceph.vdo": "0"
Oct  1 09:28:53 np0005464214 upbeat_jennings[254464]:            },
Oct  1 09:28:53 np0005464214 upbeat_jennings[254464]:            "type": "block",
Oct  1 09:28:53 np0005464214 upbeat_jennings[254464]:            "vg_name": "ceph_vg2"
Oct  1 09:28:53 np0005464214 upbeat_jennings[254464]:        }
Oct  1 09:28:53 np0005464214 upbeat_jennings[254464]:    ]
Oct  1 09:28:53 np0005464214 upbeat_jennings[254464]: }
Oct  1 09:28:53 np0005464214 systemd[1]: libpod-8592ca819f949dc14655a831e1bce5a2e61f4d948c83377370013dd47725e406.scope: Deactivated successfully.
Oct  1 09:28:53 np0005464214 podman[254423]: 2025-10-01 13:28:53.959743581 +0000 UTC m=+1.013909066 container died 8592ca819f949dc14655a831e1bce5a2e61f4d948c83377370013dd47725e406 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_jennings, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 09:28:54 np0005464214 systemd[1]: var-lib-containers-storage-overlay-affdf466946453ae1a69ff71d0505fd6470dc6577aa37fb9e02ce8d11dccf571-merged.mount: Deactivated successfully.
Oct  1 09:28:54 np0005464214 python3.9[254674]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/ssh-config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 09:28:54 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v728: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:28:54 np0005464214 podman[254423]: 2025-10-01 13:28:54.347003931 +0000 UTC m=+1.401169456 container remove 8592ca819f949dc14655a831e1bce5a2e61f4d948c83377370013dd47725e406 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_jennings, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct  1 09:28:54 np0005464214 systemd[1]: libpod-conmon-8592ca819f949dc14655a831e1bce5a2e61f4d948c83377370013dd47725e406.scope: Deactivated successfully.
Oct  1 09:28:54 np0005464214 python3.9[254879]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/ssh-config mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759325333.675824-1555-20186330931562/.source follow=False _original_basename=ssh-config checksum=4297f735c41bdc1ff52d72e6f623a02242f37958 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  1 09:28:55 np0005464214 podman[255023]: 2025-10-01 13:28:55.161275958 +0000 UTC m=+0.030130253 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 09:28:55 np0005464214 podman[255023]: 2025-10-01 13:28:55.302527854 +0000 UTC m=+0.171382059 container create 5896bccd0cb6436f55a398255064e9004ee2f735dc395d49ad4eb1351601861c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_northcutt, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 09:28:55 np0005464214 systemd[1]: Started libpod-conmon-5896bccd0cb6436f55a398255064e9004ee2f735dc395d49ad4eb1351601861c.scope.
Oct  1 09:28:55 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:28:55 np0005464214 podman[255023]: 2025-10-01 13:28:55.49630613 +0000 UTC m=+0.365160355 container init 5896bccd0cb6436f55a398255064e9004ee2f735dc395d49ad4eb1351601861c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_northcutt, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 09:28:55 np0005464214 podman[255023]: 2025-10-01 13:28:55.509911549 +0000 UTC m=+0.378765754 container start 5896bccd0cb6436f55a398255064e9004ee2f735dc395d49ad4eb1351601861c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_northcutt, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct  1 09:28:55 np0005464214 crazy_northcutt[255097]: 167 167
Oct  1 09:28:55 np0005464214 systemd[1]: libpod-5896bccd0cb6436f55a398255064e9004ee2f735dc395d49ad4eb1351601861c.scope: Deactivated successfully.
Oct  1 09:28:55 np0005464214 podman[255023]: 2025-10-01 13:28:55.531295303 +0000 UTC m=+0.400149618 container attach 5896bccd0cb6436f55a398255064e9004ee2f735dc395d49ad4eb1351601861c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_northcutt, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct  1 09:28:55 np0005464214 podman[255023]: 2025-10-01 13:28:55.53244048 +0000 UTC m=+0.401294695 container died 5896bccd0cb6436f55a398255064e9004ee2f735dc395d49ad4eb1351601861c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_northcutt, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct  1 09:28:55 np0005464214 python3.9[255118]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/02-nova-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 09:28:55 np0005464214 systemd[1]: var-lib-containers-storage-overlay-61fb9e6c9a7e3f5a3c9debf258ed0bfe24414653c95730c2e5be1abbad2daafe-merged.mount: Deactivated successfully.
Oct  1 09:28:55 np0005464214 podman[255023]: 2025-10-01 13:28:55.972654552 +0000 UTC m=+0.841508797 container remove 5896bccd0cb6436f55a398255064e9004ee2f735dc395d49ad4eb1351601861c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_northcutt, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Oct  1 09:28:56 np0005464214 systemd[1]: libpod-conmon-5896bccd0cb6436f55a398255064e9004ee2f735dc395d49ad4eb1351601861c.scope: Deactivated successfully.
Oct  1 09:28:56 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v729: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:28:56 np0005464214 podman[255263]: 2025-10-01 13:28:56.27968459 +0000 UTC m=+0.115148535 container create 5c0b6a6f1825ba46caa6495d2f0ce9ddcdbf460ae4c58fd3ce5e77f175092dad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_stonebraker, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 09:28:56 np0005464214 podman[255263]: 2025-10-01 13:28:56.199672256 +0000 UTC m=+0.035136301 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 09:28:56 np0005464214 python3.9[255257]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/02-nova-host-specific.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759325335.0316463-1555-270376451294197/.source.conf follow=False _original_basename=02-nova-host-specific.conf.j2 checksum=1feba546d0beacad9258164ab79b8a747685ccc8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  1 09:28:56 np0005464214 systemd[1]: Started libpod-conmon-5c0b6a6f1825ba46caa6495d2f0ce9ddcdbf460ae4c58fd3ce5e77f175092dad.scope.
Oct  1 09:28:56 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:28:56 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/572f7192a735e084ae1ac35dda7c8f3b520ba93df22f296745341bf5cce1f4b5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 09:28:56 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/572f7192a735e084ae1ac35dda7c8f3b520ba93df22f296745341bf5cce1f4b5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 09:28:56 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/572f7192a735e084ae1ac35dda7c8f3b520ba93df22f296745341bf5cce1f4b5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 09:28:56 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/572f7192a735e084ae1ac35dda7c8f3b520ba93df22f296745341bf5cce1f4b5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 09:28:56 np0005464214 podman[255263]: 2025-10-01 13:28:56.474968643 +0000 UTC m=+0.310432628 container init 5c0b6a6f1825ba46caa6495d2f0ce9ddcdbf460ae4c58fd3ce5e77f175092dad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_stonebraker, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 09:28:56 np0005464214 podman[255263]: 2025-10-01 13:28:56.490160142 +0000 UTC m=+0.325624077 container start 5c0b6a6f1825ba46caa6495d2f0ce9ddcdbf460ae4c58fd3ce5e77f175092dad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_stonebraker, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 09:28:56 np0005464214 podman[255263]: 2025-10-01 13:28:56.531629811 +0000 UTC m=+0.367093846 container attach 5c0b6a6f1825ba46caa6495d2f0ce9ddcdbf460ae4c58fd3ce5e77f175092dad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_stonebraker, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 09:28:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] _maybe_adjust
Oct  1 09:28:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:28:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  1 09:28:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:28:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 09:28:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:28:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 09:28:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:28:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 09:28:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:28:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 09:28:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:28:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct  1 09:28:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:28:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 09:28:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:28:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  1 09:28:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:28:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  1 09:28:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:28:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 09:28:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:28:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  1 09:28:57 np0005464214 python3.9[255433]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova_statedir_ownership.py follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 09:28:57 np0005464214 distracted_stonebraker[255279]: {
Oct  1 09:28:57 np0005464214 distracted_stonebraker[255279]:    "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982": {
Oct  1 09:28:57 np0005464214 distracted_stonebraker[255279]:        "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 09:28:57 np0005464214 distracted_stonebraker[255279]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  1 09:28:57 np0005464214 distracted_stonebraker[255279]:        "osd_id": 0,
Oct  1 09:28:57 np0005464214 distracted_stonebraker[255279]:        "osd_uuid": "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982",
Oct  1 09:28:57 np0005464214 distracted_stonebraker[255279]:        "type": "bluestore"
Oct  1 09:28:57 np0005464214 distracted_stonebraker[255279]:    },
Oct  1 09:28:57 np0005464214 distracted_stonebraker[255279]:    "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9": {
Oct  1 09:28:57 np0005464214 distracted_stonebraker[255279]:        "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 09:28:57 np0005464214 distracted_stonebraker[255279]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  1 09:28:57 np0005464214 distracted_stonebraker[255279]:        "osd_id": 2,
Oct  1 09:28:57 np0005464214 distracted_stonebraker[255279]:        "osd_uuid": "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9",
Oct  1 09:28:57 np0005464214 distracted_stonebraker[255279]:        "type": "bluestore"
Oct  1 09:28:57 np0005464214 distracted_stonebraker[255279]:    },
Oct  1 09:28:57 np0005464214 distracted_stonebraker[255279]:    "f5852bc7-e830-489a-b8a9-42dfbbe71426": {
Oct  1 09:28:57 np0005464214 distracted_stonebraker[255279]:        "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 09:28:57 np0005464214 distracted_stonebraker[255279]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  1 09:28:57 np0005464214 distracted_stonebraker[255279]:        "osd_id": 1,
Oct  1 09:28:57 np0005464214 distracted_stonebraker[255279]:        "osd_uuid": "f5852bc7-e830-489a-b8a9-42dfbbe71426",
Oct  1 09:28:57 np0005464214 distracted_stonebraker[255279]:        "type": "bluestore"
Oct  1 09:28:57 np0005464214 distracted_stonebraker[255279]:    }
Oct  1 09:28:57 np0005464214 distracted_stonebraker[255279]: }
Oct  1 09:28:57 np0005464214 systemd[1]: libpod-5c0b6a6f1825ba46caa6495d2f0ce9ddcdbf460ae4c58fd3ce5e77f175092dad.scope: Deactivated successfully.
Oct  1 09:28:57 np0005464214 systemd[1]: libpod-5c0b6a6f1825ba46caa6495d2f0ce9ddcdbf460ae4c58fd3ce5e77f175092dad.scope: Consumed 1.126s CPU time.
Oct  1 09:28:57 np0005464214 podman[255263]: 2025-10-01 13:28:57.617169517 +0000 UTC m=+1.452633452 container died 5c0b6a6f1825ba46caa6495d2f0ce9ddcdbf460ae4c58fd3ce5e77f175092dad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_stonebraker, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0)
Oct  1 09:28:57 np0005464214 systemd[1]: var-lib-containers-storage-overlay-572f7192a735e084ae1ac35dda7c8f3b520ba93df22f296745341bf5cce1f4b5-merged.mount: Deactivated successfully.
Oct  1 09:28:57 np0005464214 python3.9[255582]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/nova_statedir_ownership.py mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759325336.5853362-1555-25632415394607/.source.py follow=False _original_basename=nova_statedir_ownership.py checksum=c6c8a3cfefa5efd60ceb1408c4e977becedb71e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  1 09:28:57 np0005464214 podman[255263]: 2025-10-01 13:28:57.982116802 +0000 UTC m=+1.817580777 container remove 5c0b6a6f1825ba46caa6495d2f0ce9ddcdbf460ae4c58fd3ce5e77f175092dad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_stonebraker, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 09:28:57 np0005464214 systemd[1]: libpod-conmon-5c0b6a6f1825ba46caa6495d2f0ce9ddcdbf460ae4c58fd3ce5e77f175092dad.scope: Deactivated successfully.
Oct  1 09:28:58 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  1 09:28:58 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:28:58 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  1 09:28:58 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:28:58 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:28:58 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev c1461ee4-db9d-4426-8327-671a76606b6e does not exist
Oct  1 09:28:58 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev 8a05b3e4-15a1-4f35-8652-22d9598d1eb5 does not exist
Oct  1 09:28:58 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v730: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:28:58 np0005464214 podman[255720]: 2025-10-01 13:28:58.351265231 +0000 UTC m=+0.069874143 container health_status a1dc94033b3cdaf0c1939938a446bc6911aa0e2bf0fa0c22c3e618390e8d96e1 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250923, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 09:28:58 np0005464214 python3.9[255815]: ansible-ansible.builtin.file Invoked with group=nova mode=0700 owner=nova path=/home/nova/.ssh state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 09:28:59 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:28:59 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:28:59 np0005464214 python3.9[255967]: ansible-ansible.legacy.copy Invoked with dest=/home/nova/.ssh/authorized_keys group=nova mode=0600 owner=nova remote_src=True src=/var/lib/openstack/config/nova/ssh-publickey backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 09:29:00 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v731: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:29:00 np0005464214 python3.9[256119]: ansible-ansible.builtin.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  1 09:29:01 np0005464214 python3.9[256271]: ansible-ansible.legacy.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 09:29:01 np0005464214 python3.9[256394]: ansible-ansible.legacy.copy Invoked with attributes=+i dest=/var/lib/nova/compute_id group=nova mode=0400 owner=nova src=/home/zuul/.ansible/tmp/ansible-tmp-1759325340.5409873-1648-201163619771849/.source _original_basename=.ko34wqwa follow=False checksum=f38746d134c75429bccd8dc462ab009d24eaf0f4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None
Oct  1 09:29:02 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v732: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:29:02 np0005464214 python3.9[256546]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  1 09:29:03 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:29:03 np0005464214 python3.9[256698]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 09:29:04 np0005464214 python3.9[256819]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759325342.8639975-1674-272658090814078/.source.json follow=False _original_basename=nova_compute.json.j2 checksum=d51188376d1ee8ea80c2336e6c661b92261c7db6 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  1 09:29:04 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v733: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:29:04 np0005464214 python3.9[256969]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute_init.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  1 09:29:05 np0005464214 python3.9[257090]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute_init.json mode=0700 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759325344.2349575-1689-211370419721784/.source.json follow=False _original_basename=nova_compute_init.json.j2 checksum=b10d7cb8eb77f002035ee20deefa0512667b71ef backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  1 09:29:06 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v734: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:29:06 np0005464214 python3.9[257242]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute_init.json debug=False
Oct  1 09:29:07 np0005464214 python3.9[257394]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Oct  1 09:29:08 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:29:08 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v735: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:29:08 np0005464214 python3[257546]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute_init.json log_base_path=/var/log/containers/stdouts debug=False
Oct  1 09:29:10 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v736: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:29:12 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v737: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:29:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:29:12.295 161890 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 09:29:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:29:12.296 161890 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 09:29:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:29:12.296 161890 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 09:29:13 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:29:14 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v738: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:29:16 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v739: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:29:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 09:29:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 09:29:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 09:29:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 09:29:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 09:29:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 09:29:18 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v740: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:29:18 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:29:20 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v741: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:29:22 np0005464214 podman[257606]: 2025-10-01 13:29:22.087630269 +0000 UTC m=+4.627943716 container health_status c13c85560319b246b9406aed1b6b3015b2c424d3ffff37d0301b6634f5976b0d (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250923, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, config_id=iscsid, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct  1 09:29:22 np0005464214 podman[257605]: 2025-10-01 13:29:22.13766807 +0000 UTC m=+4.680143637 container health_status 583cde33fa111e3f81ff9a7adf51b7bee43cd123d54d60383b2d474b3b5066ad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20250923, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  1 09:29:22 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v742: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:29:22 np0005464214 podman[257625]: 2025-10-01 13:29:22.268943544 +0000 UTC m=+1.821464964 container health_status dfa509645741d50db256c392f2c7e50ef8a1653f296eb853aefbe3d2ba4746c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250923, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=36bccb96575468ec919301205d8daa2c)
Oct  1 09:29:23 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:29:24 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v743: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:29:26 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v744: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:29:28 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v745: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:29:28 np0005464214 podman[257559]: 2025-10-01 13:29:28.515059586 +0000 UTC m=+20.016723120 image pull 613e2b735827096139e990f475c5ac5de0e55d8048941a4521c0c17a4351e975 quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:7055e8d7b7d72ce697c6077be14c525c019d186002f04765b90a14c82e01cc7c
Oct  1 09:29:28 np0005464214 podman[257697]: 2025-10-01 13:29:28.526338615 +0000 UTC m=+0.102660247 container health_status a1dc94033b3cdaf0c1939938a446bc6911aa0e2bf0fa0c22c3e618390e8d96e1 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250923, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible)
Oct  1 09:29:28 np0005464214 podman[257739]: 2025-10-01 13:29:28.64412166 +0000 UTC m=+0.021756514 image pull 613e2b735827096139e990f475c5ac5de0e55d8048941a4521c0c17a4351e975 quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:7055e8d7b7d72ce697c6077be14c525c019d186002f04765b90a14c82e01cc7c
Oct  1 09:29:28 np0005464214 podman[257739]: 2025-10-01 13:29:28.822500602 +0000 UTC m=+0.200135486 container create ec6d17001d362aeee60bc4936a9335b1b7d34d04625ebe205bb2a61bd337eb01 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:7055e8d7b7d72ce697c6077be14c525c019d186002f04765b90a14c82e01cc7c, name=nova_compute_init, org.label-schema.build-date=20250923, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true, container_name=nova_compute_init, maintainer=OpenStack Kubernetes Operator team, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:7055e8d7b7d72ce697c6077be14c525c019d186002f04765b90a14c82e01cc7c', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, org.label-schema.vendor=CentOS, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Oct  1 09:29:28 np0005464214 python3[257546]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute_init --conmon-pidfile /run/nova_compute_init.pid --env NOVA_STATEDIR_OWNERSHIP_SKIP=/var/lib/nova/compute_id --env __OS_DEBUG=False --label config_id=edpm --label container_name=nova_compute_init --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:7055e8d7b7d72ce697c6077be14c525c019d186002f04765b90a14c82e01cc7c', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']} --log-driver journald --log-level info --network none --privileged=False --security-opt label=disable --user root --volume /dev/log:/dev/log --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z --volume /var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:7055e8d7b7d72ce697c6077be14c525c019d186002f04765b90a14c82e01cc7c bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init
Oct  1 09:29:28 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:29:29 np0005464214 python3.9[257929]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  1 09:29:30 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v746: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:29:30 np0005464214 python3.9[258083]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute.json debug=False
Oct  1 09:29:31 np0005464214 python3.9[258235]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Oct  1 09:29:32 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v747: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:29:32 np0005464214 python3[258387]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute.json log_base_path=/var/log/containers/stdouts debug=False
Oct  1 09:29:32 np0005464214 podman[258424]: 2025-10-01 13:29:32.942865306 +0000 UTC m=+0.081768851 container create 39f99f4d0c4445939eb63cc846504c336d153a902938b54ccbf0294d161b2788 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:7055e8d7b7d72ce697c6077be14c525c019d186002f04765b90a14c82e01cc7c, name=nova_compute, tcib_managed=true, config_id=edpm, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:7055e8d7b7d72ce697c6077be14c525c019d186002f04765b90a14c82e01cc7c', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250923, managed_by=edpm_ansible, container_name=nova_compute, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 09:29:32 np0005464214 podman[258424]: 2025-10-01 13:29:32.902948287 +0000 UTC m=+0.041851902 image pull 613e2b735827096139e990f475c5ac5de0e55d8048941a4521c0c17a4351e975 quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:7055e8d7b7d72ce697c6077be14c525c019d186002f04765b90a14c82e01cc7c
Oct  1 09:29:32 np0005464214 python3[258387]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute --conmon-pidfile /run/nova_compute.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --label config_id=edpm --label container_name=nova_compute --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:7055e8d7b7d72ce697c6077be14c525c019d186002f04765b90a14c82e01cc7c', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']} --log-driver journald --log-level info --network host --privileged=True --user nova --volume /var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro --volume /var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /etc/localtime:/etc/localtime:ro --volume /lib/modules:/lib/modules:ro --volume /dev:/dev --volume /var/lib/libvirt:/var/lib/libvirt --volume /run/libvirt:/run/libvirt:shared --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/iscsi:/var/lib/iscsi:z --volume /etc/multipath:/etc/multipath:z --volume /etc/multipath.conf:/etc/multipath.conf:ro --volume /etc/iscsi:/etc/iscsi:ro --volume /etc/nvme:/etc/nvme --volume /var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro --volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:7055e8d7b7d72ce697c6077be14c525c019d186002f04765b90a14c82e01cc7c kolla_start
Oct  1 09:29:33 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:29:33 np0005464214 python3.9[258613]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  1 09:29:34 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v748: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:29:35 np0005464214 python3.9[258767]: ansible-file Invoked with path=/etc/systemd/system/edpm_nova_compute.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 09:29:36 np0005464214 python3.9[258918]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759325375.1115212-1781-69394282059465/source dest=/etc/systemd/system/edpm_nova_compute.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  1 09:29:36 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v749: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:29:36 np0005464214 python3.9[258994]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct  1 09:29:36 np0005464214 systemd[1]: Reloading.
Oct  1 09:29:36 np0005464214 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  1 09:29:36 np0005464214 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  1 09:29:38 np0005464214 python3.9[259105]: ansible-systemd Invoked with state=restarted name=edpm_nova_compute.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  1 09:29:38 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v750: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:29:38 np0005464214 systemd[1]: Reloading.
Oct  1 09:29:38 np0005464214 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  1 09:29:38 np0005464214 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  1 09:29:38 np0005464214 systemd[1]: Starting nova_compute container...
Oct  1 09:29:38 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:29:39 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:29:39 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/36b4df76f67f8995df9a9ad58528ce93ebfe6c9a621c75e85775ef898f8aedc7/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Oct  1 09:29:39 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/36b4df76f67f8995df9a9ad58528ce93ebfe6c9a621c75e85775ef898f8aedc7/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Oct  1 09:29:39 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/36b4df76f67f8995df9a9ad58528ce93ebfe6c9a621c75e85775ef898f8aedc7/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Oct  1 09:29:39 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/36b4df76f67f8995df9a9ad58528ce93ebfe6c9a621c75e85775ef898f8aedc7/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Oct  1 09:29:39 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/36b4df76f67f8995df9a9ad58528ce93ebfe6c9a621c75e85775ef898f8aedc7/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Oct  1 09:29:39 np0005464214 podman[259147]: 2025-10-01 13:29:39.687340665 +0000 UTC m=+0.926128991 container init 39f99f4d0c4445939eb63cc846504c336d153a902938b54ccbf0294d161b2788 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:7055e8d7b7d72ce697c6077be14c525c019d186002f04765b90a14c82e01cc7c, name=nova_compute, container_name=nova_compute, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20250923, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:7055e8d7b7d72ce697c6077be14c525c019d186002f04765b90a14c82e01cc7c', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, config_id=edpm, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Oct  1 09:29:39 np0005464214 podman[259147]: 2025-10-01 13:29:39.699353767 +0000 UTC m=+0.938142063 container start 39f99f4d0c4445939eb63cc846504c336d153a902938b54ccbf0294d161b2788 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:7055e8d7b7d72ce697c6077be14c525c019d186002f04765b90a14c82e01cc7c, name=nova_compute, maintainer=OpenStack Kubernetes Operator team, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:7055e8d7b7d72ce697c6077be14c525c019d186002f04765b90a14c82e01cc7c', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=edpm, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20250923, tcib_managed=true, container_name=nova_compute)
Oct  1 09:29:39 np0005464214 nova_compute[259163]: + sudo -E kolla_set_configs
Oct  1 09:29:39 np0005464214 nova_compute[259163]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Oct  1 09:29:39 np0005464214 nova_compute[259163]: INFO:__main__:Validating config file
Oct  1 09:29:39 np0005464214 nova_compute[259163]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Oct  1 09:29:39 np0005464214 nova_compute[259163]: INFO:__main__:Copying service configuration files
Oct  1 09:29:39 np0005464214 nova_compute[259163]: INFO:__main__:Deleting /etc/nova/nova.conf
Oct  1 09:29:39 np0005464214 nova_compute[259163]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Oct  1 09:29:39 np0005464214 nova_compute[259163]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Oct  1 09:29:39 np0005464214 nova_compute[259163]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Oct  1 09:29:39 np0005464214 nova_compute[259163]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Oct  1 09:29:39 np0005464214 nova_compute[259163]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf
Oct  1 09:29:39 np0005464214 nova_compute[259163]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf
Oct  1 09:29:39 np0005464214 nova_compute[259163]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Oct  1 09:29:39 np0005464214 nova_compute[259163]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Oct  1 09:29:39 np0005464214 nova_compute[259163]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Oct  1 09:29:39 np0005464214 nova_compute[259163]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Oct  1 09:29:39 np0005464214 nova_compute[259163]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Oct  1 09:29:39 np0005464214 nova_compute[259163]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Oct  1 09:29:39 np0005464214 nova_compute[259163]: INFO:__main__:Deleting /etc/ceph
Oct  1 09:29:39 np0005464214 nova_compute[259163]: INFO:__main__:Creating directory /etc/ceph
Oct  1 09:29:39 np0005464214 nova_compute[259163]: INFO:__main__:Setting permission for /etc/ceph
Oct  1 09:29:39 np0005464214 nova_compute[259163]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf
Oct  1 09:29:39 np0005464214 nova_compute[259163]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Oct  1 09:29:39 np0005464214 nova_compute[259163]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring
Oct  1 09:29:39 np0005464214 nova_compute[259163]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Oct  1 09:29:39 np0005464214 nova_compute[259163]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Oct  1 09:29:39 np0005464214 nova_compute[259163]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Oct  1 09:29:39 np0005464214 nova_compute[259163]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Oct  1 09:29:39 np0005464214 nova_compute[259163]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Oct  1 09:29:39 np0005464214 nova_compute[259163]: INFO:__main__:Writing out command to execute
Oct  1 09:29:39 np0005464214 nova_compute[259163]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Oct  1 09:29:39 np0005464214 nova_compute[259163]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Oct  1 09:29:39 np0005464214 nova_compute[259163]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Oct  1 09:29:39 np0005464214 nova_compute[259163]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Oct  1 09:29:39 np0005464214 nova_compute[259163]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Oct  1 09:29:39 np0005464214 nova_compute[259163]: ++ cat /run_command
Oct  1 09:29:39 np0005464214 nova_compute[259163]: + CMD=nova-compute
Oct  1 09:29:39 np0005464214 nova_compute[259163]: + ARGS=
Oct  1 09:29:39 np0005464214 nova_compute[259163]: + sudo kolla_copy_cacerts
Oct  1 09:29:39 np0005464214 nova_compute[259163]: + [[ ! -n '' ]]
Oct  1 09:29:39 np0005464214 nova_compute[259163]: + . kolla_extend_start
Oct  1 09:29:39 np0005464214 nova_compute[259163]: Running command: 'nova-compute'
Oct  1 09:29:39 np0005464214 nova_compute[259163]: + echo 'Running command: '\''nova-compute'\'''
Oct  1 09:29:39 np0005464214 nova_compute[259163]: + umask 0022
Oct  1 09:29:39 np0005464214 nova_compute[259163]: + exec nova-compute
Oct  1 09:29:40 np0005464214 podman[259147]: nova_compute
Oct  1 09:29:40 np0005464214 systemd[1]: Started nova_compute container.
Oct  1 09:29:40 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v751: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:29:41 np0005464214 python3.9[259328]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner_healthcheck.service follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  1 09:29:41 np0005464214 python3.9[259478]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  1 09:29:42 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v752: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:29:42 np0005464214 python3.9[259628]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service.requires follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  1 09:29:43 np0005464214 python3.9[259783]: ansible-containers.podman.podman_container Invoked with name=nova_nvme_cleaner state=absent executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Oct  1 09:29:43 np0005464214 rsyslogd[1009]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct  1 09:29:43 np0005464214 rsyslogd[1009]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct  1 09:29:43 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:29:44 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v753: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:29:45 np0005464214 python3.9[259956]: ansible-ansible.builtin.systemd Invoked with name=edpm_nova_compute.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct  1 09:29:45 np0005464214 nova_compute[259163]: 2025-10-01 13:29:45.209 2 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Oct  1 09:29:45 np0005464214 nova_compute[259163]: 2025-10-01 13:29:45.211 2 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Oct  1 09:29:45 np0005464214 nova_compute[259163]: 2025-10-01 13:29:45.211 2 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Oct  1 09:29:45 np0005464214 nova_compute[259163]: 2025-10-01 13:29:45.211 2 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs#033[00m
Oct  1 09:29:45 np0005464214 nova_compute[259163]: 2025-10-01 13:29:45.420 2 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 09:29:45 np0005464214 nova_compute[259163]: 2025-10-01 13:29:45.456 2 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 0 in 0.036s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 09:29:46 np0005464214 systemd[1]: Stopping nova_compute container...
Oct  1 09:29:46 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v754: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:29:46 np0005464214 nova_compute[259163]: 2025-10-01 13:29:46.389 2 INFO nova.virt.driver [None req-16678575-ea8d-4c09-831b-1eb079adc354 - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'#033[00m
Oct  1 09:29:46 np0005464214 systemd[1]: libpod-39f99f4d0c4445939eb63cc846504c336d153a902938b54ccbf0294d161b2788.scope: Deactivated successfully.
Oct  1 09:29:46 np0005464214 systemd[1]: libpod-39f99f4d0c4445939eb63cc846504c336d153a902938b54ccbf0294d161b2788.scope: Consumed 3.237s CPU time.
Oct  1 09:29:46 np0005464214 podman[259964]: 2025-10-01 13:29:46.50118749 +0000 UTC m=+0.401774597 container died 39f99f4d0c4445939eb63cc846504c336d153a902938b54ccbf0294d161b2788 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:7055e8d7b7d72ce697c6077be14c525c019d186002f04765b90a14c82e01cc7c, name=nova_compute, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:7055e8d7b7d72ce697c6077be14c525c019d186002f04765b90a14c82e01cc7c', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=edpm, container_name=nova_compute, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250923, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct  1 09:29:47 np0005464214 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-39f99f4d0c4445939eb63cc846504c336d153a902938b54ccbf0294d161b2788-userdata-shm.mount: Deactivated successfully.
Oct  1 09:29:47 np0005464214 systemd[1]: var-lib-containers-storage-overlay-36b4df76f67f8995df9a9ad58528ce93ebfe6c9a621c75e85775ef898f8aedc7-merged.mount: Deactivated successfully.
Oct  1 09:29:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] Optimize plan auto_2025-10-01_13:29:47
Oct  1 09:29:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  1 09:29:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] do_upmap
Oct  1 09:29:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] pools ['backups', 'default.rgw.control', 'cephfs.cephfs.data', 'default.rgw.log', 'cephfs.cephfs.meta', 'volumes', '.mgr', 'default.rgw.meta', 'images', '.rgw.root', 'vms']
Oct  1 09:29:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] prepared 0/10 changes
Oct  1 09:29:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 09:29:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 09:29:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 09:29:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 09:29:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 09:29:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 09:29:47 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  1 09:29:47 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  1 09:29:47 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  1 09:29:47 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  1 09:29:47 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  1 09:29:47 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  1 09:29:47 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  1 09:29:47 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  1 09:29:47 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  1 09:29:47 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  1 09:29:48 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v755: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:29:49 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:29:50 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v756: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:29:50 np0005464214 podman[259964]: 2025-10-01 13:29:50.349152162 +0000 UTC m=+4.249739269 container cleanup 39f99f4d0c4445939eb63cc846504c336d153a902938b54ccbf0294d161b2788 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:7055e8d7b7d72ce697c6077be14c525c019d186002f04765b90a14c82e01cc7c, name=nova_compute, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=36bccb96575468ec919301205d8daa2c, managed_by=edpm_ansible, org.label-schema.build-date=20250923, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm, container_name=nova_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:7055e8d7b7d72ce697c6077be14c525c019d186002f04765b90a14c82e01cc7c', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']})
Oct  1 09:29:50 np0005464214 podman[259964]: nova_compute
Oct  1 09:29:50 np0005464214 podman[259994]: nova_compute
Oct  1 09:29:50 np0005464214 systemd[1]: edpm_nova_compute.service: Deactivated successfully.
Oct  1 09:29:50 np0005464214 systemd[1]: Stopped nova_compute container.
Oct  1 09:29:50 np0005464214 systemd[1]: Starting nova_compute container...
Oct  1 09:29:50 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:29:50 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/36b4df76f67f8995df9a9ad58528ce93ebfe6c9a621c75e85775ef898f8aedc7/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Oct  1 09:29:50 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/36b4df76f67f8995df9a9ad58528ce93ebfe6c9a621c75e85775ef898f8aedc7/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Oct  1 09:29:50 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/36b4df76f67f8995df9a9ad58528ce93ebfe6c9a621c75e85775ef898f8aedc7/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Oct  1 09:29:50 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/36b4df76f67f8995df9a9ad58528ce93ebfe6c9a621c75e85775ef898f8aedc7/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Oct  1 09:29:50 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/36b4df76f67f8995df9a9ad58528ce93ebfe6c9a621c75e85775ef898f8aedc7/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Oct  1 09:29:50 np0005464214 podman[260007]: 2025-10-01 13:29:50.567881756 +0000 UTC m=+0.110268426 container init 39f99f4d0c4445939eb63cc846504c336d153a902938b54ccbf0294d161b2788 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:7055e8d7b7d72ce697c6077be14c525c019d186002f04765b90a14c82e01cc7c, name=nova_compute, org.label-schema.build-date=20250923, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=36bccb96575468ec919301205d8daa2c, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:7055e8d7b7d72ce697c6077be14c525c019d186002f04765b90a14c82e01cc7c', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, config_id=edpm, container_name=nova_compute)
Oct  1 09:29:50 np0005464214 podman[260007]: 2025-10-01 13:29:50.578478484 +0000 UTC m=+0.120865104 container start 39f99f4d0c4445939eb63cc846504c336d153a902938b54ccbf0294d161b2788 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:7055e8d7b7d72ce697c6077be14c525c019d186002f04765b90a14c82e01cc7c, name=nova_compute, org.label-schema.build-date=20250923, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:7055e8d7b7d72ce697c6077be14c525c019d186002f04765b90a14c82e01cc7c', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, tcib_build_tag=36bccb96575468ec919301205d8daa2c, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=edpm, container_name=nova_compute)
Oct  1 09:29:50 np0005464214 podman[260007]: nova_compute
Oct  1 09:29:50 np0005464214 nova_compute[260022]: + sudo -E kolla_set_configs
Oct  1 09:29:50 np0005464214 systemd[1]: Started nova_compute container.
Oct  1 09:29:50 np0005464214 nova_compute[260022]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Oct  1 09:29:50 np0005464214 nova_compute[260022]: INFO:__main__:Validating config file
Oct  1 09:29:50 np0005464214 nova_compute[260022]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Oct  1 09:29:50 np0005464214 nova_compute[260022]: INFO:__main__:Copying service configuration files
Oct  1 09:29:50 np0005464214 nova_compute[260022]: INFO:__main__:Deleting /etc/nova/nova.conf
Oct  1 09:29:50 np0005464214 nova_compute[260022]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Oct  1 09:29:50 np0005464214 nova_compute[260022]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Oct  1 09:29:50 np0005464214 nova_compute[260022]: INFO:__main__:Deleting /etc/nova/nova.conf.d/01-nova.conf
Oct  1 09:29:50 np0005464214 nova_compute[260022]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Oct  1 09:29:50 np0005464214 nova_compute[260022]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Oct  1 09:29:50 np0005464214 nova_compute[260022]: INFO:__main__:Deleting /etc/nova/nova.conf.d/03-ceph-nova.conf
Oct  1 09:29:50 np0005464214 nova_compute[260022]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf
Oct  1 09:29:50 np0005464214 nova_compute[260022]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf
Oct  1 09:29:50 np0005464214 nova_compute[260022]: INFO:__main__:Deleting /etc/nova/nova.conf.d/25-nova-extra.conf
Oct  1 09:29:50 np0005464214 nova_compute[260022]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Oct  1 09:29:50 np0005464214 nova_compute[260022]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Oct  1 09:29:50 np0005464214 nova_compute[260022]: INFO:__main__:Deleting /etc/nova/nova.conf.d/nova-blank.conf
Oct  1 09:29:50 np0005464214 nova_compute[260022]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Oct  1 09:29:50 np0005464214 nova_compute[260022]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Oct  1 09:29:50 np0005464214 nova_compute[260022]: INFO:__main__:Deleting /etc/nova/nova.conf.d/02-nova-host-specific.conf
Oct  1 09:29:50 np0005464214 nova_compute[260022]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Oct  1 09:29:50 np0005464214 nova_compute[260022]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Oct  1 09:29:50 np0005464214 nova_compute[260022]: INFO:__main__:Deleting /etc/ceph
Oct  1 09:29:50 np0005464214 nova_compute[260022]: INFO:__main__:Creating directory /etc/ceph
Oct  1 09:29:50 np0005464214 nova_compute[260022]: INFO:__main__:Setting permission for /etc/ceph
Oct  1 09:29:50 np0005464214 nova_compute[260022]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf
Oct  1 09:29:50 np0005464214 nova_compute[260022]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Oct  1 09:29:50 np0005464214 nova_compute[260022]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring
Oct  1 09:29:50 np0005464214 nova_compute[260022]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Oct  1 09:29:50 np0005464214 nova_compute[260022]: INFO:__main__:Deleting /var/lib/nova/.ssh/ssh-privatekey
Oct  1 09:29:50 np0005464214 nova_compute[260022]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Oct  1 09:29:50 np0005464214 nova_compute[260022]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Oct  1 09:29:50 np0005464214 nova_compute[260022]: INFO:__main__:Deleting /var/lib/nova/.ssh/config
Oct  1 09:29:50 np0005464214 nova_compute[260022]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Oct  1 09:29:50 np0005464214 nova_compute[260022]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Oct  1 09:29:50 np0005464214 nova_compute[260022]: INFO:__main__:Writing out command to execute
Oct  1 09:29:50 np0005464214 nova_compute[260022]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Oct  1 09:29:50 np0005464214 nova_compute[260022]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Oct  1 09:29:50 np0005464214 nova_compute[260022]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Oct  1 09:29:50 np0005464214 nova_compute[260022]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Oct  1 09:29:50 np0005464214 nova_compute[260022]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Oct  1 09:29:50 np0005464214 nova_compute[260022]: ++ cat /run_command
Oct  1 09:29:50 np0005464214 nova_compute[260022]: + CMD=nova-compute
Oct  1 09:29:50 np0005464214 nova_compute[260022]: + ARGS=
Oct  1 09:29:50 np0005464214 nova_compute[260022]: + sudo kolla_copy_cacerts
Oct  1 09:29:50 np0005464214 nova_compute[260022]: + [[ ! -n '' ]]
Oct  1 09:29:50 np0005464214 nova_compute[260022]: + . kolla_extend_start
Oct  1 09:29:50 np0005464214 nova_compute[260022]: + echo 'Running command: '\''nova-compute'\'''
Oct  1 09:29:50 np0005464214 nova_compute[260022]: Running command: 'nova-compute'
Oct  1 09:29:50 np0005464214 nova_compute[260022]: + umask 0022
Oct  1 09:29:50 np0005464214 nova_compute[260022]: + exec nova-compute
Oct  1 09:29:51 np0005464214 python3.9[260185]: ansible-containers.podman.podman_container Invoked with name=nova_compute_init state=started executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Oct  1 09:29:51 np0005464214 systemd[1]: Started libpod-conmon-ec6d17001d362aeee60bc4936a9335b1b7d34d04625ebe205bb2a61bd337eb01.scope.
Oct  1 09:29:51 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:29:51 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/277db7eddae632687a9c52183566b5484c629e1694122d0786ad06480d633b2d/merged/usr/sbin/nova_statedir_ownership.py supports timestamps until 2038 (0x7fffffff)
Oct  1 09:29:51 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/277db7eddae632687a9c52183566b5484c629e1694122d0786ad06480d633b2d/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Oct  1 09:29:51 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/277db7eddae632687a9c52183566b5484c629e1694122d0786ad06480d633b2d/merged/var/lib/_nova_secontext supports timestamps until 2038 (0x7fffffff)
Oct  1 09:29:51 np0005464214 podman[260210]: 2025-10-01 13:29:51.788675807 +0000 UTC m=+0.130117148 container init ec6d17001d362aeee60bc4936a9335b1b7d34d04625ebe205bb2a61bd337eb01 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:7055e8d7b7d72ce697c6077be14c525c019d186002f04765b90a14c82e01cc7c, name=nova_compute_init, tcib_managed=true, container_name=nova_compute_init, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=36bccb96575468ec919301205d8daa2c, managed_by=edpm_ansible, org.label-schema.build-date=20250923, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:7055e8d7b7d72ce697c6077be14c525c019d186002f04765b90a14c82e01cc7c', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct  1 09:29:51 np0005464214 podman[260210]: 2025-10-01 13:29:51.801620839 +0000 UTC m=+0.143062150 container start ec6d17001d362aeee60bc4936a9335b1b7d34d04625ebe205bb2a61bd337eb01 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:7055e8d7b7d72ce697c6077be14c525c019d186002f04765b90a14c82e01cc7c, name=nova_compute_init, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=36bccb96575468ec919301205d8daa2c, managed_by=edpm_ansible, org.label-schema.build-date=20250923, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:7055e8d7b7d72ce697c6077be14c525c019d186002f04765b90a14c82e01cc7c', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=nova_compute_init)
Oct  1 09:29:51 np0005464214 python3.9[260185]: ansible-containers.podman.podman_container PODMAN-CONTAINER-DEBUG: podman start nova_compute_init
Oct  1 09:29:51 np0005464214 nova_compute_init[260232]: INFO:nova_statedir:Applying nova statedir ownership
Oct  1 09:29:51 np0005464214 nova_compute_init[260232]: INFO:nova_statedir:Target ownership for /var/lib/nova: 42436:42436
Oct  1 09:29:51 np0005464214 nova_compute_init[260232]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/
Oct  1 09:29:51 np0005464214 nova_compute_init[260232]: INFO:nova_statedir:Changing ownership of /var/lib/nova from 1000:1000 to 42436:42436
Oct  1 09:29:51 np0005464214 nova_compute_init[260232]: INFO:nova_statedir:Setting selinux context of /var/lib/nova to system_u:object_r:container_file_t:s0
Oct  1 09:29:51 np0005464214 nova_compute_init[260232]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/instances/
Oct  1 09:29:51 np0005464214 nova_compute_init[260232]: INFO:nova_statedir:Changing ownership of /var/lib/nova/instances from 1000:1000 to 42436:42436
Oct  1 09:29:51 np0005464214 nova_compute_init[260232]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/instances to system_u:object_r:container_file_t:s0
Oct  1 09:29:51 np0005464214 nova_compute_init[260232]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/
Oct  1 09:29:51 np0005464214 nova_compute_init[260232]: INFO:nova_statedir:Ownership of /var/lib/nova/.ssh already 42436:42436
Oct  1 09:29:51 np0005464214 nova_compute_init[260232]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/.ssh to system_u:object_r:container_file_t:s0
Oct  1 09:29:51 np0005464214 nova_compute_init[260232]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/ssh-privatekey
Oct  1 09:29:51 np0005464214 nova_compute_init[260232]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/config
Oct  1 09:29:51 np0005464214 nova_compute_init[260232]: INFO:nova_statedir:Nova statedir ownership complete
Oct  1 09:29:51 np0005464214 systemd[1]: libpod-ec6d17001d362aeee60bc4936a9335b1b7d34d04625ebe205bb2a61bd337eb01.scope: Deactivated successfully.
Oct  1 09:29:51 np0005464214 podman[260233]: 2025-10-01 13:29:51.888280545 +0000 UTC m=+0.047216942 container died ec6d17001d362aeee60bc4936a9335b1b7d34d04625ebe205bb2a61bd337eb01 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:7055e8d7b7d72ce697c6077be14c525c019d186002f04765b90a14c82e01cc7c, name=nova_compute_init, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:7055e8d7b7d72ce697c6077be14c525c019d186002f04765b90a14c82e01cc7c', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, tcib_build_tag=36bccb96575468ec919301205d8daa2c, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=edpm, container_name=nova_compute_init, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250923, tcib_managed=true, managed_by=edpm_ansible)
Oct  1 09:29:51 np0005464214 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ec6d17001d362aeee60bc4936a9335b1b7d34d04625ebe205bb2a61bd337eb01-userdata-shm.mount: Deactivated successfully.
Oct  1 09:29:51 np0005464214 systemd[1]: var-lib-containers-storage-overlay-277db7eddae632687a9c52183566b5484c629e1694122d0786ad06480d633b2d-merged.mount: Deactivated successfully.
Oct  1 09:29:51 np0005464214 podman[260244]: 2025-10-01 13:29:51.944055548 +0000 UTC m=+0.063341914 container cleanup ec6d17001d362aeee60bc4936a9335b1b7d34d04625ebe205bb2a61bd337eb01 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:7055e8d7b7d72ce697c6077be14c525c019d186002f04765b90a14c82e01cc7c, name=nova_compute_init, tcib_build_tag=36bccb96575468ec919301205d8daa2c, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=edpm, container_name=nova_compute_init, org.label-schema.build-date=20250923, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:7055e8d7b7d72ce697c6077be14c525c019d186002f04765b90a14c82e01cc7c', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Oct  1 09:29:51 np0005464214 systemd[1]: libpod-conmon-ec6d17001d362aeee60bc4936a9335b1b7d34d04625ebe205bb2a61bd337eb01.scope: Deactivated successfully.
Oct  1 09:29:52 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v757: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:29:52 np0005464214 systemd[1]: session-50.scope: Deactivated successfully.
Oct  1 09:29:52 np0005464214 systemd[1]: session-50.scope: Consumed 3min 18.255s CPU time.
Oct  1 09:29:52 np0005464214 systemd-logind[818]: Session 50 logged out. Waiting for processes to exit.
Oct  1 09:29:52 np0005464214 systemd-logind[818]: Removed session 50.
Oct  1 09:29:53 np0005464214 nova_compute[260022]: 2025-10-01 13:29:53.096 2 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Oct  1 09:29:53 np0005464214 nova_compute[260022]: 2025-10-01 13:29:53.096 2 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Oct  1 09:29:53 np0005464214 nova_compute[260022]: 2025-10-01 13:29:53.097 2 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Oct  1 09:29:53 np0005464214 nova_compute[260022]: 2025-10-01 13:29:53.097 2 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs#033[00m
Oct  1 09:29:53 np0005464214 nova_compute[260022]: 2025-10-01 13:29:53.317 2 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 09:29:53 np0005464214 nova_compute[260022]: 2025-10-01 13:29:53.335 2 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 0 in 0.017s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 09:29:53 np0005464214 nova_compute[260022]: 2025-10-01 13:29:53.863 2 INFO nova.virt.driver [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.010 2 INFO nova.compute.provider_config [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.129 2 DEBUG oslo_concurrency.lockutils [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.130 2 DEBUG oslo_concurrency.lockutils [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.130 2 DEBUG oslo_concurrency.lockutils [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.131 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.131 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.131 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.131 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.132 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.132 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.132 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.132 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.133 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.133 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.133 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.133 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.133 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.134 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.134 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.134 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.134 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.134 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.135 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.135 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] console_host                   = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.135 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.135 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.135 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.136 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.136 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.136 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.136 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.137 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.137 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.137 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.137 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.137 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.138 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.138 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.138 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.138 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.139 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.139 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.139 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.139 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.139 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.140 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.140 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.140 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.140 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.141 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.141 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.141 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.141 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.141 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.142 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.142 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.142 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.142 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.143 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.143 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.143 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.143 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.143 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.144 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.144 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.144 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.144 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.144 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.145 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.145 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.145 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.145 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.145 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.146 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.146 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.146 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.146 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.146 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.147 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.147 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.147 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.147 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.147 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.148 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.148 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] my_block_storage_ip            = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.148 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] my_ip                          = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.148 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.148 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.149 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.149 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.149 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.149 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.149 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.150 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.150 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.150 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.150 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.150 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.151 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.151 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.151 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.151 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.151 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.151 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.152 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.152 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.152 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.152 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.153 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.153 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.153 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.153 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.153 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.153 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.153 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.154 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.154 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.154 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.154 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.154 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.154 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.155 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.155 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.155 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.155 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.155 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.155 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.155 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.156 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.156 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.156 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.156 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.156 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.156 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.156 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.157 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.157 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.157 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.157 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.157 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.157 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.157 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.158 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.158 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.158 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.158 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.158 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.158 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.158 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.159 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.159 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.159 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.159 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.159 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.159 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.160 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.160 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.160 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.160 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.160 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.161 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.161 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.161 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.161 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.161 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.162 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.162 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.162 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.162 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.162 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.163 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.163 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.163 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.163 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.164 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.164 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.164 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.164 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.164 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.164 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.164 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.165 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.165 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.165 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.165 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.165 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.165 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.166 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.166 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.166 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.166 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.166 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.167 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.167 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.167 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.167 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.167 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.168 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.168 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.168 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.168 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.168 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.168 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.168 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.169 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.169 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.169 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.169 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.169 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.169 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.169 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.170 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.170 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.170 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.170 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.170 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.170 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.171 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.171 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.171 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.171 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.171 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.171 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.171 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.172 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.172 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.172 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.172 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.172 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.172 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.172 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.173 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.173 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.173 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.173 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.173 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.173 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.173 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.174 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.174 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.174 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.174 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.174 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.174 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.175 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.175 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.175 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.175 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.175 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.175 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.175 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.176 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.176 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.176 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.176 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.176 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.176 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.177 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.177 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.177 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.177 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.177 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.177 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.177 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.178 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.178 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.178 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.178 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.178 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.178 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.179 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.179 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.179 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.179 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.179 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.179 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.179 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.180 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.180 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.180 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.180 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.180 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.180 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.181 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.181 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.181 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.181 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.181 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.181 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.181 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.182 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.182 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.182 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.182 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.182 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.182 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.182 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.183 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.183 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.183 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.183 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.183 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.183 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.184 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.184 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.184 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.184 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.184 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.184 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.185 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.185 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.185 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.185 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.185 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.185 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.185 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.186 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.186 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.186 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.186 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.186 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.186 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.187 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.187 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.187 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.187 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.187 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.187 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.187 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.188 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.188 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.188 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.188 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.188 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.188 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.189 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.189 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.189 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.189 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.189 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.189 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.189 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.190 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.190 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.190 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.190 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.190 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.190 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.191 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.191 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.191 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.191 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.191 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.191 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.192 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.192 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.192 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.192 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.192 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.193 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.193 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.193 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.193 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.193 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.193 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.194 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.194 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.194 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.194 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.194 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.194 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.195 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.195 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.195 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.195 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.195 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.195 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.195 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.196 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.196 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.196 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.196 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.196 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.196 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.196 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.197 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.197 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.197 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.197 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.197 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.197 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.198 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.199 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.199 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.199 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.199 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.200 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.200 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.200 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.200 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.200 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.201 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.201 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.201 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.201 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.201 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.202 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.202 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.202 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.202 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.203 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.203 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.203 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.203 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.203 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.204 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.204 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.204 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.204 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.204 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.205 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.205 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.205 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.205 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.206 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.206 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.206 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.206 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.206 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.206 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.207 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.207 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.207 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.207 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.207 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.207 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.208 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.208 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.208 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.208 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.208 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.208 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.209 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.209 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.209 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.209 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.209 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.209 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.210 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.210 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.210 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.210 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.210 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.211 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.211 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.211 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.211 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] libvirt.cpu_mode               = host-model log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.212 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.212 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] libvirt.cpu_models             = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.212 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.212 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.212 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.213 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.213 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.213 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.213 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.213 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.214 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.214 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.214 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.214 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.214 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.215 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] libvirt.images_rbd_ceph_conf   = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.215 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.215 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.215 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.215 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] libvirt.images_rbd_pool        = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.216 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] libvirt.images_type            = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.216 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.216 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.216 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.216 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.216 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.217 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.217 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.217 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.217 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.217 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.218 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.218 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.218 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.218 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.218 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.218 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.219 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.219 2 WARNING oslo_config.cfg [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Oct  1 09:29:54 np0005464214 nova_compute[260022]: live_migration_uri is deprecated for removal in favor of two other options that
Oct  1 09:29:54 np0005464214 nova_compute[260022]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Oct  1 09:29:54 np0005464214 nova_compute[260022]: and ``live_migration_inbound_addr`` respectively.
Oct  1 09:29:54 np0005464214 nova_compute[260022]: ).  Its value may be silently ignored in the future.#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.219 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.220 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.220 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.220 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.220 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.220 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.221 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.221 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.221 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.221 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.221 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.222 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.222 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.222 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.222 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.222 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.223 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.223 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.223 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] libvirt.rbd_secret_uuid        = eb4b6ead-01d1-53b3-a52a-47dcc600555f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.223 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] libvirt.rbd_user               = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.224 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.224 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.224 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.224 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.224 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.225 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.225 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.225 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.225 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.225 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.225 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.225 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.226 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.226 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.226 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.226 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.226 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.227 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.227 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.227 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.227 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.227 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.227 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.228 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.228 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.228 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.228 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.228 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.228 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.228 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.229 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.229 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.229 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.229 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.229 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.229 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.230 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.230 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.230 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.230 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.230 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.230 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.230 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.231 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.231 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.231 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.231 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.231 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.231 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.231 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.232 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.232 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.232 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.232 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.232 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.232 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.232 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.233 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.233 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.233 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.233 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.233 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.233 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.234 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.234 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.234 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.234 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.234 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.234 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.234 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.235 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.235 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.235 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.235 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.235 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.235 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.236 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.236 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.236 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.236 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.236 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.236 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.237 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.237 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.237 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.237 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.237 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.237 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.237 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.238 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.238 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.238 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.238 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.238 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.238 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.239 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.239 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.239 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.239 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.239 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.239 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.239 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.240 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.240 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.240 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.240 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.240 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.240 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.240 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.241 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.241 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.241 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.241 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.241 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.242 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.242 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.242 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.242 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.242 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.242 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.243 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.243 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.243 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.243 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.243 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.243 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.244 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.244 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.244 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.244 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.244 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.244 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.245 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.245 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.245 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.245 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.245 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.245 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.246 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.246 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.246 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.246 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.246 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.246 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.247 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.247 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.247 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.247 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.247 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.248 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.248 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.248 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.248 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.248 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.249 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.249 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.249 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.249 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.249 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.250 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.250 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.250 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.250 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.250 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.250 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.251 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.251 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.251 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.251 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.251 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.251 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.252 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.252 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.252 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.252 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.252 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.252 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.253 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.253 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.253 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.253 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.253 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.253 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.254 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.254 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.254 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.254 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.254 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.254 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.254 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.255 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.255 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.255 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.255 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.255 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.256 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.256 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.256 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.256 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.256 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.256 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.256 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.257 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.257 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.257 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.257 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.257 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.257 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.257 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.258 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.258 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.258 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.258 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.258 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.258 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.258 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.259 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.259 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.259 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.259 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.259 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.259 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.260 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.260 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.260 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.260 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.260 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.260 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.261 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.261 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.261 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.261 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.261 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] vnc.server_proxyclient_address = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.262 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.262 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.262 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.262 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.262 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.262 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.262 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.263 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.263 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.263 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.263 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.263 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.263 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.264 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.264 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.264 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.264 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.264 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.264 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.264 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.265 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.265 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.265 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.265 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.265 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.265 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.266 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.266 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.266 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.266 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.266 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.266 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.266 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.267 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.267 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.267 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.267 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.267 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.268 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.268 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.268 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.268 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.268 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.269 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.269 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.269 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.269 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.269 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.269 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.269 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.270 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.270 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.270 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.270 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.270 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.270 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.271 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.271 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.271 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.271 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.271 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.271 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.272 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.272 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.272 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.272 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.272 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.273 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.273 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.273 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.273 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.273 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.273 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.273 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.274 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.274 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.274 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.274 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.274 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.274 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.275 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.275 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.275 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.275 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.275 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.276 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.276 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.276 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.276 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.276 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.276 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.277 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.277 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.277 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.277 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.277 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.277 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.277 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.278 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.278 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.278 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.278 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.278 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.278 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.278 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.279 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.279 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.279 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.279 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.279 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.279 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.279 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.280 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.280 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.280 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.280 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.280 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.281 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.281 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.281 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.281 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v758: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.281 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.281 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.281 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.282 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.282 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.282 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.282 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.282 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.283 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.283 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.283 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.283 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.284 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.284 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.284 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.284 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.284 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.284 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.285 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.285 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.285 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.285 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.285 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.285 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.285 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.286 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.286 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.286 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.286 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.287 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.287 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.287 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.287 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.287 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.287 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.288 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.288 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.288 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.288 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.288 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.288 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.288 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.289 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.289 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.289 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.289 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.289 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.289 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.289 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.290 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.291 2 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.331 2 DEBUG nova.virt.libvirt.host [None req-26a6fe1d-c0dc-484f-b511-7a57f237dea9 - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.332 2 DEBUG nova.virt.libvirt.host [None req-26a6fe1d-c0dc-484f-b511-7a57f237dea9 - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.332 2 DEBUG nova.virt.libvirt.host [None req-26a6fe1d-c0dc-484f-b511-7a57f237dea9 - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.333 2 DEBUG nova.virt.libvirt.host [None req-26a6fe1d-c0dc-484f-b511-7a57f237dea9 - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503#033[00m
Oct  1 09:29:54 np0005464214 systemd[1]: Starting libvirt QEMU daemon...
Oct  1 09:29:54 np0005464214 systemd[1]: Started libvirt QEMU daemon.
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.443 2 DEBUG nova.virt.libvirt.host [None req-26a6fe1d-c0dc-484f-b511-7a57f237dea9 - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7f7a6a39a8b0> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.446 2 DEBUG nova.virt.libvirt.host [None req-26a6fe1d-c0dc-484f-b511-7a57f237dea9 - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7f7a6a39a8b0> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.447 2 INFO nova.virt.libvirt.driver [None req-26a6fe1d-c0dc-484f-b511-7a57f237dea9 - - - - - -] Connection event '1' reason 'None'#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.529 2 WARNING nova.virt.libvirt.driver [None req-26a6fe1d-c0dc-484f-b511-7a57f237dea9 - - - - - -] Cannot update service status on host "compute-0.ctlplane.example.com" since it is not registered.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.#033[00m
Oct  1 09:29:54 np0005464214 nova_compute[260022]: 2025-10-01 13:29:54.529 2 DEBUG nova.virt.libvirt.volume.mount [None req-26a6fe1d-c0dc-484f-b511-7a57f237dea9 - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130#033[00m
Oct  1 09:29:54 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:29:55 np0005464214 nova_compute[260022]: 2025-10-01 13:29:55.640 2 INFO nova.virt.libvirt.host [None req-26a6fe1d-c0dc-484f-b511-7a57f237dea9 - - - - - -] Libvirt host capabilities <capabilities>
Oct  1 09:29:55 np0005464214 nova_compute[260022]: 
Oct  1 09:29:55 np0005464214 nova_compute[260022]:  <host>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:    <uuid>adf090e1-fe93-4ff6-a8f5-4224f2f21059</uuid>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:    <cpu>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <arch>x86_64</arch>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model>EPYC-Rome-v4</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <vendor>AMD</vendor>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <microcode version='16777317'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <signature family='23' model='49' stepping='0'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <topology sockets='8' dies='1' clusters='1' cores='1' threads='1'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <maxphysaddr mode='emulate' bits='40'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <feature name='x2apic'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <feature name='tsc-deadline'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <feature name='osxsave'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <feature name='hypervisor'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <feature name='tsc_adjust'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <feature name='spec-ctrl'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <feature name='stibp'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <feature name='arch-capabilities'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <feature name='ssbd'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <feature name='cmp_legacy'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <feature name='topoext'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <feature name='virt-ssbd'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <feature name='lbrv'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <feature name='tsc-scale'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <feature name='vmcb-clean'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <feature name='pause-filter'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <feature name='pfthreshold'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <feature name='svme-addr-chk'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <feature name='rdctl-no'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <feature name='skip-l1dfl-vmentry'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <feature name='mds-no'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <feature name='pschange-mc-no'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <pages unit='KiB' size='4'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <pages unit='KiB' size='2048'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <pages unit='KiB' size='1048576'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:    </cpu>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:    <power_management>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <suspend_mem/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:    </power_management>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:    <iommu support='no'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:    <migration_features>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <live/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <uri_transports>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <uri_transport>tcp</uri_transport>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <uri_transport>rdma</uri_transport>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </uri_transports>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:    </migration_features>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:    <topology>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <cells num='1'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <cell id='0'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:          <memory unit='KiB'>7864104</memory>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:          <pages unit='KiB' size='4'>1966026</pages>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:          <pages unit='KiB' size='2048'>0</pages>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:          <pages unit='KiB' size='1048576'>0</pages>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:          <distances>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:            <sibling id='0' value='10'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:          </distances>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:          <cpus num='8'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:            <cpu id='0' socket_id='0' die_id='0' cluster_id='65535' core_id='0' siblings='0'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:            <cpu id='1' socket_id='1' die_id='1' cluster_id='65535' core_id='0' siblings='1'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:            <cpu id='2' socket_id='2' die_id='2' cluster_id='65535' core_id='0' siblings='2'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:            <cpu id='3' socket_id='3' die_id='3' cluster_id='65535' core_id='0' siblings='3'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:            <cpu id='4' socket_id='4' die_id='4' cluster_id='65535' core_id='0' siblings='4'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:            <cpu id='5' socket_id='5' die_id='5' cluster_id='65535' core_id='0' siblings='5'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:            <cpu id='6' socket_id='6' die_id='6' cluster_id='65535' core_id='0' siblings='6'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:            <cpu id='7' socket_id='7' die_id='7' cluster_id='65535' core_id='0' siblings='7'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:          </cpus>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        </cell>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </cells>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:    </topology>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:    <cache>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <bank id='0' level='2' type='both' size='512' unit='KiB' cpus='0'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <bank id='1' level='2' type='both' size='512' unit='KiB' cpus='1'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <bank id='2' level='2' type='both' size='512' unit='KiB' cpus='2'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <bank id='3' level='2' type='both' size='512' unit='KiB' cpus='3'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <bank id='4' level='2' type='both' size='512' unit='KiB' cpus='4'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <bank id='5' level='2' type='both' size='512' unit='KiB' cpus='5'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <bank id='6' level='2' type='both' size='512' unit='KiB' cpus='6'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <bank id='7' level='2' type='both' size='512' unit='KiB' cpus='7'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <bank id='0' level='3' type='both' size='16' unit='MiB' cpus='0'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <bank id='1' level='3' type='both' size='16' unit='MiB' cpus='1'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <bank id='2' level='3' type='both' size='16' unit='MiB' cpus='2'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <bank id='3' level='3' type='both' size='16' unit='MiB' cpus='3'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <bank id='4' level='3' type='both' size='16' unit='MiB' cpus='4'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <bank id='5' level='3' type='both' size='16' unit='MiB' cpus='5'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <bank id='6' level='3' type='both' size='16' unit='MiB' cpus='6'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <bank id='7' level='3' type='both' size='16' unit='MiB' cpus='7'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:    </cache>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:    <secmodel>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model>selinux</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <doi>0</doi>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:    </secmodel>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:    <secmodel>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model>dac</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <doi>0</doi>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <baselabel type='kvm'>+107:+107</baselabel>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <baselabel type='qemu'>+107:+107</baselabel>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:    </secmodel>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:  </host>
Oct  1 09:29:55 np0005464214 nova_compute[260022]: 
Oct  1 09:29:55 np0005464214 nova_compute[260022]:  <guest>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:    <os_type>hvm</os_type>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:    <arch name='i686'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <wordsize>32</wordsize>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <emulator>/usr/libexec/qemu-kvm</emulator>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <machine canonical='pc-q35-rhel9.6.0' maxCpus='4096'>q35</machine>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <domain type='qemu'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <domain type='kvm'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:    </arch>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:    <features>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <pae/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <nonpae/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <acpi default='on' toggle='yes'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <apic default='on' toggle='no'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <cpuselection/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <deviceboot/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <disksnapshot default='on' toggle='no'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <externalSnapshot/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:    </features>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:  </guest>
Oct  1 09:29:55 np0005464214 nova_compute[260022]: 
Oct  1 09:29:55 np0005464214 nova_compute[260022]:  <guest>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:    <os_type>hvm</os_type>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:    <arch name='x86_64'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <wordsize>64</wordsize>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <emulator>/usr/libexec/qemu-kvm</emulator>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <machine canonical='pc-q35-rhel9.6.0' maxCpus='4096'>q35</machine>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <domain type='qemu'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <domain type='kvm'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:    </arch>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:    <features>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <acpi default='on' toggle='yes'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <apic default='on' toggle='no'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <cpuselection/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <deviceboot/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <disksnapshot default='on' toggle='no'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <externalSnapshot/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:    </features>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:  </guest>
Oct  1 09:29:55 np0005464214 nova_compute[260022]: 
Oct  1 09:29:55 np0005464214 nova_compute[260022]: </capabilities>
Oct  1 09:29:55 np0005464214 nova_compute[260022]: #033[00m
Oct  1 09:29:55 np0005464214 nova_compute[260022]: 2025-10-01 13:29:55.647 2 DEBUG nova.virt.libvirt.host [None req-26a6fe1d-c0dc-484f-b511-7a57f237dea9 - - - - - -] Getting domain capabilities for i686 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952#033[00m
Oct  1 09:29:55 np0005464214 nova_compute[260022]: 2025-10-01 13:29:55.684 2 DEBUG nova.virt.libvirt.host [None req-26a6fe1d-c0dc-484f-b511-7a57f237dea9 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
Oct  1 09:29:55 np0005464214 nova_compute[260022]: <domainCapabilities>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:  <path>/usr/libexec/qemu-kvm</path>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:  <domain>kvm</domain>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:  <machine>pc-q35-rhel9.6.0</machine>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:  <arch>i686</arch>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:  <vcpu max='4096'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:  <iothreads supported='yes'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:  <os supported='yes'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:    <enum name='firmware'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:    <loader supported='yes'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <enum name='type'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <value>rom</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <value>pflash</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </enum>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <enum name='readonly'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <value>yes</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <value>no</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </enum>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <enum name='secure'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <value>no</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </enum>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:    </loader>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:  </os>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:  <cpu>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:    <mode name='host-passthrough' supported='yes'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <enum name='hostPassthroughMigratable'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <value>on</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <value>off</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </enum>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:    </mode>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:    <mode name='maximum' supported='yes'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <enum name='maximumMigratable'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <value>on</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <value>off</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </enum>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:    </mode>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:    <mode name='host-model' supported='yes'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model fallback='forbid'>EPYC-Rome</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <vendor>AMD</vendor>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <maxphysaddr mode='passthrough' limit='40'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <feature policy='require' name='x2apic'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <feature policy='require' name='tsc-deadline'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <feature policy='require' name='hypervisor'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <feature policy='require' name='tsc_adjust'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <feature policy='require' name='spec-ctrl'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <feature policy='require' name='stibp'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <feature policy='require' name='arch-capabilities'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <feature policy='require' name='ssbd'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <feature policy='require' name='cmp_legacy'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <feature policy='require' name='overflow-recov'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <feature policy='require' name='succor'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <feature policy='require' name='ibrs'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <feature policy='require' name='amd-ssbd'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <feature policy='require' name='virt-ssbd'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <feature policy='require' name='lbrv'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <feature policy='require' name='tsc-scale'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <feature policy='require' name='vmcb-clean'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <feature policy='require' name='flushbyasid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <feature policy='require' name='pause-filter'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <feature policy='require' name='pfthreshold'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <feature policy='require' name='svme-addr-chk'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <feature policy='require' name='lfence-always-serializing'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <feature policy='require' name='rdctl-no'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <feature policy='require' name='mds-no'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <feature policy='require' name='pschange-mc-no'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <feature policy='require' name='gds-no'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <feature policy='require' name='rfds-no'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <feature policy='disable' name='xsaves'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:    </mode>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:    <mode name='custom' supported='yes'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='Broadwell'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='erms'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='hle'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='invpcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='rtm'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='Broadwell-IBRS'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='erms'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='hle'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='invpcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='rtm'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='Broadwell-noTSX'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='erms'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='invpcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='Broadwell-noTSX-IBRS'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='erms'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='invpcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='Broadwell-v1'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='erms'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='hle'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='invpcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='rtm'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='Broadwell-v2'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='erms'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='invpcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='Broadwell-v3'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='erms'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='hle'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='invpcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='rtm'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='Broadwell-v4'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='erms'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='invpcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='Cascadelake-Server'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512bw'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512cd'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512dq'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512f'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vl'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vnni'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='erms'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='hle'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='invpcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pku'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='rtm'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='Cascadelake-Server-noTSX'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512bw'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512cd'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512dq'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512f'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vl'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vnni'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='erms'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='ibrs-all'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='invpcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pku'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='Cascadelake-Server-v1'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512bw'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512cd'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512dq'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512f'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vl'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vnni'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='erms'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='hle'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='invpcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pku'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='rtm'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='Cascadelake-Server-v2'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512bw'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512cd'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512dq'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512f'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vl'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vnni'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='erms'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='hle'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='ibrs-all'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='invpcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pku'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='rtm'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='Cascadelake-Server-v3'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512bw'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512cd'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512dq'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512f'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vl'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vnni'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='erms'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='ibrs-all'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='invpcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pku'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='Cascadelake-Server-v4'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512bw'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512cd'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512dq'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512f'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vl'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vnni'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='erms'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='ibrs-all'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='invpcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pku'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='Cascadelake-Server-v5'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512bw'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512cd'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512dq'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512f'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vl'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vnni'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='erms'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='ibrs-all'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='invpcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pku'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='xsaves'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='Cooperlake'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512-bf16'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512bw'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512cd'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512dq'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512f'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vl'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vnni'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='erms'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='hle'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='ibrs-all'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='invpcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pku'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='rtm'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='taa-no'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='Cooperlake-v1'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512-bf16'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512bw'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512cd'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512dq'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512f'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vl'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vnni'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='erms'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='hle'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='ibrs-all'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='invpcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pku'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='rtm'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='taa-no'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='Cooperlake-v2'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512-bf16'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512bw'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512cd'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512dq'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512f'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vl'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vnni'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='erms'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='hle'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='ibrs-all'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='invpcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pku'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='rtm'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='taa-no'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='xsaves'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='Denverton'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='erms'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='mpx'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='Denverton-v1'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='erms'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='mpx'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='Denverton-v2'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='erms'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='Denverton-v3'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='erms'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='xsaves'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='Dhyana-v2'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='xsaves'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='EPYC-Genoa'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='amd-psfd'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='auto-ibrs'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512-bf16'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512-vpopcntdq'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512bitalg'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512bw'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512cd'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512dq'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512f'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512ifma'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vbmi'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vbmi2'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vl'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vnni'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='erms'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='fsrm'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='gfni'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='invpcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='la57'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='no-nested-data-bp'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='null-sel-clr-base'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pku'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='stibp-always-on'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='vaes'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='vpclmulqdq'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='xsaves'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='EPYC-Genoa-v1'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='amd-psfd'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='auto-ibrs'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512-bf16'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512-vpopcntdq'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512bitalg'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512bw'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512cd'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512dq'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512f'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512ifma'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vbmi'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vbmi2'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vl'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vnni'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='erms'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='fsrm'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='gfni'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='invpcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='la57'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='no-nested-data-bp'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='null-sel-clr-base'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pku'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='stibp-always-on'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='vaes'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='vpclmulqdq'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='xsaves'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='EPYC-Milan'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='erms'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='fsrm'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='invpcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pku'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='xsaves'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='EPYC-Milan-v1'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='erms'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='fsrm'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='invpcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pku'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='xsaves'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='EPYC-Milan-v2'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='amd-psfd'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='erms'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='fsrm'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='invpcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='no-nested-data-bp'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='null-sel-clr-base'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pku'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='stibp-always-on'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='vaes'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='vpclmulqdq'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='xsaves'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='EPYC-Rome'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='xsaves'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='EPYC-Rome-v1'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='xsaves'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='EPYC-Rome-v2'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='xsaves'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='EPYC-Rome-v3'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='xsaves'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='EPYC-v3'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='xsaves'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='EPYC-v4'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='xsaves'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='GraniteRapids'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='amx-bf16'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='amx-fp16'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='amx-int8'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='amx-tile'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx-vnni'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512-bf16'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512-fp16'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512-vpopcntdq'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512bitalg'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512bw'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512cd'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512dq'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512f'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512ifma'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vbmi'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vbmi2'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vl'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vnni'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='bus-lock-detect'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='erms'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='fbsdp-no'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='fsrc'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='fsrm'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='fsrs'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='fzrm'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='gfni'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='hle'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='ibrs-all'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='invpcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='la57'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='mcdt-no'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pbrsb-no'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pku'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='prefetchiti'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='psdp-no'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='rtm'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='sbdr-ssdp-no'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='serialize'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='taa-no'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='tsx-ldtrk'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='vaes'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='vpclmulqdq'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='xfd'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='xsaves'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='GraniteRapids-v1'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='amx-bf16'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='amx-fp16'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='amx-int8'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='amx-tile'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx-vnni'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512-bf16'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512-fp16'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512-vpopcntdq'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512bitalg'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512bw'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512cd'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512dq'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512f'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512ifma'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vbmi'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vbmi2'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vl'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vnni'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='bus-lock-detect'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='erms'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='fbsdp-no'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='fsrc'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='fsrm'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='fsrs'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='fzrm'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='gfni'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='hle'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='ibrs-all'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='invpcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='la57'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='mcdt-no'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pbrsb-no'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pku'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='prefetchiti'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='psdp-no'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='rtm'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='sbdr-ssdp-no'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='serialize'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='taa-no'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='tsx-ldtrk'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='vaes'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='vpclmulqdq'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='xfd'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='xsaves'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='GraniteRapids-v2'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='amx-bf16'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='amx-fp16'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='amx-int8'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='amx-tile'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx-vnni'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx10'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx10-128'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx10-256'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx10-512'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512-bf16'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512-fp16'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512-vpopcntdq'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512bitalg'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512bw'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512cd'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512dq'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512f'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512ifma'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vbmi'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vbmi2'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vl'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vnni'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='bus-lock-detect'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='cldemote'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='erms'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='fbsdp-no'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='fsrc'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='fsrm'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='fsrs'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='fzrm'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='gfni'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='hle'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='ibrs-all'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='invpcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='la57'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='mcdt-no'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='movdir64b'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='movdiri'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pbrsb-no'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pku'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='prefetchiti'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='psdp-no'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='rtm'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='sbdr-ssdp-no'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='serialize'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='ss'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='taa-no'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='tsx-ldtrk'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='vaes'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='vpclmulqdq'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='xfd'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='xsaves'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='Haswell'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='erms'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='hle'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='invpcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='rtm'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='Haswell-IBRS'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='erms'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='hle'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='invpcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='rtm'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='Haswell-noTSX'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='erms'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='invpcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='Haswell-noTSX-IBRS'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='erms'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='invpcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='Haswell-v1'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='erms'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='hle'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='invpcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='rtm'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='Haswell-v2'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='erms'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='invpcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='Haswell-v3'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='erms'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='hle'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='invpcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='rtm'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='Haswell-v4'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='erms'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='invpcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='Icelake-Server'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512-vpopcntdq'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512bitalg'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512bw'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512cd'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512dq'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512f'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vbmi'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vbmi2'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vl'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vnni'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='erms'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='gfni'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='hle'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='invpcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='la57'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pku'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='rtm'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='vaes'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='vpclmulqdq'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='Icelake-Server-noTSX'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512-vpopcntdq'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512bitalg'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512bw'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512cd'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512dq'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512f'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vbmi'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vbmi2'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vl'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vnni'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='erms'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='gfni'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='invpcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='la57'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pku'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='vaes'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='vpclmulqdq'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='Icelake-Server-v1'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512-vpopcntdq'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512bitalg'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512bw'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512cd'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512dq'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512f'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vbmi'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vbmi2'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vl'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vnni'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='erms'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='gfni'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='hle'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='invpcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='la57'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pku'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='rtm'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='vaes'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='vpclmulqdq'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='Icelake-Server-v2'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512-vpopcntdq'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512bitalg'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512bw'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512cd'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512dq'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512f'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vbmi'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vbmi2'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vl'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vnni'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='erms'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='gfni'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='invpcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='la57'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pku'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='vaes'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='vpclmulqdq'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='Icelake-Server-v3'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512-vpopcntdq'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512bitalg'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512bw'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512cd'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512dq'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512f'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vbmi'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vbmi2'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vl'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vnni'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='erms'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='gfni'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='ibrs-all'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='invpcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='la57'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pku'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='taa-no'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='vaes'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='vpclmulqdq'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='Icelake-Server-v4'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512-vpopcntdq'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512bitalg'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512bw'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512cd'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512dq'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512f'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512ifma'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vbmi'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vbmi2'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vl'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vnni'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='erms'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='fsrm'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='gfni'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='ibrs-all'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='invpcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='la57'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pku'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='taa-no'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='vaes'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='vpclmulqdq'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='Icelake-Server-v5'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512-vpopcntdq'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512bitalg'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512bw'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512cd'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512dq'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512f'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512ifma'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vbmi'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vbmi2'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vl'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vnni'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='erms'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='fsrm'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='gfni'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='ibrs-all'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='invpcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='la57'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pku'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='taa-no'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='vaes'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='vpclmulqdq'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='xsaves'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='Icelake-Server-v6'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512-vpopcntdq'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512bitalg'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512bw'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512cd'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512dq'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512f'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512ifma'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vbmi'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vbmi2'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vl'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vnni'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='erms'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='fsrm'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='gfni'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='ibrs-all'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='invpcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='la57'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pku'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='taa-no'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='vaes'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='vpclmulqdq'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='xsaves'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='Icelake-Server-v7'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512-vpopcntdq'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512bitalg'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512bw'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512cd'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512dq'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512f'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512ifma'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vbmi'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vbmi2'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vl'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vnni'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='erms'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='fsrm'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='gfni'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='hle'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='ibrs-all'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='invpcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='la57'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pku'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='rtm'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='taa-no'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='vaes'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='vpclmulqdq'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='xsaves'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='IvyBridge'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='erms'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='IvyBridge-IBRS'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='erms'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='IvyBridge-v1'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='erms'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='IvyBridge-v2'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='erms'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='KnightsMill'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512-4fmaps'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512-4vnniw'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512-vpopcntdq'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512cd'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512er'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512f'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512pf'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='erms'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='ss'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='KnightsMill-v1'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512-4fmaps'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512-4vnniw'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512-vpopcntdq'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512cd'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512er'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512f'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512pf'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='erms'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='ss'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='Opteron_G4'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='fma4'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='xop'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='Opteron_G4-v1'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='fma4'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='xop'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='Opteron_G5'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='fma4'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='tbm'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='xop'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='Opteron_G5-v1'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='fma4'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='tbm'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='xop'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='SapphireRapids'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='amx-bf16'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='amx-int8'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='amx-tile'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx-vnni'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512-bf16'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512-fp16'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512-vpopcntdq'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512bitalg'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512bw'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512cd'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512dq'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512f'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512ifma'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vbmi'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vbmi2'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vl'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vnni'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='bus-lock-detect'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='erms'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='fsrc'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='fsrm'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='fsrs'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='fzrm'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='gfni'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='hle'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='ibrs-all'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='invpcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='la57'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pku'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='rtm'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='serialize'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='taa-no'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='tsx-ldtrk'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='vaes'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='vpclmulqdq'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='xfd'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='xsaves'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='SapphireRapids-v1'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='amx-bf16'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='amx-int8'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='amx-tile'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx-vnni'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512-bf16'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512-fp16'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512-vpopcntdq'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512bitalg'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512bw'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512cd'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512dq'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512f'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512ifma'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vbmi'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vbmi2'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vl'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vnni'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='bus-lock-detect'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='erms'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='fsrc'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='fsrm'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='fsrs'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='fzrm'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='gfni'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='hle'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='ibrs-all'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='invpcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='la57'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pku'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='rtm'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='serialize'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='taa-no'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='tsx-ldtrk'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='vaes'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='vpclmulqdq'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='xfd'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='xsaves'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='SapphireRapids-v2'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='amx-bf16'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='amx-int8'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='amx-tile'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx-vnni'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512-bf16'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512-fp16'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512-vpopcntdq'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512bitalg'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512bw'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512cd'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512dq'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512f'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512ifma'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vbmi'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vbmi2'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vl'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vnni'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='bus-lock-detect'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='erms'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='fbsdp-no'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='fsrc'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='fsrm'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='fsrs'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='fzrm'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='gfni'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='hle'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='ibrs-all'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='invpcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='la57'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pku'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='psdp-no'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='rtm'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='sbdr-ssdp-no'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='serialize'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='taa-no'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='tsx-ldtrk'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='vaes'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='vpclmulqdq'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='xfd'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='xsaves'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='SapphireRapids-v3'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='amx-bf16'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='amx-int8'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='amx-tile'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx-vnni'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512-bf16'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512-fp16'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512-vpopcntdq'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512bitalg'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512bw'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512cd'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512dq'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512f'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512ifma'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vbmi'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vbmi2'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vl'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vnni'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='bus-lock-detect'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='cldemote'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='erms'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='fbsdp-no'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='fsrc'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='fsrm'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='fsrs'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='fzrm'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='gfni'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='hle'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='ibrs-all'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='invpcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='la57'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='movdir64b'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='movdiri'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pku'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='psdp-no'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='rtm'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='sbdr-ssdp-no'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='serialize'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='ss'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='taa-no'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='tsx-ldtrk'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='vaes'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='vpclmulqdq'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='xfd'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='xsaves'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='SierraForest'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx-ifma'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx-ne-convert'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx-vnni'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx-vnni-int8'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='bus-lock-detect'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='cmpccxadd'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='erms'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='fbsdp-no'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='fsrm'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='fsrs'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='gfni'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='ibrs-all'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='invpcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='mcdt-no'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pbrsb-no'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pku'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='psdp-no'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='sbdr-ssdp-no'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='serialize'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='vaes'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='vpclmulqdq'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='xsaves'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='SierraForest-v1'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx-ifma'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx-ne-convert'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx-vnni'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx-vnni-int8'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='bus-lock-detect'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='cmpccxadd'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='erms'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='fbsdp-no'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='fsrm'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='fsrs'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='gfni'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='ibrs-all'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='invpcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='mcdt-no'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pbrsb-no'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pku'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='psdp-no'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='sbdr-ssdp-no'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='serialize'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='vaes'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='vpclmulqdq'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='xsaves'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='Skylake-Client'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='erms'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='hle'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='invpcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='rtm'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='Skylake-Client-IBRS'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='erms'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='hle'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='invpcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='rtm'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='erms'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='invpcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='Skylake-Client-v1'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='erms'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='hle'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='invpcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='rtm'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='Skylake-Client-v2'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='erms'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='hle'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='invpcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='rtm'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='Skylake-Client-v3'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='erms'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='invpcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='Skylake-Client-v4'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='erms'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='invpcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='xsaves'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='Skylake-Server'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512bw'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512cd'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512dq'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512f'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vl'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='erms'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='hle'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='invpcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pku'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='rtm'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='Skylake-Server-IBRS'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512bw'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512cd'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512dq'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512f'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vl'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='erms'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='hle'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='invpcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pku'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='rtm'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512bw'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512cd'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512dq'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512f'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vl'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='erms'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='invpcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pku'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='Skylake-Server-v1'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512bw'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512cd'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512dq'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512f'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vl'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='erms'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='hle'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='invpcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pku'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='rtm'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='Skylake-Server-v2'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512bw'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512cd'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512dq'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512f'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vl'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='erms'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='hle'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='invpcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pku'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='rtm'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='Skylake-Server-v3'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512bw'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512cd'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512dq'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512f'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vl'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='erms'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='invpcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pku'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='Skylake-Server-v4'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512bw'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512cd'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512dq'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512f'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vl'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='erms'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='invpcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pku'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='Skylake-Server-v5'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512bw'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512cd'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512dq'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512f'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vl'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='erms'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='invpcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pku'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='xsaves'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='Snowridge'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='cldemote'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='core-capability'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='erms'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='gfni'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='movdir64b'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='movdiri'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='mpx'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='split-lock-detect'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='Snowridge-v1'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='cldemote'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='core-capability'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='erms'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='gfni'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='movdir64b'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='movdiri'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='mpx'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='split-lock-detect'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='Snowridge-v2'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='cldemote'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='core-capability'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='erms'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='gfni'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='movdir64b'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='movdiri'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='split-lock-detect'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='Snowridge-v3'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='cldemote'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='core-capability'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='erms'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='gfni'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='movdir64b'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='movdiri'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='split-lock-detect'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='xsaves'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='Snowridge-v4'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='cldemote'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='erms'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='gfni'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='movdir64b'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='movdiri'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='xsaves'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='athlon'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='3dnow'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='3dnowext'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='athlon-v1'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='3dnow'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='3dnowext'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='core2duo'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='ss'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='core2duo-v1'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='ss'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='coreduo'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='ss'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='coreduo-v1'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='ss'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='n270'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='ss'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='n270-v1'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='ss'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='phenom'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='3dnow'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='3dnowext'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='phenom-v1'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='3dnow'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='3dnowext'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:    </mode>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:  </cpu>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:  <memoryBacking supported='yes'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:    <enum name='sourceType'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <value>file</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <value>anonymous</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <value>memfd</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:    </enum>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:  </memoryBacking>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:  <devices>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:    <disk supported='yes'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <enum name='diskDevice'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <value>disk</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <value>cdrom</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <value>floppy</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <value>lun</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </enum>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <enum name='bus'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <value>fdc</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <value>scsi</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <value>virtio</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <value>usb</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <value>sata</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </enum>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <enum name='model'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <value>virtio</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <value>virtio-transitional</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <value>virtio-non-transitional</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </enum>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:    </disk>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:    <graphics supported='yes'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <enum name='type'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <value>vnc</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <value>egl-headless</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <value>dbus</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </enum>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:    </graphics>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:    <video supported='yes'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <enum name='modelType'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <value>vga</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <value>cirrus</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <value>virtio</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <value>none</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <value>bochs</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <value>ramfb</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </enum>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:    </video>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:    <hostdev supported='yes'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <enum name='mode'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <value>subsystem</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </enum>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <enum name='startupPolicy'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <value>default</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <value>mandatory</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <value>requisite</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <value>optional</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </enum>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <enum name='subsysType'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <value>usb</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <value>pci</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <value>scsi</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </enum>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <enum name='capsType'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <enum name='pciBackend'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:    </hostdev>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:    <rng supported='yes'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <enum name='model'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <value>virtio</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <value>virtio-transitional</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <value>virtio-non-transitional</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </enum>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <enum name='backendModel'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <value>random</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <value>egd</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <value>builtin</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </enum>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:    </rng>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:    <filesystem supported='yes'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <enum name='driverType'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <value>path</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <value>handle</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <value>virtiofs</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </enum>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:    </filesystem>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:    <tpm supported='yes'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <enum name='model'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <value>tpm-tis</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <value>tpm-crb</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </enum>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <enum name='backendModel'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <value>emulator</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <value>external</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </enum>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <enum name='backendVersion'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <value>2.0</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </enum>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:    </tpm>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:    <redirdev supported='yes'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <enum name='bus'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <value>usb</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </enum>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:    </redirdev>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:    <channel supported='yes'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <enum name='type'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <value>pty</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <value>unix</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </enum>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:    </channel>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:    <crypto supported='yes'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <enum name='model'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <enum name='type'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <value>qemu</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </enum>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <enum name='backendModel'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <value>builtin</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </enum>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:    </crypto>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:    <interface supported='yes'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <enum name='backendType'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <value>default</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <value>passt</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </enum>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:    </interface>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:    <panic supported='yes'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <enum name='model'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <value>isa</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <value>hyperv</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </enum>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:    </panic>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:  </devices>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:  <features>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:    <gic supported='no'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:    <vmcoreinfo supported='yes'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:    <genid supported='yes'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:    <backingStoreInput supported='yes'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:    <backup supported='yes'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:    <async-teardown supported='yes'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:    <ps2 supported='yes'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:    <sev supported='no'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:    <sgx supported='no'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:    <hyperv supported='yes'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <enum name='features'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <value>relaxed</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <value>vapic</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <value>spinlocks</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <value>vpindex</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <value>runtime</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <value>synic</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <value>stimer</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <value>reset</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <value>vendor_id</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <value>frequencies</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <value>reenlightenment</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <value>tlbflush</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <value>ipi</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <value>avic</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <value>emsr_bitmap</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <value>xmm_input</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </enum>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:    </hyperv>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:    <launchSecurity supported='no'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:  </features>
Oct  1 09:29:55 np0005464214 nova_compute[260022]: </domainCapabilities>
Oct  1 09:29:55 np0005464214 nova_compute[260022]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Oct  1 09:29:55 np0005464214 nova_compute[260022]: 2025-10-01 13:29:55.690 2 DEBUG nova.virt.libvirt.host [None req-26a6fe1d-c0dc-484f-b511-7a57f237dea9 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Oct  1 09:29:55 np0005464214 nova_compute[260022]: <domainCapabilities>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:  <path>/usr/libexec/qemu-kvm</path>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:  <domain>kvm</domain>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:  <machine>pc-i440fx-rhel7.6.0</machine>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:  <arch>i686</arch>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:  <vcpu max='240'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:  <iothreads supported='yes'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:  <os supported='yes'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:    <enum name='firmware'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:    <loader supported='yes'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <enum name='type'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <value>rom</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <value>pflash</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </enum>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <enum name='readonly'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <value>yes</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <value>no</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </enum>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <enum name='secure'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <value>no</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </enum>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:    </loader>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:  </os>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:  <cpu>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:    <mode name='host-passthrough' supported='yes'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <enum name='hostPassthroughMigratable'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <value>on</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <value>off</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </enum>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:    </mode>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:    <mode name='maximum' supported='yes'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <enum name='maximumMigratable'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <value>on</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <value>off</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </enum>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:    </mode>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:    <mode name='host-model' supported='yes'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model fallback='forbid'>EPYC-Rome</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <vendor>AMD</vendor>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <maxphysaddr mode='passthrough' limit='40'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <feature policy='require' name='x2apic'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <feature policy='require' name='tsc-deadline'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <feature policy='require' name='hypervisor'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <feature policy='require' name='tsc_adjust'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <feature policy='require' name='spec-ctrl'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <feature policy='require' name='stibp'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <feature policy='require' name='arch-capabilities'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <feature policy='require' name='ssbd'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <feature policy='require' name='cmp_legacy'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <feature policy='require' name='overflow-recov'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <feature policy='require' name='succor'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <feature policy='require' name='ibrs'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <feature policy='require' name='amd-ssbd'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <feature policy='require' name='virt-ssbd'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <feature policy='require' name='lbrv'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <feature policy='require' name='tsc-scale'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <feature policy='require' name='vmcb-clean'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <feature policy='require' name='flushbyasid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <feature policy='require' name='pause-filter'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <feature policy='require' name='pfthreshold'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <feature policy='require' name='svme-addr-chk'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <feature policy='require' name='lfence-always-serializing'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <feature policy='require' name='rdctl-no'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <feature policy='require' name='mds-no'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <feature policy='require' name='pschange-mc-no'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <feature policy='require' name='gds-no'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <feature policy='require' name='rfds-no'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <feature policy='disable' name='xsaves'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:    </mode>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:    <mode name='custom' supported='yes'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='Broadwell'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='erms'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='hle'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='invpcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='rtm'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='Broadwell-IBRS'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='erms'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='hle'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='invpcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='rtm'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='Broadwell-noTSX'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='erms'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='invpcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='Broadwell-noTSX-IBRS'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='erms'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='invpcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='Broadwell-v1'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='erms'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='hle'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='invpcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='rtm'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='Broadwell-v2'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='erms'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='invpcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='Broadwell-v3'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='erms'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='hle'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='invpcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='rtm'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='Broadwell-v4'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='erms'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='invpcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='Cascadelake-Server'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512bw'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512cd'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512dq'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512f'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vl'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vnni'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='erms'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='hle'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='invpcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pku'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='rtm'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='Cascadelake-Server-noTSX'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512bw'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512cd'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512dq'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512f'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vl'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vnni'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='erms'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='ibrs-all'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='invpcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pku'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='Cascadelake-Server-v1'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512bw'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512cd'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512dq'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512f'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vl'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vnni'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='erms'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='hle'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='invpcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pku'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='rtm'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='Cascadelake-Server-v2'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512bw'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512cd'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512dq'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512f'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vl'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vnni'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='erms'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='hle'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='ibrs-all'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='invpcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pku'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='rtm'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='Cascadelake-Server-v3'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512bw'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512cd'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512dq'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512f'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vl'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vnni'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='erms'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='ibrs-all'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='invpcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pku'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='Cascadelake-Server-v4'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512bw'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512cd'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512dq'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512f'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vl'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vnni'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='erms'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='ibrs-all'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='invpcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pku'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='Cascadelake-Server-v5'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512bw'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512cd'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512dq'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512f'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vl'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vnni'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='erms'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='ibrs-all'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='invpcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pku'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='xsaves'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='Cooperlake'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512-bf16'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512bw'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512cd'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512dq'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512f'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vl'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vnni'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='erms'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='hle'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='ibrs-all'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='invpcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pku'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='rtm'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='taa-no'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='Cooperlake-v1'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512-bf16'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512bw'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512cd'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512dq'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512f'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vl'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vnni'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='erms'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='hle'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='ibrs-all'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='invpcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pku'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='rtm'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='taa-no'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='Cooperlake-v2'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512-bf16'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512bw'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512cd'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512dq'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512f'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vl'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vnni'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='erms'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='hle'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='ibrs-all'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='invpcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pku'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='rtm'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='taa-no'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='xsaves'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='Denverton'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='erms'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='mpx'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='Denverton-v1'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='erms'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='mpx'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='Denverton-v2'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='erms'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='Denverton-v3'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='erms'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='xsaves'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='Dhyana-v2'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='xsaves'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='EPYC-Genoa'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='amd-psfd'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='auto-ibrs'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512-bf16'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512-vpopcntdq'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512bitalg'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512bw'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512cd'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512dq'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512f'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512ifma'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vbmi'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vbmi2'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vl'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vnni'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='erms'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='fsrm'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='gfni'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='invpcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='la57'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='no-nested-data-bp'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='null-sel-clr-base'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pku'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='stibp-always-on'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='vaes'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='vpclmulqdq'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='xsaves'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='EPYC-Genoa-v1'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='amd-psfd'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='auto-ibrs'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512-bf16'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512-vpopcntdq'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512bitalg'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512bw'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512cd'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512dq'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512f'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512ifma'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vbmi'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vbmi2'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vl'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vnni'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='erms'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='fsrm'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='gfni'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='invpcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='la57'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='no-nested-data-bp'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='null-sel-clr-base'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pku'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='stibp-always-on'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='vaes'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='vpclmulqdq'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='xsaves'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='EPYC-Milan'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='erms'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='fsrm'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='invpcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pku'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='xsaves'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='EPYC-Milan-v1'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='erms'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='fsrm'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='invpcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pku'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='xsaves'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='EPYC-Milan-v2'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='amd-psfd'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='erms'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='fsrm'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='invpcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='no-nested-data-bp'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='null-sel-clr-base'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pku'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='stibp-always-on'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='vaes'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='vpclmulqdq'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='xsaves'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='EPYC-Rome'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='xsaves'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='EPYC-Rome-v1'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='xsaves'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='EPYC-Rome-v2'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='xsaves'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='EPYC-Rome-v3'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='xsaves'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='EPYC-v3'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='xsaves'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='EPYC-v4'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='xsaves'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='GraniteRapids'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='amx-bf16'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='amx-fp16'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='amx-int8'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='amx-tile'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx-vnni'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512-bf16'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512-fp16'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512-vpopcntdq'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512bitalg'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512bw'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512cd'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512dq'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512f'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512ifma'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vbmi'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vbmi2'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vl'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vnni'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='bus-lock-detect'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='erms'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='fbsdp-no'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='fsrc'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='fsrm'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='fsrs'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='fzrm'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='gfni'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='hle'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='ibrs-all'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='invpcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='la57'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='mcdt-no'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pbrsb-no'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pku'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='prefetchiti'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='psdp-no'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='rtm'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='sbdr-ssdp-no'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='serialize'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='taa-no'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='tsx-ldtrk'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='vaes'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='vpclmulqdq'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='xfd'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='xsaves'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='GraniteRapids-v1'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='amx-bf16'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='amx-fp16'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='amx-int8'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='amx-tile'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx-vnni'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512-bf16'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512-fp16'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512-vpopcntdq'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512bitalg'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512bw'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512cd'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512dq'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512f'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512ifma'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vbmi'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vbmi2'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vl'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vnni'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='bus-lock-detect'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='erms'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='fbsdp-no'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='fsrc'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='fsrm'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='fsrs'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='fzrm'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='gfni'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='hle'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='ibrs-all'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='invpcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='la57'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='mcdt-no'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pbrsb-no'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pku'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='prefetchiti'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='psdp-no'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='rtm'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='sbdr-ssdp-no'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='serialize'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='taa-no'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='tsx-ldtrk'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='vaes'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='vpclmulqdq'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='xfd'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='xsaves'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='GraniteRapids-v2'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='amx-bf16'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='amx-fp16'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='amx-int8'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='amx-tile'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx-vnni'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx10'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx10-128'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx10-256'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx10-512'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512-bf16'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512-fp16'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512-vpopcntdq'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512bitalg'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512bw'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512cd'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512dq'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512f'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512ifma'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vbmi'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vbmi2'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vl'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vnni'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='bus-lock-detect'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='cldemote'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='erms'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='fbsdp-no'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='fsrc'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='fsrm'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='fsrs'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='fzrm'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='gfni'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='hle'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='ibrs-all'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='invpcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='la57'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='mcdt-no'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='movdir64b'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='movdiri'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pbrsb-no'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pku'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='prefetchiti'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='psdp-no'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='rtm'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='sbdr-ssdp-no'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='serialize'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='ss'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='taa-no'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='tsx-ldtrk'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='vaes'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='vpclmulqdq'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='xfd'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='xsaves'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='Haswell'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='erms'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='hle'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='invpcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='rtm'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='Haswell-IBRS'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='erms'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='hle'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='invpcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='rtm'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='Haswell-noTSX'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='erms'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='invpcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='Haswell-noTSX-IBRS'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='erms'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='invpcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='Haswell-v1'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='erms'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='hle'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='invpcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='rtm'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='Haswell-v2'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='erms'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='invpcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='Haswell-v3'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='erms'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='hle'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='invpcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='rtm'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='Haswell-v4'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='erms'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='invpcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='Icelake-Server'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512-vpopcntdq'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512bitalg'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512bw'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512cd'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512dq'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512f'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vbmi'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vbmi2'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vl'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vnni'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='erms'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='gfni'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='hle'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='invpcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='la57'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pku'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='rtm'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='vaes'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='vpclmulqdq'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='Icelake-Server-noTSX'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512-vpopcntdq'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512bitalg'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512bw'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512cd'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512dq'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512f'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vbmi'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vbmi2'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vl'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vnni'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='erms'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='gfni'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='invpcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='la57'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pku'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='vaes'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='vpclmulqdq'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='Icelake-Server-v1'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512-vpopcntdq'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512bitalg'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512bw'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512cd'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512dq'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512f'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vbmi'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vbmi2'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vl'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vnni'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='erms'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='gfni'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='hle'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='invpcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='la57'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pku'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='rtm'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='vaes'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='vpclmulqdq'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='Icelake-Server-v2'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512-vpopcntdq'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512bitalg'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512bw'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512cd'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512dq'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512f'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vbmi'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vbmi2'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vl'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vnni'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='erms'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='gfni'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='invpcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='la57'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pku'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='vaes'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='vpclmulqdq'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='Icelake-Server-v3'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512-vpopcntdq'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512bitalg'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512bw'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512cd'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512dq'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512f'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vbmi'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vbmi2'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vl'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vnni'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='erms'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='gfni'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='ibrs-all'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='invpcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='la57'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pku'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='taa-no'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='vaes'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='vpclmulqdq'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='Icelake-Server-v4'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512-vpopcntdq'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512bitalg'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512bw'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512cd'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512dq'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512f'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512ifma'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vbmi'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vbmi2'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vl'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vnni'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='erms'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='fsrm'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='gfni'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='ibrs-all'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='invpcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='la57'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pku'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='taa-no'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='vaes'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='vpclmulqdq'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='Icelake-Server-v5'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512-vpopcntdq'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512bitalg'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512bw'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512cd'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512dq'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512f'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512ifma'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vbmi'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vbmi2'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vl'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vnni'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='erms'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='fsrm'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='gfni'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='ibrs-all'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='invpcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='la57'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pku'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='taa-no'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='vaes'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='vpclmulqdq'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='xsaves'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='Icelake-Server-v6'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512-vpopcntdq'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512bitalg'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512bw'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512cd'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512dq'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512f'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512ifma'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vbmi'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vbmi2'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vl'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vnni'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='erms'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='fsrm'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='gfni'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='ibrs-all'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='invpcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='la57'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pku'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='taa-no'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='vaes'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='vpclmulqdq'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='xsaves'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='Icelake-Server-v7'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512-vpopcntdq'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512bitalg'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512bw'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512cd'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512dq'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512f'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512ifma'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vbmi'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vbmi2'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vl'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vnni'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='erms'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='fsrm'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='gfni'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='hle'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='ibrs-all'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='invpcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='la57'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pku'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='rtm'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='taa-no'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='vaes'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='vpclmulqdq'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='xsaves'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='IvyBridge'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='erms'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='IvyBridge-IBRS'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='erms'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='IvyBridge-v1'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='erms'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='IvyBridge-v2'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='erms'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='KnightsMill'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512-4fmaps'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512-4vnniw'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512-vpopcntdq'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512cd'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512er'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512f'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512pf'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='erms'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='ss'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='KnightsMill-v1'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512-4fmaps'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512-4vnniw'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512-vpopcntdq'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512cd'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512er'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512f'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512pf'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='erms'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='ss'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='Opteron_G4'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='fma4'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='xop'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='Opteron_G4-v1'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='fma4'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='xop'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='Opteron_G5'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='fma4'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='tbm'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='xop'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='Opteron_G5-v1'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='fma4'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='tbm'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='xop'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='SapphireRapids'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='amx-bf16'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='amx-int8'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='amx-tile'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx-vnni'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512-bf16'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512-fp16'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512-vpopcntdq'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512bitalg'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512bw'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512cd'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512dq'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512f'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512ifma'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vbmi'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vbmi2'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vl'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vnni'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='bus-lock-detect'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='erms'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='fsrc'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='fsrm'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='fsrs'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='fzrm'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='gfni'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='hle'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='ibrs-all'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='invpcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='la57'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pku'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='rtm'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='serialize'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='taa-no'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='tsx-ldtrk'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='vaes'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='vpclmulqdq'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='xfd'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='xsaves'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='SapphireRapids-v1'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='amx-bf16'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='amx-int8'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='amx-tile'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx-vnni'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512-bf16'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512-fp16'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512-vpopcntdq'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512bitalg'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512bw'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512cd'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512dq'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512f'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512ifma'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vbmi'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vbmi2'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vl'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vnni'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='bus-lock-detect'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='erms'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='fsrc'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='fsrm'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='fsrs'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='fzrm'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='gfni'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='hle'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='ibrs-all'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='invpcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='la57'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pku'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='rtm'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='serialize'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='taa-no'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='tsx-ldtrk'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='vaes'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='vpclmulqdq'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='xfd'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='xsaves'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='SapphireRapids-v2'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='amx-bf16'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='amx-int8'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='amx-tile'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx-vnni'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512-bf16'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512-fp16'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512-vpopcntdq'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512bitalg'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512bw'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512cd'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512dq'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512f'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512ifma'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vbmi'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vbmi2'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vl'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vnni'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='bus-lock-detect'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='erms'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='fbsdp-no'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='fsrc'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='fsrm'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='fsrs'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='fzrm'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='gfni'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='hle'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='ibrs-all'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='invpcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='la57'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pku'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='psdp-no'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='rtm'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='sbdr-ssdp-no'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='serialize'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='taa-no'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='tsx-ldtrk'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='vaes'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='vpclmulqdq'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='xfd'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='xsaves'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='SapphireRapids-v3'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='amx-bf16'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='amx-int8'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='amx-tile'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx-vnni'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512-bf16'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512-fp16'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512-vpopcntdq'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512bitalg'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512bw'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512cd'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512dq'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512f'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512ifma'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vbmi'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vbmi2'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vl'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vnni'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='bus-lock-detect'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='cldemote'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='erms'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='fbsdp-no'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='fsrc'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='fsrm'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='fsrs'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='fzrm'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='gfni'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='hle'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='ibrs-all'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='invpcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='la57'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='movdir64b'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='movdiri'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pku'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='psdp-no'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='rtm'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='sbdr-ssdp-no'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='serialize'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='ss'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='taa-no'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='tsx-ldtrk'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='vaes'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='vpclmulqdq'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='xfd'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='xsaves'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='SierraForest'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx-ifma'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx-ne-convert'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx-vnni'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx-vnni-int8'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='bus-lock-detect'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='cmpccxadd'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='erms'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='fbsdp-no'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='fsrm'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='fsrs'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='gfni'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='ibrs-all'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='invpcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='mcdt-no'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pbrsb-no'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pku'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='psdp-no'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='sbdr-ssdp-no'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='serialize'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='vaes'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='vpclmulqdq'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='xsaves'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='SierraForest-v1'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx-ifma'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx-ne-convert'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx-vnni'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx-vnni-int8'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='bus-lock-detect'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='cmpccxadd'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='erms'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='fbsdp-no'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='fsrm'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='fsrs'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='gfni'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='ibrs-all'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='invpcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='mcdt-no'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pbrsb-no'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pku'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='psdp-no'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='sbdr-ssdp-no'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='serialize'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='vaes'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='vpclmulqdq'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='xsaves'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='Skylake-Client'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='erms'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='hle'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='invpcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='rtm'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='Skylake-Client-IBRS'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='erms'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='hle'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='invpcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='rtm'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='erms'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='invpcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='Skylake-Client-v1'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='erms'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='hle'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='invpcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='rtm'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='Skylake-Client-v2'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='erms'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='hle'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='invpcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='rtm'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='Skylake-Client-v3'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='erms'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='invpcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='Skylake-Client-v4'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='erms'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='invpcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='xsaves'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='Skylake-Server'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512bw'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512cd'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512dq'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512f'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vl'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='erms'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='hle'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='invpcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pku'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='rtm'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='Skylake-Server-IBRS'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512bw'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512cd'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512dq'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512f'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vl'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='erms'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='hle'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='invpcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pku'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='rtm'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512bw'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512cd'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512dq'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512f'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vl'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='erms'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='invpcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pku'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='Skylake-Server-v1'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512bw'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512cd'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512dq'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512f'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vl'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='erms'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='hle'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='invpcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pku'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='rtm'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='Skylake-Server-v2'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512bw'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512cd'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512dq'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512f'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vl'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='erms'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='hle'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='invpcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pku'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='rtm'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='Skylake-Server-v3'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512bw'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512cd'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512dq'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512f'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vl'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='erms'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='invpcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pku'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='Skylake-Server-v4'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512bw'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512cd'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512dq'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512f'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vl'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='erms'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='invpcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pku'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='Skylake-Server-v5'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512bw'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512cd'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512dq'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512f'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vl'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='erms'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='invpcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pku'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='xsaves'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='Snowridge'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='cldemote'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='core-capability'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='erms'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='gfni'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='movdir64b'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='movdiri'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='mpx'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='split-lock-detect'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='Snowridge-v1'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='cldemote'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='core-capability'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='erms'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='gfni'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='movdir64b'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='movdiri'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='mpx'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='split-lock-detect'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='Snowridge-v2'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='cldemote'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='core-capability'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='erms'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='gfni'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='movdir64b'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='movdiri'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='split-lock-detect'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='Snowridge-v3'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='cldemote'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='core-capability'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='erms'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='gfni'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='movdir64b'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='movdiri'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='split-lock-detect'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='xsaves'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='Snowridge-v4'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='cldemote'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='erms'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='gfni'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='movdir64b'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='movdiri'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='xsaves'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='athlon'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='3dnow'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='3dnowext'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='athlon-v1'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='3dnow'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='3dnowext'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='core2duo'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='ss'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='core2duo-v1'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='ss'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='coreduo'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='ss'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='coreduo-v1'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='ss'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='n270'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='ss'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='n270-v1'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='ss'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='phenom'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='3dnow'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='3dnowext'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='phenom-v1'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='3dnow'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='3dnowext'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:    </mode>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:  </cpu>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:  <memoryBacking supported='yes'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:    <enum name='sourceType'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <value>file</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <value>anonymous</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <value>memfd</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:    </enum>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:  </memoryBacking>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:  <devices>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:    <disk supported='yes'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <enum name='diskDevice'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <value>disk</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <value>cdrom</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <value>floppy</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <value>lun</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </enum>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <enum name='bus'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <value>ide</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <value>fdc</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <value>scsi</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <value>virtio</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <value>usb</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <value>sata</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </enum>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <enum name='model'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <value>virtio</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <value>virtio-transitional</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <value>virtio-non-transitional</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </enum>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:    </disk>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:    <graphics supported='yes'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <enum name='type'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <value>vnc</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <value>egl-headless</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <value>dbus</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </enum>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:    </graphics>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:    <video supported='yes'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <enum name='modelType'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <value>vga</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <value>cirrus</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <value>virtio</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <value>none</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <value>bochs</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <value>ramfb</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </enum>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:    </video>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:    <hostdev supported='yes'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <enum name='mode'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <value>subsystem</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </enum>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <enum name='startupPolicy'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <value>default</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <value>mandatory</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <value>requisite</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <value>optional</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </enum>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <enum name='subsysType'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <value>usb</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <value>pci</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <value>scsi</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </enum>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <enum name='capsType'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <enum name='pciBackend'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:    </hostdev>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:    <rng supported='yes'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <enum name='model'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <value>virtio</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <value>virtio-transitional</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <value>virtio-non-transitional</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </enum>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <enum name='backendModel'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <value>random</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <value>egd</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <value>builtin</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </enum>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:    </rng>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:    <filesystem supported='yes'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <enum name='driverType'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <value>path</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <value>handle</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <value>virtiofs</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </enum>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:    </filesystem>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:    <tpm supported='yes'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <enum name='model'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <value>tpm-tis</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <value>tpm-crb</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </enum>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <enum name='backendModel'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <value>emulator</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <value>external</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </enum>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <enum name='backendVersion'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <value>2.0</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </enum>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:    </tpm>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:    <redirdev supported='yes'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <enum name='bus'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <value>usb</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </enum>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:    </redirdev>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:    <channel supported='yes'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <enum name='type'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <value>pty</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <value>unix</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </enum>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:    </channel>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:    <crypto supported='yes'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <enum name='model'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <enum name='type'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <value>qemu</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </enum>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <enum name='backendModel'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <value>builtin</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </enum>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:    </crypto>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:    <interface supported='yes'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <enum name='backendType'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <value>default</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <value>passt</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </enum>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:    </interface>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:    <panic supported='yes'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <enum name='model'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <value>isa</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <value>hyperv</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </enum>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:    </panic>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:  </devices>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:  <features>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:    <gic supported='no'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:    <vmcoreinfo supported='yes'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:    <genid supported='yes'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:    <backingStoreInput supported='yes'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:    <backup supported='yes'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:    <async-teardown supported='yes'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:    <ps2 supported='yes'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:    <sev supported='no'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:    <sgx supported='no'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:    <hyperv supported='yes'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <enum name='features'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <value>relaxed</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <value>vapic</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <value>spinlocks</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <value>vpindex</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <value>runtime</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <value>synic</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <value>stimer</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <value>reset</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <value>vendor_id</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <value>frequencies</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <value>reenlightenment</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <value>tlbflush</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <value>ipi</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <value>avic</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <value>emsr_bitmap</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <value>xmm_input</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </enum>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:    </hyperv>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:    <launchSecurity supported='no'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:  </features>
Oct  1 09:29:55 np0005464214 nova_compute[260022]: </domainCapabilities>
Oct  1 09:29:55 np0005464214 nova_compute[260022]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037#033[00m
Oct  1 09:29:55 np0005464214 nova_compute[260022]: 2025-10-01 13:29:55.717 2 DEBUG nova.virt.libvirt.host [None req-26a6fe1d-c0dc-484f-b511-7a57f237dea9 - - - - - -] Getting domain capabilities for x86_64 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952#033[00m
Oct  1 09:29:55 np0005464214 nova_compute[260022]: 2025-10-01 13:29:55.723 2 DEBUG nova.virt.libvirt.host [None req-26a6fe1d-c0dc-484f-b511-7a57f237dea9 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35:
Oct  1 09:29:55 np0005464214 nova_compute[260022]: <domainCapabilities>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:  <path>/usr/libexec/qemu-kvm</path>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:  <domain>kvm</domain>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:  <machine>pc-q35-rhel9.6.0</machine>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:  <arch>x86_64</arch>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:  <vcpu max='4096'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:  <iothreads supported='yes'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:  <os supported='yes'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:    <enum name='firmware'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <value>efi</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:    </enum>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:    <loader supported='yes'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <value>/usr/share/edk2/ovmf/OVMF_CODE.secboot.fd</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <value>/usr/share/edk2/ovmf/OVMF_CODE.fd</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <value>/usr/share/edk2/ovmf/OVMF.amdsev.fd</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <value>/usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <enum name='type'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <value>rom</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <value>pflash</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </enum>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <enum name='readonly'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <value>yes</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <value>no</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </enum>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <enum name='secure'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <value>yes</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <value>no</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </enum>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:    </loader>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:  </os>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:  <cpu>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:    <mode name='host-passthrough' supported='yes'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <enum name='hostPassthroughMigratable'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <value>on</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <value>off</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </enum>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:    </mode>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:    <mode name='maximum' supported='yes'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <enum name='maximumMigratable'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <value>on</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <value>off</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </enum>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:    </mode>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:    <mode name='host-model' supported='yes'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model fallback='forbid'>EPYC-Rome</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <vendor>AMD</vendor>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <maxphysaddr mode='passthrough' limit='40'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <feature policy='require' name='x2apic'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <feature policy='require' name='tsc-deadline'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <feature policy='require' name='hypervisor'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <feature policy='require' name='tsc_adjust'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <feature policy='require' name='spec-ctrl'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <feature policy='require' name='stibp'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <feature policy='require' name='arch-capabilities'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <feature policy='require' name='ssbd'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <feature policy='require' name='cmp_legacy'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <feature policy='require' name='overflow-recov'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <feature policy='require' name='succor'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <feature policy='require' name='ibrs'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <feature policy='require' name='amd-ssbd'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <feature policy='require' name='virt-ssbd'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <feature policy='require' name='lbrv'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <feature policy='require' name='tsc-scale'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <feature policy='require' name='vmcb-clean'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <feature policy='require' name='flushbyasid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <feature policy='require' name='pause-filter'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <feature policy='require' name='pfthreshold'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <feature policy='require' name='svme-addr-chk'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <feature policy='require' name='lfence-always-serializing'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <feature policy='require' name='rdctl-no'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <feature policy='require' name='mds-no'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <feature policy='require' name='pschange-mc-no'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <feature policy='require' name='gds-no'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <feature policy='require' name='rfds-no'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <feature policy='disable' name='xsaves'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:    </mode>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:    <mode name='custom' supported='yes'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='Broadwell'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='erms'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='hle'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='invpcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='rtm'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='Broadwell-IBRS'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='erms'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='hle'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='invpcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='rtm'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='Broadwell-noTSX'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='erms'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='invpcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='Broadwell-noTSX-IBRS'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='erms'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='invpcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='Broadwell-v1'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='erms'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='hle'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='invpcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='rtm'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='Broadwell-v2'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='erms'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='invpcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='Broadwell-v3'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='erms'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='hle'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='invpcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='rtm'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='Broadwell-v4'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='erms'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='invpcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='Cascadelake-Server'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512bw'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512cd'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512dq'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512f'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vl'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vnni'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='erms'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='hle'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='invpcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pku'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='rtm'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='Cascadelake-Server-noTSX'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512bw'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512cd'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512dq'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512f'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vl'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vnni'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='erms'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='ibrs-all'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='invpcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pku'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='Cascadelake-Server-v1'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512bw'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512cd'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512dq'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512f'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vl'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vnni'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='erms'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='hle'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='invpcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pku'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='rtm'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='Cascadelake-Server-v2'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512bw'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512cd'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512dq'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512f'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vl'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vnni'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='erms'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='hle'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='ibrs-all'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='invpcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pku'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='rtm'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='Cascadelake-Server-v3'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512bw'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512cd'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512dq'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512f'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vl'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vnni'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='erms'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='ibrs-all'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='invpcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pku'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='Cascadelake-Server-v4'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512bw'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512cd'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512dq'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512f'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vl'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vnni'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='erms'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='ibrs-all'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='invpcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pku'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='Cascadelake-Server-v5'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512bw'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512cd'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512dq'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512f'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vl'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vnni'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='erms'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='ibrs-all'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='invpcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pku'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='xsaves'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='Cooperlake'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512-bf16'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512bw'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512cd'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512dq'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512f'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vl'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vnni'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='erms'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='hle'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='ibrs-all'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='invpcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pku'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='rtm'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='taa-no'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='Cooperlake-v1'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512-bf16'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512bw'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512cd'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512dq'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512f'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vl'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vnni'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='erms'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='hle'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='ibrs-all'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='invpcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pku'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='rtm'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='taa-no'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='Cooperlake-v2'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512-bf16'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512bw'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512cd'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512dq'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512f'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vl'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vnni'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='erms'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='hle'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='ibrs-all'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='invpcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pku'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='rtm'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='taa-no'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='xsaves'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='Denverton'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='erms'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='mpx'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='Denverton-v1'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='erms'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='mpx'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='Denverton-v2'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='erms'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='Denverton-v3'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='erms'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='xsaves'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='Dhyana-v2'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='xsaves'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='EPYC-Genoa'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='amd-psfd'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='auto-ibrs'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512-bf16'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512-vpopcntdq'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512bitalg'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512bw'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512cd'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512dq'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512f'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512ifma'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vbmi'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vbmi2'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vl'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vnni'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='erms'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='fsrm'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='gfni'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='invpcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='la57'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='no-nested-data-bp'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='null-sel-clr-base'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pku'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='stibp-always-on'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='vaes'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='vpclmulqdq'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='xsaves'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='EPYC-Genoa-v1'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='amd-psfd'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='auto-ibrs'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512-bf16'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512-vpopcntdq'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512bitalg'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512bw'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512cd'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512dq'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512f'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512ifma'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vbmi'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vbmi2'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vl'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vnni'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='erms'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='fsrm'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='gfni'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='invpcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='la57'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='no-nested-data-bp'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='null-sel-clr-base'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pku'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='stibp-always-on'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='vaes'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='vpclmulqdq'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='xsaves'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='EPYC-Milan'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='erms'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='fsrm'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='invpcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pku'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='xsaves'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='EPYC-Milan-v1'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='erms'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='fsrm'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='invpcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pku'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='xsaves'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='EPYC-Milan-v2'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='amd-psfd'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='erms'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='fsrm'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='invpcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='no-nested-data-bp'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='null-sel-clr-base'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pku'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='stibp-always-on'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='vaes'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='vpclmulqdq'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='xsaves'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='EPYC-Rome'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='xsaves'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='EPYC-Rome-v1'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='xsaves'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='EPYC-Rome-v2'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='xsaves'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='EPYC-Rome-v3'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='xsaves'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='EPYC-v3'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='xsaves'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='EPYC-v4'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='xsaves'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='GraniteRapids'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='amx-bf16'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='amx-fp16'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='amx-int8'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='amx-tile'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx-vnni'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512-bf16'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512-fp16'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512-vpopcntdq'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512bitalg'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512bw'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512cd'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512dq'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512f'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512ifma'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vbmi'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vbmi2'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vl'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vnni'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='bus-lock-detect'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='erms'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='fbsdp-no'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='fsrc'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='fsrm'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='fsrs'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='fzrm'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='gfni'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='hle'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='ibrs-all'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='invpcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='la57'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='mcdt-no'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pbrsb-no'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pku'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='prefetchiti'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='psdp-no'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='rtm'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='sbdr-ssdp-no'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='serialize'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='taa-no'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='tsx-ldtrk'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='vaes'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='vpclmulqdq'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='xfd'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='xsaves'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='GraniteRapids-v1'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='amx-bf16'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='amx-fp16'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='amx-int8'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='amx-tile'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx-vnni'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512-bf16'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512-fp16'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512-vpopcntdq'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512bitalg'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512bw'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512cd'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512dq'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512f'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512ifma'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vbmi'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vbmi2'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vl'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vnni'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='bus-lock-detect'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='erms'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='fbsdp-no'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='fsrc'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='fsrm'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='fsrs'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='fzrm'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='gfni'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='hle'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='ibrs-all'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='invpcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='la57'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='mcdt-no'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pbrsb-no'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pku'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='prefetchiti'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='psdp-no'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='rtm'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='sbdr-ssdp-no'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='serialize'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='taa-no'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='tsx-ldtrk'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='vaes'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='vpclmulqdq'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='xfd'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='xsaves'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='GraniteRapids-v2'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='amx-bf16'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='amx-fp16'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='amx-int8'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='amx-tile'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx-vnni'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx10'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx10-128'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx10-256'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx10-512'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512-bf16'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512-fp16'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512-vpopcntdq'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512bitalg'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512bw'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512cd'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512dq'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512f'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512ifma'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vbmi'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vbmi2'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vl'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vnni'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='bus-lock-detect'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='cldemote'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='erms'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='fbsdp-no'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='fsrc'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='fsrm'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='fsrs'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='fzrm'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='gfni'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='hle'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='ibrs-all'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='invpcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='la57'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='mcdt-no'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='movdir64b'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='movdiri'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pbrsb-no'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pku'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='prefetchiti'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='psdp-no'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='rtm'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='sbdr-ssdp-no'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='serialize'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='ss'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='taa-no'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='tsx-ldtrk'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='vaes'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='vpclmulqdq'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='xfd'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='xsaves'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='Haswell'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='erms'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='hle'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='invpcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='rtm'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='Haswell-IBRS'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='erms'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='hle'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='invpcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='rtm'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='Haswell-noTSX'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='erms'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='invpcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='Haswell-noTSX-IBRS'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='erms'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='invpcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='Haswell-v1'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='erms'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='hle'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='invpcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='rtm'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='Haswell-v2'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='erms'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='invpcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='Haswell-v3'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='erms'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='hle'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='invpcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='rtm'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='Haswell-v4'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='erms'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='invpcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='Icelake-Server'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512-vpopcntdq'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512bitalg'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512bw'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512cd'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512dq'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512f'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vbmi'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vbmi2'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vl'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vnni'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='erms'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='gfni'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='hle'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='invpcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='la57'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pku'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='rtm'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='vaes'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='vpclmulqdq'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='Icelake-Server-noTSX'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512-vpopcntdq'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512bitalg'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512bw'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512cd'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512dq'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512f'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vbmi'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vbmi2'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vl'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vnni'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='erms'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='gfni'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='invpcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='la57'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pku'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='vaes'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='vpclmulqdq'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='Icelake-Server-v1'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512-vpopcntdq'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512bitalg'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512bw'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512cd'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512dq'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512f'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vbmi'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vbmi2'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vl'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vnni'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='erms'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='gfni'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='hle'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='invpcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='la57'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pku'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='rtm'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='vaes'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='vpclmulqdq'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='Icelake-Server-v2'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512-vpopcntdq'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512bitalg'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512bw'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512cd'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512dq'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512f'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vbmi'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vbmi2'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vl'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vnni'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='erms'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='gfni'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='invpcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='la57'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pku'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='vaes'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='vpclmulqdq'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='Icelake-Server-v3'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512-vpopcntdq'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512bitalg'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512bw'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512cd'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512dq'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512f'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vbmi'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vbmi2'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vl'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vnni'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='erms'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='gfni'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='ibrs-all'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='invpcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='la57'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pku'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='taa-no'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='vaes'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='vpclmulqdq'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='Icelake-Server-v4'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512-vpopcntdq'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512bitalg'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512bw'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512cd'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512dq'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512f'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512ifma'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vbmi'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vbmi2'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vl'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vnni'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='erms'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='fsrm'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='gfni'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='ibrs-all'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='invpcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='la57'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pku'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='taa-no'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='vaes'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='vpclmulqdq'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='Icelake-Server-v5'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512-vpopcntdq'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512bitalg'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512bw'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512cd'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512dq'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512f'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512ifma'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vbmi'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vbmi2'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vl'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vnni'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='erms'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='fsrm'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='gfni'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='ibrs-all'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='invpcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='la57'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pku'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='taa-no'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='vaes'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='vpclmulqdq'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='xsaves'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='Icelake-Server-v6'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512-vpopcntdq'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512bitalg'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512bw'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512cd'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512dq'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512f'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512ifma'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vbmi'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vbmi2'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vl'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vnni'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='erms'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='fsrm'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='gfni'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='ibrs-all'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='invpcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='la57'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pku'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='taa-no'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='vaes'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='vpclmulqdq'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='xsaves'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='Icelake-Server-v7'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512-vpopcntdq'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512bitalg'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512bw'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512cd'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512dq'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512f'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512ifma'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vbmi'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vbmi2'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vl'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vnni'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='erms'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='fsrm'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='gfni'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='hle'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='ibrs-all'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='invpcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='la57'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pku'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='rtm'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='taa-no'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='vaes'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='vpclmulqdq'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='xsaves'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='IvyBridge'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='erms'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='IvyBridge-IBRS'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='erms'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='IvyBridge-v1'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='erms'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='IvyBridge-v2'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='erms'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='KnightsMill'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512-4fmaps'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512-4vnniw'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512-vpopcntdq'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512cd'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512er'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512f'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512pf'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='erms'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='ss'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='KnightsMill-v1'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512-4fmaps'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512-4vnniw'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512-vpopcntdq'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512cd'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512er'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512f'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512pf'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='erms'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='ss'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='Opteron_G4'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='fma4'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='xop'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='Opteron_G4-v1'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='fma4'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='xop'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='Opteron_G5'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='fma4'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='tbm'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='xop'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='Opteron_G5-v1'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='fma4'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='tbm'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='xop'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='SapphireRapids'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='amx-bf16'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='amx-int8'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='amx-tile'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx-vnni'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512-bf16'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512-fp16'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512-vpopcntdq'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512bitalg'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512bw'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512cd'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512dq'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512f'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512ifma'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vbmi'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vbmi2'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vl'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vnni'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='bus-lock-detect'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='erms'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='fsrc'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='fsrm'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='fsrs'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='fzrm'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='gfni'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='hle'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='ibrs-all'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='invpcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='la57'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pku'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='rtm'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='serialize'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='taa-no'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='tsx-ldtrk'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='vaes'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='vpclmulqdq'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='xfd'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='xsaves'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='SapphireRapids-v1'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='amx-bf16'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='amx-int8'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='amx-tile'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx-vnni'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512-bf16'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512-fp16'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512-vpopcntdq'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512bitalg'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512bw'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512cd'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512dq'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512f'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512ifma'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vbmi'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vbmi2'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vl'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vnni'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='bus-lock-detect'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='erms'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='fsrc'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='fsrm'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='fsrs'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='fzrm'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='gfni'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='hle'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='ibrs-all'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='invpcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='la57'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pku'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='rtm'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='serialize'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='taa-no'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='tsx-ldtrk'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='vaes'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='vpclmulqdq'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='xfd'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='xsaves'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='SapphireRapids-v2'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='amx-bf16'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='amx-int8'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='amx-tile'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx-vnni'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512-bf16'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512-fp16'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512-vpopcntdq'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512bitalg'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512bw'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512cd'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512dq'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512f'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512ifma'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vbmi'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vbmi2'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vl'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vnni'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='bus-lock-detect'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='erms'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='fbsdp-no'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='fsrc'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='fsrm'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='fsrs'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='fzrm'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='gfni'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='hle'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='ibrs-all'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='invpcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='la57'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pku'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='psdp-no'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='rtm'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='sbdr-ssdp-no'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='serialize'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='taa-no'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='tsx-ldtrk'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='vaes'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='vpclmulqdq'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='xfd'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='xsaves'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='SapphireRapids-v3'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='amx-bf16'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='amx-int8'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='amx-tile'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx-vnni'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512-bf16'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512-fp16'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512-vpopcntdq'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512bitalg'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512bw'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512cd'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512dq'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512f'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512ifma'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vbmi'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vbmi2'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vl'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vnni'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='bus-lock-detect'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='cldemote'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='erms'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='fbsdp-no'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='fsrc'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='fsrm'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='fsrs'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='fzrm'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='gfni'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='hle'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='ibrs-all'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='invpcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='la57'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='movdir64b'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='movdiri'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pku'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='psdp-no'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='rtm'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='sbdr-ssdp-no'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='serialize'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='ss'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='taa-no'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='tsx-ldtrk'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='vaes'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='vpclmulqdq'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='xfd'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='xsaves'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='SierraForest'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx-ifma'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx-ne-convert'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx-vnni'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx-vnni-int8'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='bus-lock-detect'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='cmpccxadd'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='erms'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='fbsdp-no'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='fsrm'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='fsrs'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='gfni'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='ibrs-all'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='invpcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='mcdt-no'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pbrsb-no'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pku'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='psdp-no'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='sbdr-ssdp-no'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='serialize'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='vaes'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='vpclmulqdq'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='xsaves'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='SierraForest-v1'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx-ifma'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx-ne-convert'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx-vnni'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx-vnni-int8'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='bus-lock-detect'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='cmpccxadd'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='erms'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='fbsdp-no'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='fsrm'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='fsrs'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='gfni'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='ibrs-all'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='invpcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='mcdt-no'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pbrsb-no'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pku'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='psdp-no'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='sbdr-ssdp-no'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='serialize'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='vaes'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='vpclmulqdq'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='xsaves'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='Skylake-Client'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='erms'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='hle'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='invpcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='rtm'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='Skylake-Client-IBRS'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='erms'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='hle'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='invpcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='rtm'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='erms'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='invpcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='Skylake-Client-v1'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='erms'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='hle'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='invpcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='rtm'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='Skylake-Client-v2'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='erms'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='hle'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='invpcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='rtm'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='Skylake-Client-v3'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='erms'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='invpcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='Skylake-Client-v4'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='erms'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='invpcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='xsaves'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='Skylake-Server'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512bw'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512cd'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512dq'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512f'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vl'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='erms'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='hle'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='invpcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pku'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='rtm'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='Skylake-Server-IBRS'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512bw'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512cd'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512dq'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512f'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vl'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='erms'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='hle'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='invpcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pku'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='rtm'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512bw'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512cd'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512dq'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512f'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vl'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='erms'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='invpcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pku'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='Skylake-Server-v1'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512bw'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512cd'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512dq'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512f'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vl'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='erms'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='hle'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='invpcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pku'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='rtm'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='Skylake-Server-v2'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512bw'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512cd'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512dq'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512f'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vl'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='erms'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='hle'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='invpcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pku'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='rtm'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='Skylake-Server-v3'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512bw'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512cd'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512dq'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512f'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vl'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='erms'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='invpcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pku'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='Skylake-Server-v4'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512bw'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512cd'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512dq'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512f'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vl'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='erms'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='invpcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pku'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='Skylake-Server-v5'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512bw'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512cd'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512dq'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512f'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vl'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='erms'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='invpcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pku'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='xsaves'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='Snowridge'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='cldemote'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='core-capability'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='erms'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='gfni'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='movdir64b'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='movdiri'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='mpx'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='split-lock-detect'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='Snowridge-v1'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='cldemote'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='core-capability'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='erms'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='gfni'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='movdir64b'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='movdiri'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='mpx'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='split-lock-detect'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='Snowridge-v2'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='cldemote'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='core-capability'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='erms'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='gfni'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='movdir64b'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='movdiri'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='split-lock-detect'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='Snowridge-v3'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='cldemote'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='core-capability'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='erms'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='gfni'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='movdir64b'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='movdiri'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='split-lock-detect'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='xsaves'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='Snowridge-v4'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='cldemote'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='erms'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='gfni'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='movdir64b'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='movdiri'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='xsaves'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='athlon'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='3dnow'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='3dnowext'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='athlon-v1'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='3dnow'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='3dnowext'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='core2duo'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='ss'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='core2duo-v1'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='ss'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='coreduo'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='ss'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='coreduo-v1'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='ss'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='n270'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='ss'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='n270-v1'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='ss'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='phenom'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='3dnow'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='3dnowext'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='phenom-v1'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='3dnow'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='3dnowext'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:    </mode>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:  </cpu>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:  <memoryBacking supported='yes'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:    <enum name='sourceType'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <value>file</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <value>anonymous</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <value>memfd</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:    </enum>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:  </memoryBacking>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:  <devices>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:    <disk supported='yes'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <enum name='diskDevice'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <value>disk</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <value>cdrom</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <value>floppy</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <value>lun</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </enum>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <enum name='bus'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <value>fdc</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <value>scsi</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <value>virtio</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <value>usb</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <value>sata</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </enum>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <enum name='model'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <value>virtio</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <value>virtio-transitional</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <value>virtio-non-transitional</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </enum>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:    </disk>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:    <graphics supported='yes'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <enum name='type'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <value>vnc</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <value>egl-headless</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <value>dbus</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </enum>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:    </graphics>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:    <video supported='yes'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <enum name='modelType'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <value>vga</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <value>cirrus</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <value>virtio</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <value>none</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <value>bochs</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <value>ramfb</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </enum>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:    </video>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:    <hostdev supported='yes'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <enum name='mode'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <value>subsystem</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </enum>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <enum name='startupPolicy'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <value>default</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <value>mandatory</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <value>requisite</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <value>optional</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </enum>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <enum name='subsysType'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <value>usb</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <value>pci</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <value>scsi</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </enum>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <enum name='capsType'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <enum name='pciBackend'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:    </hostdev>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:    <rng supported='yes'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <enum name='model'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <value>virtio</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <value>virtio-transitional</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <value>virtio-non-transitional</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </enum>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <enum name='backendModel'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <value>random</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <value>egd</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <value>builtin</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </enum>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:    </rng>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:    <filesystem supported='yes'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <enum name='driverType'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <value>path</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <value>handle</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <value>virtiofs</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </enum>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:    </filesystem>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:    <tpm supported='yes'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <enum name='model'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <value>tpm-tis</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <value>tpm-crb</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </enum>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <enum name='backendModel'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <value>emulator</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <value>external</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </enum>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <enum name='backendVersion'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <value>2.0</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </enum>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:    </tpm>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:    <redirdev supported='yes'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <enum name='bus'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <value>usb</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </enum>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:    </redirdev>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:    <channel supported='yes'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <enum name='type'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <value>pty</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <value>unix</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </enum>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:    </channel>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:    <crypto supported='yes'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <enum name='model'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <enum name='type'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <value>qemu</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </enum>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <enum name='backendModel'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <value>builtin</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </enum>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:    </crypto>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:    <interface supported='yes'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <enum name='backendType'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <value>default</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <value>passt</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </enum>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:    </interface>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:    <panic supported='yes'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <enum name='model'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <value>isa</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <value>hyperv</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </enum>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:    </panic>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:  </devices>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:  <features>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:    <gic supported='no'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:    <vmcoreinfo supported='yes'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:    <genid supported='yes'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:    <backingStoreInput supported='yes'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:    <backup supported='yes'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:    <async-teardown supported='yes'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:    <ps2 supported='yes'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:    <sev supported='no'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:    <sgx supported='no'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:    <hyperv supported='yes'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <enum name='features'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <value>relaxed</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <value>vapic</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <value>spinlocks</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <value>vpindex</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <value>runtime</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <value>synic</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <value>stimer</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <value>reset</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <value>vendor_id</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <value>frequencies</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <value>reenlightenment</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <value>tlbflush</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <value>ipi</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <value>avic</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <value>emsr_bitmap</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <value>xmm_input</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </enum>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:    </hyperv>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:    <launchSecurity supported='no'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:  </features>
Oct  1 09:29:55 np0005464214 nova_compute[260022]: </domainCapabilities>
Oct  1 09:29:55 np0005464214 nova_compute[260022]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037#033[00m
Oct  1 09:29:55 np0005464214 nova_compute[260022]: 2025-10-01 13:29:55.778 2 DEBUG nova.virt.libvirt.host [None req-26a6fe1d-c0dc-484f-b511-7a57f237dea9 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc:
Oct  1 09:29:55 np0005464214 nova_compute[260022]: <domainCapabilities>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:  <path>/usr/libexec/qemu-kvm</path>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:  <domain>kvm</domain>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:  <machine>pc-i440fx-rhel7.6.0</machine>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:  <arch>x86_64</arch>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:  <vcpu max='240'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:  <iothreads supported='yes'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:  <os supported='yes'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:    <enum name='firmware'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:    <loader supported='yes'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <enum name='type'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <value>rom</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <value>pflash</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </enum>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <enum name='readonly'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <value>yes</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <value>no</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </enum>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <enum name='secure'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <value>no</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </enum>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:    </loader>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:  </os>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:  <cpu>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:    <mode name='host-passthrough' supported='yes'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <enum name='hostPassthroughMigratable'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <value>on</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <value>off</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </enum>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:    </mode>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:    <mode name='maximum' supported='yes'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <enum name='maximumMigratable'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <value>on</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <value>off</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </enum>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:    </mode>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:    <mode name='host-model' supported='yes'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model fallback='forbid'>EPYC-Rome</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <vendor>AMD</vendor>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <maxphysaddr mode='passthrough' limit='40'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <feature policy='require' name='x2apic'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <feature policy='require' name='tsc-deadline'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <feature policy='require' name='hypervisor'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <feature policy='require' name='tsc_adjust'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <feature policy='require' name='spec-ctrl'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <feature policy='require' name='stibp'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <feature policy='require' name='arch-capabilities'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <feature policy='require' name='ssbd'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <feature policy='require' name='cmp_legacy'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <feature policy='require' name='overflow-recov'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <feature policy='require' name='succor'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <feature policy='require' name='ibrs'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <feature policy='require' name='amd-ssbd'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <feature policy='require' name='virt-ssbd'/>
Oct  1 09:29:55 np0005464214 ceph-osd[88455]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  1 09:29:55 np0005464214 ceph-osd[88455]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1200.0 total, 600.0 interval#012Cumulative writes: 5750 writes, 24K keys, 5750 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s#012Cumulative WAL: 5750 writes, 952 syncs, 6.04 writes per sync, written: 0.02 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 212 writes, 318 keys, 212 commit groups, 1.0 writes per commit group, ingest: 0.11 MB, 0.00 MB/s#012Interval WAL: 212 writes, 106 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.00              0.00         1    0.005       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.00              0.00         1    0.005       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.00              0.00         1    0.005       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.0 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55b6550e31f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.7e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.0 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55b6550e31f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.7e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.0 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_s
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <feature policy='require' name='lbrv'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <feature policy='require' name='tsc-scale'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <feature policy='require' name='vmcb-clean'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <feature policy='require' name='flushbyasid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <feature policy='require' name='pause-filter'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <feature policy='require' name='pfthreshold'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <feature policy='require' name='svme-addr-chk'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <feature policy='require' name='lfence-always-serializing'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <feature policy='require' name='rdctl-no'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <feature policy='require' name='mds-no'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <feature policy='require' name='pschange-mc-no'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <feature policy='require' name='gds-no'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <feature policy='require' name='rfds-no'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <feature policy='disable' name='xsaves'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:    </mode>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:    <mode name='custom' supported='yes'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='Broadwell'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='erms'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='hle'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='invpcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='rtm'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='Broadwell-IBRS'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='erms'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='hle'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='invpcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='rtm'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='Broadwell-noTSX'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='erms'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='invpcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='Broadwell-noTSX-IBRS'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='erms'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='invpcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='Broadwell-v1'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='erms'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='hle'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='invpcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='rtm'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='Broadwell-v2'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='erms'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='invpcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='Broadwell-v3'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='erms'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='hle'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='invpcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='rtm'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='Broadwell-v4'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='erms'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='invpcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='Cascadelake-Server'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512bw'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512cd'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512dq'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512f'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vl'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vnni'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='erms'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='hle'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='invpcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pku'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='rtm'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='Cascadelake-Server-noTSX'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512bw'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512cd'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512dq'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512f'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vl'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vnni'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='erms'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='ibrs-all'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='invpcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pku'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='Cascadelake-Server-v1'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512bw'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512cd'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512dq'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512f'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vl'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vnni'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='erms'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='hle'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='invpcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pku'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='rtm'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='Cascadelake-Server-v2'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512bw'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512cd'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512dq'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512f'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vl'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vnni'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='erms'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='hle'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='ibrs-all'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='invpcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pku'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='rtm'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='Cascadelake-Server-v3'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512bw'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512cd'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512dq'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512f'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vl'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vnni'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='erms'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='ibrs-all'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='invpcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pku'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='Cascadelake-Server-v4'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512bw'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512cd'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512dq'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512f'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vl'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vnni'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='erms'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='ibrs-all'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='invpcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pku'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='Cascadelake-Server-v5'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512bw'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512cd'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512dq'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512f'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vl'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vnni'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='erms'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='ibrs-all'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='invpcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pku'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='xsaves'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='Cooperlake'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512-bf16'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512bw'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512cd'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512dq'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512f'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vl'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vnni'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='erms'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='hle'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='ibrs-all'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='invpcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pku'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='rtm'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='taa-no'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='Cooperlake-v1'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512-bf16'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512bw'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512cd'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512dq'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512f'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vl'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vnni'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='erms'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='hle'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='ibrs-all'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='invpcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pku'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='rtm'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='taa-no'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='Cooperlake-v2'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512-bf16'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512bw'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512cd'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512dq'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512f'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vl'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vnni'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='erms'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='hle'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='ibrs-all'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='invpcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pku'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='rtm'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='taa-no'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='xsaves'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='Denverton'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='erms'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='mpx'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='Denverton-v1'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='erms'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='mpx'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='Denverton-v2'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='erms'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='Denverton-v3'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='erms'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='xsaves'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='Dhyana-v2'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='xsaves'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='EPYC-Genoa'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='amd-psfd'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='auto-ibrs'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512-bf16'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512-vpopcntdq'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512bitalg'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512bw'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512cd'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512dq'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512f'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512ifma'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vbmi'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vbmi2'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vl'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vnni'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='erms'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='fsrm'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='gfni'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='invpcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='la57'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='no-nested-data-bp'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='null-sel-clr-base'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pku'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='stibp-always-on'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='vaes'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='vpclmulqdq'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='xsaves'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='EPYC-Genoa-v1'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='amd-psfd'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='auto-ibrs'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512-bf16'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512-vpopcntdq'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512bitalg'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512bw'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512cd'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512dq'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512f'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512ifma'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vbmi'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vbmi2'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vl'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vnni'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='erms'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='fsrm'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='gfni'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='invpcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='la57'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='no-nested-data-bp'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='null-sel-clr-base'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pku'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='stibp-always-on'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='vaes'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='vpclmulqdq'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='xsaves'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='EPYC-Milan'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='erms'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='fsrm'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='invpcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pku'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='xsaves'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='EPYC-Milan-v1'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='erms'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='fsrm'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='invpcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pku'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='xsaves'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='EPYC-Milan-v2'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='amd-psfd'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='erms'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='fsrm'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='invpcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='no-nested-data-bp'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='null-sel-clr-base'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pku'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='stibp-always-on'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='vaes'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='vpclmulqdq'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='xsaves'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='EPYC-Rome'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='xsaves'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='EPYC-Rome-v1'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='xsaves'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='EPYC-Rome-v2'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='xsaves'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='EPYC-Rome-v3'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='xsaves'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='EPYC-v3'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='xsaves'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='EPYC-v4'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='xsaves'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='GraniteRapids'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='amx-bf16'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='amx-fp16'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='amx-int8'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='amx-tile'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx-vnni'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512-bf16'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512-fp16'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512-vpopcntdq'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512bitalg'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512bw'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512cd'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512dq'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512f'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512ifma'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vbmi'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vbmi2'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vl'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vnni'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='bus-lock-detect'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='erms'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='fbsdp-no'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='fsrc'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='fsrm'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='fsrs'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='fzrm'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='gfni'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='hle'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='ibrs-all'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='invpcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='la57'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='mcdt-no'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pbrsb-no'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pku'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='prefetchiti'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='psdp-no'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='rtm'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='sbdr-ssdp-no'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='serialize'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='taa-no'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='tsx-ldtrk'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='vaes'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='vpclmulqdq'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='xfd'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='xsaves'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='GraniteRapids-v1'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='amx-bf16'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='amx-fp16'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='amx-int8'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='amx-tile'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx-vnni'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512-bf16'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512-fp16'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512-vpopcntdq'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512bitalg'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512bw'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512cd'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512dq'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512f'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512ifma'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vbmi'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vbmi2'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vl'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vnni'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='bus-lock-detect'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='erms'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='fbsdp-no'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='fsrc'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='fsrm'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='fsrs'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='fzrm'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='gfni'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='hle'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='ibrs-all'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='invpcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='la57'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='mcdt-no'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pbrsb-no'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pku'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='prefetchiti'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='psdp-no'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='rtm'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='sbdr-ssdp-no'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='serialize'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='taa-no'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='tsx-ldtrk'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='vaes'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='vpclmulqdq'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='xfd'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='xsaves'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='GraniteRapids-v2'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='amx-bf16'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='amx-fp16'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='amx-int8'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='amx-tile'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx-vnni'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx10'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx10-128'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx10-256'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx10-512'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512-bf16'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512-fp16'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512-vpopcntdq'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512bitalg'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512bw'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512cd'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512dq'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512f'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512ifma'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vbmi'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vbmi2'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vl'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vnni'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='bus-lock-detect'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='cldemote'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='erms'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='fbsdp-no'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='fsrc'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='fsrm'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='fsrs'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='fzrm'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='gfni'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='hle'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='ibrs-all'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='invpcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='la57'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='mcdt-no'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='movdir64b'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='movdiri'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pbrsb-no'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pku'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='prefetchiti'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='psdp-no'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='rtm'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='sbdr-ssdp-no'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='serialize'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='ss'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='taa-no'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='tsx-ldtrk'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='vaes'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='vpclmulqdq'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='xfd'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='xsaves'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='Haswell'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='erms'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='hle'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='invpcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='rtm'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='Haswell-IBRS'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='erms'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='hle'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='invpcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='rtm'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='Haswell-noTSX'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='erms'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='invpcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='Haswell-noTSX-IBRS'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='erms'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='invpcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='Haswell-v1'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='erms'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='hle'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='invpcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='rtm'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='Haswell-v2'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='erms'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='invpcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='Haswell-v3'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='erms'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='hle'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='invpcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='rtm'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='Haswell-v4'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='erms'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='invpcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='Icelake-Server'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512-vpopcntdq'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512bitalg'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512bw'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512cd'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512dq'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512f'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vbmi'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vbmi2'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vl'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vnni'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='erms'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='gfni'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='hle'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='invpcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='la57'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pku'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='rtm'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='vaes'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='vpclmulqdq'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='Icelake-Server-noTSX'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512-vpopcntdq'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512bitalg'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512bw'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512cd'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512dq'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512f'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vbmi'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vbmi2'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vl'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vnni'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='erms'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='gfni'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='invpcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='la57'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pku'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='vaes'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='vpclmulqdq'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='Icelake-Server-v1'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512-vpopcntdq'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512bitalg'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512bw'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512cd'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512dq'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512f'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vbmi'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vbmi2'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vl'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vnni'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='erms'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='gfni'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='hle'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='invpcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='la57'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pku'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='rtm'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='vaes'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='vpclmulqdq'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='Icelake-Server-v2'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512-vpopcntdq'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512bitalg'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512bw'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512cd'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512dq'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512f'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vbmi'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vbmi2'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vl'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vnni'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='erms'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='gfni'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='invpcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='la57'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pku'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='vaes'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='vpclmulqdq'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='Icelake-Server-v3'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512-vpopcntdq'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512bitalg'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512bw'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512cd'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512dq'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512f'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vbmi'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vbmi2'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vl'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vnni'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='erms'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='gfni'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='ibrs-all'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='invpcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='la57'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pku'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='taa-no'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='vaes'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='vpclmulqdq'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='Icelake-Server-v4'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512-vpopcntdq'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512bitalg'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512bw'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512cd'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512dq'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512f'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512ifma'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vbmi'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vbmi2'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vl'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vnni'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='erms'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='fsrm'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='gfni'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='ibrs-all'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='invpcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='la57'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pku'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='taa-no'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='vaes'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='vpclmulqdq'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='Icelake-Server-v5'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512-vpopcntdq'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512bitalg'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512bw'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512cd'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512dq'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512f'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512ifma'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vbmi'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vbmi2'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vl'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vnni'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='erms'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='fsrm'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='gfni'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='ibrs-all'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='invpcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='la57'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pku'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='taa-no'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='vaes'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='vpclmulqdq'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='xsaves'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='Icelake-Server-v6'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512-vpopcntdq'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512bitalg'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512bw'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512cd'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512dq'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512f'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512ifma'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vbmi'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vbmi2'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vl'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vnni'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='erms'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='fsrm'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='gfni'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='ibrs-all'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='invpcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='la57'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pku'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='taa-no'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='vaes'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='vpclmulqdq'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='xsaves'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='Icelake-Server-v7'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512-vpopcntdq'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512bitalg'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512bw'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512cd'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512dq'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512f'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512ifma'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vbmi'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vbmi2'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vl'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vnni'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='erms'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='fsrm'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='gfni'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='hle'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='ibrs-all'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='invpcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='la57'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pku'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='rtm'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='taa-no'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='vaes'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='vpclmulqdq'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='xsaves'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='IvyBridge'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='erms'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='IvyBridge-IBRS'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='erms'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='IvyBridge-v1'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='erms'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='IvyBridge-v2'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='erms'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='KnightsMill'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512-4fmaps'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512-4vnniw'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512-vpopcntdq'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512cd'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512er'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512f'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512pf'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='erms'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='ss'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='KnightsMill-v1'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512-4fmaps'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512-4vnniw'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512-vpopcntdq'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512cd'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512er'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512f'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512pf'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='erms'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='ss'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='Opteron_G4'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='fma4'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='xop'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='Opteron_G4-v1'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='fma4'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='xop'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='Opteron_G5'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='fma4'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='tbm'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='xop'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='Opteron_G5-v1'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='fma4'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='tbm'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='xop'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='SapphireRapids'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='amx-bf16'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='amx-int8'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='amx-tile'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx-vnni'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512-bf16'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512-fp16'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512-vpopcntdq'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512bitalg'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512bw'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512cd'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512dq'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512f'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512ifma'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vbmi'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vbmi2'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vl'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vnni'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='bus-lock-detect'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='erms'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='fsrc'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='fsrm'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='fsrs'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='fzrm'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='gfni'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='hle'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='ibrs-all'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='invpcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='la57'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pku'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='rtm'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='serialize'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='taa-no'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='tsx-ldtrk'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='vaes'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='vpclmulqdq'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='xfd'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='xsaves'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='SapphireRapids-v1'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='amx-bf16'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='amx-int8'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='amx-tile'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx-vnni'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512-bf16'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512-fp16'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512-vpopcntdq'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512bitalg'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512bw'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512cd'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512dq'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512f'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512ifma'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vbmi'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vbmi2'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vl'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vnni'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='bus-lock-detect'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='erms'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='fsrc'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='fsrm'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='fsrs'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='fzrm'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='gfni'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='hle'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='ibrs-all'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='invpcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='la57'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pku'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='rtm'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='serialize'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='taa-no'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='tsx-ldtrk'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='vaes'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='vpclmulqdq'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='xfd'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='xsaves'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='SapphireRapids-v2'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='amx-bf16'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='amx-int8'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='amx-tile'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx-vnni'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512-bf16'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512-fp16'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512-vpopcntdq'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512bitalg'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512bw'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512cd'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512dq'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512f'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512ifma'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vbmi'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vbmi2'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vl'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vnni'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='bus-lock-detect'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='erms'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='fbsdp-no'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='fsrc'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='fsrm'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='fsrs'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='fzrm'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='gfni'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='hle'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='ibrs-all'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='invpcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='la57'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pku'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='psdp-no'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='rtm'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='sbdr-ssdp-no'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='serialize'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='taa-no'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='tsx-ldtrk'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='vaes'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='vpclmulqdq'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='xfd'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='xsaves'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='SapphireRapids-v3'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='amx-bf16'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='amx-int8'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='amx-tile'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx-vnni'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512-bf16'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512-fp16'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512-vpopcntdq'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512bitalg'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512bw'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512cd'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512dq'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512f'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512ifma'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vbmi'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vbmi2'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vl'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vnni'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='bus-lock-detect'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='cldemote'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='erms'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='fbsdp-no'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='fsrc'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='fsrm'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='fsrs'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='fzrm'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='gfni'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='hle'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='ibrs-all'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='invpcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='la57'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='movdir64b'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='movdiri'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pku'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='psdp-no'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='rtm'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='sbdr-ssdp-no'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='serialize'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='ss'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='taa-no'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='tsx-ldtrk'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='vaes'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='vpclmulqdq'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='xfd'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='xsaves'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='SierraForest'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx-ifma'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx-ne-convert'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx-vnni'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx-vnni-int8'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='bus-lock-detect'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='cmpccxadd'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='erms'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='fbsdp-no'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='fsrm'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='fsrs'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='gfni'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='ibrs-all'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='invpcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='mcdt-no'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pbrsb-no'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pku'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='psdp-no'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='sbdr-ssdp-no'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='serialize'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='vaes'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='vpclmulqdq'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='xsaves'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='SierraForest-v1'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx-ifma'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx-ne-convert'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx-vnni'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx-vnni-int8'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='bus-lock-detect'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='cmpccxadd'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='erms'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='fbsdp-no'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='fsrm'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='fsrs'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='gfni'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='ibrs-all'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='invpcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='mcdt-no'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pbrsb-no'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pku'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='psdp-no'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='sbdr-ssdp-no'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='serialize'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='vaes'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='vpclmulqdq'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='xsaves'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='Skylake-Client'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='erms'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='hle'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='invpcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='rtm'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='Skylake-Client-IBRS'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='erms'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='hle'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='invpcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='rtm'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='erms'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='invpcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='Skylake-Client-v1'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='erms'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='hle'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='invpcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='rtm'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='Skylake-Client-v2'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='erms'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='hle'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='invpcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='rtm'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='Skylake-Client-v3'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='erms'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='invpcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='Skylake-Client-v4'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='erms'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='invpcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='xsaves'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='Skylake-Server'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512bw'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512cd'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512dq'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512f'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vl'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='erms'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='hle'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='invpcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pku'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='rtm'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='Skylake-Server-IBRS'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512bw'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512cd'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512dq'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512f'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vl'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='erms'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='hle'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='invpcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pku'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='rtm'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512bw'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512cd'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512dq'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512f'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vl'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='erms'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='invpcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pku'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='Skylake-Server-v1'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512bw'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512cd'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512dq'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512f'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vl'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='erms'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='hle'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='invpcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pku'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='rtm'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='Skylake-Server-v2'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512bw'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512cd'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512dq'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512f'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vl'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='erms'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='hle'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='invpcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pku'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='rtm'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='Skylake-Server-v3'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512bw'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512cd'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512dq'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512f'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vl'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='erms'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='invpcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pku'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='Skylake-Server-v4'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512bw'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512cd'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512dq'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512f'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vl'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='erms'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='invpcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pku'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='Skylake-Server-v5'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512bw'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512cd'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512dq'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512f'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='avx512vl'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='erms'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='invpcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pcid'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='pku'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='xsaves'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='Snowridge'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='cldemote'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='core-capability'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='erms'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='gfni'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='movdir64b'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='movdiri'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='mpx'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='split-lock-detect'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='Snowridge-v1'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='cldemote'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='core-capability'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='erms'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='gfni'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='movdir64b'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='movdiri'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='mpx'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='split-lock-detect'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='Snowridge-v2'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='cldemote'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='core-capability'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='erms'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='gfni'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='movdir64b'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='movdiri'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='split-lock-detect'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='Snowridge-v3'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='cldemote'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='core-capability'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='erms'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='gfni'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='movdir64b'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='movdiri'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='split-lock-detect'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='xsaves'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='Snowridge-v4'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='cldemote'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='erms'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='gfni'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='movdir64b'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='movdiri'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='xsaves'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='athlon'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='3dnow'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='3dnowext'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='athlon-v1'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='3dnow'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='3dnowext'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='core2duo'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='ss'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='core2duo-v1'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='ss'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='coreduo'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='ss'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='coreduo-v1'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='ss'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='n270'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='ss'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='n270-v1'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='ss'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='phenom'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='3dnow'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='3dnowext'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <blockers model='phenom-v1'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='3dnow'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <feature name='3dnowext'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </blockers>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:    </mode>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:  </cpu>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:  <memoryBacking supported='yes'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:    <enum name='sourceType'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <value>file</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <value>anonymous</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <value>memfd</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:    </enum>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:  </memoryBacking>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:  <devices>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:    <disk supported='yes'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <enum name='diskDevice'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <value>disk</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <value>cdrom</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <value>floppy</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <value>lun</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </enum>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <enum name='bus'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <value>ide</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <value>fdc</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <value>scsi</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <value>virtio</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <value>usb</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <value>sata</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </enum>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <enum name='model'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <value>virtio</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <value>virtio-transitional</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <value>virtio-non-transitional</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </enum>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:    </disk>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:    <graphics supported='yes'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <enum name='type'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <value>vnc</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <value>egl-headless</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <value>dbus</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </enum>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:    </graphics>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:    <video supported='yes'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <enum name='modelType'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <value>vga</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <value>cirrus</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <value>virtio</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <value>none</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <value>bochs</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <value>ramfb</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </enum>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:    </video>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:    <hostdev supported='yes'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <enum name='mode'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <value>subsystem</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </enum>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <enum name='startupPolicy'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <value>default</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <value>mandatory</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <value>requisite</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <value>optional</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </enum>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <enum name='subsysType'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <value>usb</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <value>pci</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <value>scsi</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </enum>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <enum name='capsType'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <enum name='pciBackend'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:    </hostdev>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:    <rng supported='yes'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <enum name='model'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <value>virtio</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <value>virtio-transitional</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <value>virtio-non-transitional</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </enum>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <enum name='backendModel'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <value>random</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <value>egd</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <value>builtin</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </enum>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:    </rng>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:    <filesystem supported='yes'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <enum name='driverType'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <value>path</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <value>handle</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <value>virtiofs</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </enum>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:    </filesystem>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:    <tpm supported='yes'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <enum name='model'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <value>tpm-tis</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <value>tpm-crb</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </enum>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <enum name='backendModel'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <value>emulator</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <value>external</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </enum>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <enum name='backendVersion'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <value>2.0</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </enum>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:    </tpm>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:    <redirdev supported='yes'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <enum name='bus'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <value>usb</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </enum>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:    </redirdev>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:    <channel supported='yes'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <enum name='type'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <value>pty</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <value>unix</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </enum>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:    </channel>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:    <crypto supported='yes'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <enum name='model'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <enum name='type'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <value>qemu</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </enum>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <enum name='backendModel'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <value>builtin</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </enum>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:    </crypto>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:    <interface supported='yes'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <enum name='backendType'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <value>default</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <value>passt</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </enum>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:    </interface>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:    <panic supported='yes'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <enum name='model'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <value>isa</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <value>hyperv</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </enum>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:    </panic>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:  </devices>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:  <features>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:    <gic supported='no'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:    <vmcoreinfo supported='yes'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:    <genid supported='yes'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:    <backingStoreInput supported='yes'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:    <backup supported='yes'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:    <async-teardown supported='yes'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:    <ps2 supported='yes'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:    <sev supported='no'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:    <sgx supported='no'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:    <hyperv supported='yes'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      <enum name='features'>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <value>relaxed</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <value>vapic</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <value>spinlocks</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <value>vpindex</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <value>runtime</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <value>synic</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <value>stimer</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <value>reset</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <value>vendor_id</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <value>frequencies</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <value>reenlightenment</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <value>tlbflush</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <value>ipi</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <value>avic</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <value>emsr_bitmap</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:        <value>xmm_input</value>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:      </enum>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:    </hyperv>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:    <launchSecurity supported='no'/>
Oct  1 09:29:55 np0005464214 nova_compute[260022]:  </features>
Oct  1 09:29:55 np0005464214 nova_compute[260022]: </domainCapabilities>
Oct  1 09:29:55 np0005464214 nova_compute[260022]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037#033[00m
Oct  1 09:29:55 np0005464214 nova_compute[260022]: 2025-10-01 13:29:55.836 2 DEBUG nova.virt.libvirt.host [None req-26a6fe1d-c0dc-484f-b511-7a57f237dea9 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782#033[00m
Oct  1 09:29:55 np0005464214 nova_compute[260022]: 2025-10-01 13:29:55.837 2 INFO nova.virt.libvirt.host [None req-26a6fe1d-c0dc-484f-b511-7a57f237dea9 - - - - - -] Secure Boot support detected#033[00m
Oct  1 09:29:55 np0005464214 nova_compute[260022]: 2025-10-01 13:29:55.839 2 INFO nova.virt.libvirt.driver [None req-26a6fe1d-c0dc-484f-b511-7a57f237dea9 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.#033[00m
Oct  1 09:29:55 np0005464214 nova_compute[260022]: 2025-10-01 13:29:55.848 2 DEBUG nova.virt.libvirt.driver [None req-26a6fe1d-c0dc-484f-b511-7a57f237dea9 - - - - - -] Enabling emulated TPM support _check_vtpm_support /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1097#033[00m
Oct  1 09:29:55 np0005464214 nova_compute[260022]: 2025-10-01 13:29:55.956 2 INFO nova.virt.node [None req-26a6fe1d-c0dc-484f-b511-7a57f237dea9 - - - - - -] Determined node identity c1b9017d-7e6f-44ea-9ee2-bc19313d736f from /var/lib/nova/compute_id#033[00m
Oct  1 09:29:56 np0005464214 nova_compute[260022]: 2025-10-01 13:29:56.094 2 WARNING nova.compute.manager [None req-26a6fe1d-c0dc-484f-b511-7a57f237dea9 - - - - - -] Compute nodes ['c1b9017d-7e6f-44ea-9ee2-bc19313d736f'] for host compute-0.ctlplane.example.com were not found in the database. If this is the first time this service is starting on this host, then you can ignore this warning.#033[00m
Oct  1 09:29:56 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v759: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:29:56 np0005464214 nova_compute[260022]: 2025-10-01 13:29:56.544 2 INFO nova.compute.manager [None req-26a6fe1d-c0dc-484f-b511-7a57f237dea9 - - - - - -] Looking for unclaimed instances stuck in BUILDING status for nodes managed by this host#033[00m
Oct  1 09:29:56 np0005464214 nova_compute[260022]: 2025-10-01 13:29:56.956 2 WARNING nova.compute.manager [None req-26a6fe1d-c0dc-484f-b511-7a57f237dea9 - - - - - -] No compute node record found for host compute-0.ctlplane.example.com. If this is the first time this service is starting on this host, then you can ignore this warning.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.#033[00m
Oct  1 09:29:56 np0005464214 nova_compute[260022]: 2025-10-01 13:29:56.957 2 DEBUG oslo_concurrency.lockutils [None req-26a6fe1d-c0dc-484f-b511-7a57f237dea9 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 09:29:56 np0005464214 nova_compute[260022]: 2025-10-01 13:29:56.957 2 DEBUG oslo_concurrency.lockutils [None req-26a6fe1d-c0dc-484f-b511-7a57f237dea9 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 09:29:56 np0005464214 nova_compute[260022]: 2025-10-01 13:29:56.957 2 DEBUG oslo_concurrency.lockutils [None req-26a6fe1d-c0dc-484f-b511-7a57f237dea9 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 09:29:56 np0005464214 nova_compute[260022]: 2025-10-01 13:29:56.957 2 DEBUG nova.compute.resource_tracker [None req-26a6fe1d-c0dc-484f-b511-7a57f237dea9 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  1 09:29:56 np0005464214 nova_compute[260022]: 2025-10-01 13:29:56.957 2 DEBUG oslo_concurrency.processutils [None req-26a6fe1d-c0dc-484f-b511-7a57f237dea9 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 09:29:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] _maybe_adjust
Oct  1 09:29:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:29:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  1 09:29:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:29:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 09:29:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:29:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 09:29:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:29:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 09:29:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:29:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 09:29:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:29:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct  1 09:29:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:29:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 09:29:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:29:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  1 09:29:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:29:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  1 09:29:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:29:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 09:29:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:29:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  1 09:29:57 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  1 09:29:57 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/814815196' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  1 09:29:57 np0005464214 nova_compute[260022]: 2025-10-01 13:29:57.411 2 DEBUG oslo_concurrency.processutils [None req-26a6fe1d-c0dc-484f-b511-7a57f237dea9 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.453s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 09:29:57 np0005464214 systemd[1]: Starting libvirt nodedev daemon...
Oct  1 09:29:57 np0005464214 systemd[1]: Started libvirt nodedev daemon.
Oct  1 09:29:57 np0005464214 nova_compute[260022]: 2025-10-01 13:29:57.935 2 WARNING nova.virt.libvirt.driver [None req-26a6fe1d-c0dc-484f-b511-7a57f237dea9 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  1 09:29:57 np0005464214 nova_compute[260022]: 2025-10-01 13:29:57.936 2 DEBUG nova.compute.resource_tracker [None req-26a6fe1d-c0dc-484f-b511-7a57f237dea9 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5208MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  1 09:29:57 np0005464214 nova_compute[260022]: 2025-10-01 13:29:57.936 2 DEBUG oslo_concurrency.lockutils [None req-26a6fe1d-c0dc-484f-b511-7a57f237dea9 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 09:29:57 np0005464214 nova_compute[260022]: 2025-10-01 13:29:57.937 2 DEBUG oslo_concurrency.lockutils [None req-26a6fe1d-c0dc-484f-b511-7a57f237dea9 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 09:29:58 np0005464214 nova_compute[260022]: 2025-10-01 13:29:58.047 2 WARNING nova.compute.resource_tracker [None req-26a6fe1d-c0dc-484f-b511-7a57f237dea9 - - - - - -] No compute node record for compute-0.ctlplane.example.com:c1b9017d-7e6f-44ea-9ee2-bc19313d736f: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host c1b9017d-7e6f-44ea-9ee2-bc19313d736f could not be found.#033[00m
Oct  1 09:29:58 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v760: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:29:58 np0005464214 nova_compute[260022]: 2025-10-01 13:29:58.291 2 INFO nova.compute.resource_tracker [None req-26a6fe1d-c0dc-484f-b511-7a57f237dea9 - - - - - -] Compute node record created for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com with uuid: c1b9017d-7e6f-44ea-9ee2-bc19313d736f#033[00m
Oct  1 09:29:58 np0005464214 podman[260437]: 2025-10-01 13:29:58.523620633 +0000 UTC m=+0.075795811 container health_status c13c85560319b246b9406aed1b6b3015b2c424d3ffff37d0301b6634f5976b0d (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20250923)
Oct  1 09:29:58 np0005464214 podman[260438]: 2025-10-01 13:29:58.524325325 +0000 UTC m=+0.074017574 container health_status dfa509645741d50db256c392f2c7e50ef8a1653f296eb853aefbe3d2ba4746c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250923, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct  1 09:29:58 np0005464214 podman[260436]: 2025-10-01 13:29:58.560951519 +0000 UTC m=+0.114814592 container health_status 583cde33fa111e3f81ff9a7adf51b7bee43cd123d54d60383b2d474b3b5066ad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20250923, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=36bccb96575468ec919301205d8daa2c)
Oct  1 09:29:58 np0005464214 podman[260519]: 2025-10-01 13:29:58.652543593 +0000 UTC m=+0.083352933 container health_status a1dc94033b3cdaf0c1939938a446bc6911aa0e2bf0fa0c22c3e618390e8d96e1 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=multipathd, org.label-schema.license=GPLv2, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20250923, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c)
Oct  1 09:29:58 np0005464214 nova_compute[260022]: 2025-10-01 13:29:58.754 2 DEBUG nova.compute.resource_tracker [None req-26a6fe1d-c0dc-484f-b511-7a57f237dea9 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  1 09:29:58 np0005464214 nova_compute[260022]: 2025-10-01 13:29:58.755 2 DEBUG nova.compute.resource_tracker [None req-26a6fe1d-c0dc-484f-b511-7a57f237dea9 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  1 09:29:59 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  1 09:29:59 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  1 09:29:59 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  1 09:29:59 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  1 09:29:59 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  1 09:29:59 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:29:59 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev fff1b8d0-0beb-43c4-830a-106289ae127e does not exist
Oct  1 09:29:59 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev 20bc2ca1-9efb-4e9d-b8db-16b22bb3d8f7 does not exist
Oct  1 09:29:59 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev 7d763adf-2e49-42d2-b485-4a38e14b8721 does not exist
Oct  1 09:29:59 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  1 09:29:59 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  1 09:29:59 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  1 09:29:59 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  1 09:29:59 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  1 09:29:59 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  1 09:29:59 np0005464214 nova_compute[260022]: 2025-10-01 13:29:59.681 2 INFO nova.scheduler.client.report [None req-26a6fe1d-c0dc-484f-b511-7a57f237dea9 - - - - - -] [req-b3b32f97-81e7-470b-8239-b0299d55b12e] Created resource provider record via placement API for resource provider with UUID c1b9017d-7e6f-44ea-9ee2-bc19313d736f and name compute-0.ctlplane.example.com.#033[00m
Oct  1 09:29:59 np0005464214 podman[260760]: 2025-10-01 13:29:59.810587777 +0000 UTC m=+0.043147463 container create 315e4987f51d9f005a804c20b11f204615edbfe42af2766a52d47b8986cbe716 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_haslett, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct  1 09:29:59 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:29:59 np0005464214 systemd[1]: Started libpod-conmon-315e4987f51d9f005a804c20b11f204615edbfe42af2766a52d47b8986cbe716.scope.
Oct  1 09:29:59 np0005464214 podman[260760]: 2025-10-01 13:29:59.790407935 +0000 UTC m=+0.022967651 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 09:29:59 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:29:59 np0005464214 podman[260760]: 2025-10-01 13:29:59.954356749 +0000 UTC m=+0.186916525 container init 315e4987f51d9f005a804c20b11f204615edbfe42af2766a52d47b8986cbe716 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_haslett, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  1 09:29:59 np0005464214 podman[260760]: 2025-10-01 13:29:59.964892814 +0000 UTC m=+0.197452540 container start 315e4987f51d9f005a804c20b11f204615edbfe42af2766a52d47b8986cbe716 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_haslett, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 09:29:59 np0005464214 keen_haslett[260776]: 167 167
Oct  1 09:29:59 np0005464214 systemd[1]: libpod-315e4987f51d9f005a804c20b11f204615edbfe42af2766a52d47b8986cbe716.scope: Deactivated successfully.
Oct  1 09:29:59 np0005464214 conmon[260776]: conmon 315e4987f51d9f005a80 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-315e4987f51d9f005a804c20b11f204615edbfe42af2766a52d47b8986cbe716.scope/container/memory.events
Oct  1 09:30:00 np0005464214 nova_compute[260022]: 2025-10-01 13:30:00.073 2 DEBUG oslo_concurrency.processutils [None req-26a6fe1d-c0dc-484f-b511-7a57f237dea9 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 09:30:00 np0005464214 podman[260760]: 2025-10-01 13:30:00.211589619 +0000 UTC m=+0.444149345 container attach 315e4987f51d9f005a804c20b11f204615edbfe42af2766a52d47b8986cbe716 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_haslett, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3)
Oct  1 09:30:00 np0005464214 podman[260760]: 2025-10-01 13:30:00.212999693 +0000 UTC m=+0.445559389 container died 315e4987f51d9f005a804c20b11f204615edbfe42af2766a52d47b8986cbe716 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_haslett, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 09:30:00 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v761: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:30:00 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  1 09:30:00 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:30:00 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  1 09:30:00 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  1 09:30:00 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1441398135' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  1 09:30:00 np0005464214 nova_compute[260022]: 2025-10-01 13:30:00.606 2 DEBUG oslo_concurrency.processutils [None req-26a6fe1d-c0dc-484f-b511-7a57f237dea9 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.533s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 09:30:00 np0005464214 nova_compute[260022]: 2025-10-01 13:30:00.613 2 DEBUG nova.virt.libvirt.host [None req-26a6fe1d-c0dc-484f-b511-7a57f237dea9 - - - - - -] /sys/module/kvm_amd/parameters/sev contains [N
Oct  1 09:30:00 np0005464214 nova_compute[260022]: ] _kernel_supports_amd_sev /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1803#033[00m
Oct  1 09:30:00 np0005464214 nova_compute[260022]: 2025-10-01 13:30:00.614 2 INFO nova.virt.libvirt.host [None req-26a6fe1d-c0dc-484f-b511-7a57f237dea9 - - - - - -] kernel doesn't support AMD SEV#033[00m
Oct  1 09:30:00 np0005464214 nova_compute[260022]: 2025-10-01 13:30:00.615 2 DEBUG nova.compute.provider_tree [None req-26a6fe1d-c0dc-484f-b511-7a57f237dea9 - - - - - -] Updating inventory in ProviderTree for provider c1b9017d-7e6f-44ea-9ee2-bc19313d736f with inventory: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Oct  1 09:30:00 np0005464214 nova_compute[260022]: 2025-10-01 13:30:00.615 2 DEBUG nova.virt.libvirt.driver [None req-26a6fe1d-c0dc-484f-b511-7a57f237dea9 - - - - - -] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  1 09:30:00 np0005464214 systemd[1]: var-lib-containers-storage-overlay-ecb0c2544377ea915647b48157c7aefaf4c1f7ad9664a98caddfdd2d0a779015-merged.mount: Deactivated successfully.
Oct  1 09:30:00 np0005464214 ceph-osd[89484]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  1 09:30:00 np0005464214 ceph-osd[89484]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1200.1 total, 600.0 interval#012Cumulative writes: 6974 writes, 28K keys, 6974 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s#012Cumulative WAL: 6974 writes, 1320 syncs, 5.28 writes per sync, written: 0.02 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 180 writes, 271 keys, 180 commit groups, 1.0 writes per commit group, ingest: 0.09 MB, 0.00 MB/s#012Interval WAL: 180 writes, 90 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55f3dbe0d1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.8e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55f3dbe0d1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.8e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_s
Oct  1 09:30:00 np0005464214 nova_compute[260022]: 2025-10-01 13:30:00.861 2 DEBUG nova.scheduler.client.report [None req-26a6fe1d-c0dc-484f-b511-7a57f237dea9 - - - - - -] Updated inventory for provider c1b9017d-7e6f-44ea-9ee2-bc19313d736f with generation 0 in Placement from set_inventory_for_provider using data: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:957#033[00m
Oct  1 09:30:00 np0005464214 nova_compute[260022]: 2025-10-01 13:30:00.862 2 DEBUG nova.compute.provider_tree [None req-26a6fe1d-c0dc-484f-b511-7a57f237dea9 - - - - - -] Updating resource provider c1b9017d-7e6f-44ea-9ee2-bc19313d736f generation from 0 to 1 during operation: update_inventory _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164#033[00m
Oct  1 09:30:00 np0005464214 nova_compute[260022]: 2025-10-01 13:30:00.862 2 DEBUG nova.compute.provider_tree [None req-26a6fe1d-c0dc-484f-b511-7a57f237dea9 - - - - - -] Updating inventory in ProviderTree for provider c1b9017d-7e6f-44ea-9ee2-bc19313d736f with inventory: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Oct  1 09:30:01 np0005464214 podman[260760]: 2025-10-01 13:30:01.149333339 +0000 UTC m=+1.381893065 container remove 315e4987f51d9f005a804c20b11f204615edbfe42af2766a52d47b8986cbe716 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_haslett, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Oct  1 09:30:01 np0005464214 systemd[1]: libpod-conmon-315e4987f51d9f005a804c20b11f204615edbfe42af2766a52d47b8986cbe716.scope: Deactivated successfully.
Oct  1 09:30:01 np0005464214 nova_compute[260022]: 2025-10-01 13:30:01.177 2 DEBUG nova.compute.provider_tree [None req-26a6fe1d-c0dc-484f-b511-7a57f237dea9 - - - - - -] Updating resource provider c1b9017d-7e6f-44ea-9ee2-bc19313d736f generation from 1 to 2 during operation: update_traits _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164#033[00m
Oct  1 09:30:01 np0005464214 nova_compute[260022]: 2025-10-01 13:30:01.243 2 DEBUG nova.compute.resource_tracker [None req-26a6fe1d-c0dc-484f-b511-7a57f237dea9 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  1 09:30:01 np0005464214 nova_compute[260022]: 2025-10-01 13:30:01.251 2 DEBUG oslo_concurrency.lockutils [None req-26a6fe1d-c0dc-484f-b511-7a57f237dea9 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 3.314s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 09:30:01 np0005464214 nova_compute[260022]: 2025-10-01 13:30:01.251 2 DEBUG nova.service [None req-26a6fe1d-c0dc-484f-b511-7a57f237dea9 - - - - - -] Creating RPC server for service compute start /usr/lib/python3.9/site-packages/nova/service.py:182#033[00m
Oct  1 09:30:01 np0005464214 podman[260823]: 2025-10-01 13:30:01.36823378 +0000 UTC m=+0.024432849 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 09:30:01 np0005464214 nova_compute[260022]: 2025-10-01 13:30:01.726 2 DEBUG nova.service [None req-26a6fe1d-c0dc-484f-b511-7a57f237dea9 - - - - - -] Join ServiceGroup membership for this service compute start /usr/lib/python3.9/site-packages/nova/service.py:199#033[00m
Oct  1 09:30:01 np0005464214 nova_compute[260022]: 2025-10-01 13:30:01.727 2 DEBUG nova.servicegroup.drivers.db [None req-26a6fe1d-c0dc-484f-b511-7a57f237dea9 - - - - - -] DB_Driver: join new ServiceGroup member compute-0.ctlplane.example.com to the compute group, service = <Service: host=compute-0.ctlplane.example.com, binary=nova-compute, manager_class_name=nova.compute.manager.ComputeManager> join /usr/lib/python3.9/site-packages/nova/servicegroup/drivers/db.py:44#033[00m
Oct  1 09:30:01 np0005464214 podman[260823]: 2025-10-01 13:30:01.841085516 +0000 UTC m=+0.497284615 container create 3687a512f422d44b7f69acca7b19791cf23e9a0c1009966e3a68b43d0bacefbd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_wright, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct  1 09:30:02 np0005464214 systemd[1]: Started libpod-conmon-3687a512f422d44b7f69acca7b19791cf23e9a0c1009966e3a68b43d0bacefbd.scope.
Oct  1 09:30:02 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:30:02 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc1fe6a417f146b23c64e0edd1209f867898e1d578e336a3922bb90d6cbcd703/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 09:30:02 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc1fe6a417f146b23c64e0edd1209f867898e1d578e336a3922bb90d6cbcd703/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 09:30:02 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc1fe6a417f146b23c64e0edd1209f867898e1d578e336a3922bb90d6cbcd703/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 09:30:02 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc1fe6a417f146b23c64e0edd1209f867898e1d578e336a3922bb90d6cbcd703/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 09:30:02 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc1fe6a417f146b23c64e0edd1209f867898e1d578e336a3922bb90d6cbcd703/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  1 09:30:02 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v762: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:30:02 np0005464214 podman[260823]: 2025-10-01 13:30:02.373976631 +0000 UTC m=+1.030175760 container init 3687a512f422d44b7f69acca7b19791cf23e9a0c1009966e3a68b43d0bacefbd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_wright, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 09:30:02 np0005464214 podman[260823]: 2025-10-01 13:30:02.387535973 +0000 UTC m=+1.043735022 container start 3687a512f422d44b7f69acca7b19791cf23e9a0c1009966e3a68b43d0bacefbd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_wright, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Oct  1 09:30:02 np0005464214 podman[260823]: 2025-10-01 13:30:02.585302102 +0000 UTC m=+1.241501251 container attach 3687a512f422d44b7f69acca7b19791cf23e9a0c1009966e3a68b43d0bacefbd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_wright, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 09:30:03 np0005464214 objective_wright[260839]: --> passed data devices: 0 physical, 3 LVM
Oct  1 09:30:03 np0005464214 objective_wright[260839]: --> relative data size: 1.0
Oct  1 09:30:03 np0005464214 objective_wright[260839]: --> All data devices are unavailable
Oct  1 09:30:03 np0005464214 systemd[1]: libpod-3687a512f422d44b7f69acca7b19791cf23e9a0c1009966e3a68b43d0bacefbd.scope: Deactivated successfully.
Oct  1 09:30:03 np0005464214 systemd[1]: libpod-3687a512f422d44b7f69acca7b19791cf23e9a0c1009966e3a68b43d0bacefbd.scope: Consumed 1.233s CPU time.
Oct  1 09:30:03 np0005464214 podman[260823]: 2025-10-01 13:30:03.71608561 +0000 UTC m=+2.372284689 container died 3687a512f422d44b7f69acca7b19791cf23e9a0c1009966e3a68b43d0bacefbd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_wright, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Oct  1 09:30:04 np0005464214 systemd[1]: var-lib-containers-storage-overlay-dc1fe6a417f146b23c64e0edd1209f867898e1d578e336a3922bb90d6cbcd703-merged.mount: Deactivated successfully.
Oct  1 09:30:04 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v763: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:30:04 np0005464214 podman[260823]: 2025-10-01 13:30:04.395443522 +0000 UTC m=+3.051642611 container remove 3687a512f422d44b7f69acca7b19791cf23e9a0c1009966e3a68b43d0bacefbd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_wright, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Oct  1 09:30:04 np0005464214 systemd[1]: libpod-conmon-3687a512f422d44b7f69acca7b19791cf23e9a0c1009966e3a68b43d0bacefbd.scope: Deactivated successfully.
Oct  1 09:30:04 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:30:05 np0005464214 podman[261024]: 2025-10-01 13:30:05.214322532 +0000 UTC m=+0.025293815 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 09:30:05 np0005464214 podman[261024]: 2025-10-01 13:30:05.473228615 +0000 UTC m=+0.284199928 container create 97350cde3fae66c0af95a67f8d55314d6b5cd2d903c2c8ba21c778d6718ba411 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_jepsen, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 09:30:05 np0005464214 systemd[1]: Started libpod-conmon-97350cde3fae66c0af95a67f8d55314d6b5cd2d903c2c8ba21c778d6718ba411.scope.
Oct  1 09:30:05 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:30:05 np0005464214 podman[261024]: 2025-10-01 13:30:05.803343413 +0000 UTC m=+0.614314756 container init 97350cde3fae66c0af95a67f8d55314d6b5cd2d903c2c8ba21c778d6718ba411 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_jepsen, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2)
Oct  1 09:30:05 np0005464214 ceph-osd[90500]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  1 09:30:05 np0005464214 ceph-osd[90500]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1200.1 total, 600.0 interval#012Cumulative writes: 5635 writes, 24K keys, 5635 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s#012Cumulative WAL: 5635 writes, 875 syncs, 6.44 writes per sync, written: 0.02 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 180 writes, 270 keys, 180 commit groups, 1.0 writes per commit group, ingest: 0.09 MB, 0.00 MB/s#012Interval WAL: 180 writes, 90 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55b1adb871f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.6e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55b1adb871f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.6e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_sl
Oct  1 09:30:05 np0005464214 podman[261024]: 2025-10-01 13:30:05.816394108 +0000 UTC m=+0.627365371 container start 97350cde3fae66c0af95a67f8d55314d6b5cd2d903c2c8ba21c778d6718ba411 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_jepsen, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 09:30:05 np0005464214 pensive_jepsen[261040]: 167 167
Oct  1 09:30:05 np0005464214 systemd[1]: libpod-97350cde3fae66c0af95a67f8d55314d6b5cd2d903c2c8ba21c778d6718ba411.scope: Deactivated successfully.
Oct  1 09:30:06 np0005464214 podman[261024]: 2025-10-01 13:30:06.078691089 +0000 UTC m=+0.889662452 container attach 97350cde3fae66c0af95a67f8d55314d6b5cd2d903c2c8ba21c778d6718ba411 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_jepsen, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 09:30:06 np0005464214 podman[261024]: 2025-10-01 13:30:06.079765123 +0000 UTC m=+0.890736426 container died 97350cde3fae66c0af95a67f8d55314d6b5cd2d903c2c8ba21c778d6718ba411 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_jepsen, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct  1 09:30:06 np0005464214 systemd[1]: var-lib-containers-storage-overlay-3bb6fc4c28f3292c5c5e5de99fa96329822a5e3f3162a76c29ae4989b83c144c-merged.mount: Deactivated successfully.
Oct  1 09:30:06 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v764: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:30:06 np0005464214 podman[261024]: 2025-10-01 13:30:06.630425783 +0000 UTC m=+1.441397076 container remove 97350cde3fae66c0af95a67f8d55314d6b5cd2d903c2c8ba21c778d6718ba411 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_jepsen, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct  1 09:30:06 np0005464214 systemd[1]: libpod-conmon-97350cde3fae66c0af95a67f8d55314d6b5cd2d903c2c8ba21c778d6718ba411.scope: Deactivated successfully.
Oct  1 09:30:06 np0005464214 podman[261067]: 2025-10-01 13:30:06.828316066 +0000 UTC m=+0.035795189 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 09:30:07 np0005464214 podman[261067]: 2025-10-01 13:30:07.154978104 +0000 UTC m=+0.362457267 container create 8ba6a4f1b83e16be0049394d40000471e271498a16a7486c07c0d5fc8aa0c006 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_yalow, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 09:30:07 np0005464214 systemd[1]: Started libpod-conmon-8ba6a4f1b83e16be0049394d40000471e271498a16a7486c07c0d5fc8aa0c006.scope.
Oct  1 09:30:07 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:30:07 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f2af83e7c460ebf2df9dab9b259ed08c578d8437f4eaaeeac53ac6ae8230da6c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 09:30:07 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f2af83e7c460ebf2df9dab9b259ed08c578d8437f4eaaeeac53ac6ae8230da6c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 09:30:07 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f2af83e7c460ebf2df9dab9b259ed08c578d8437f4eaaeeac53ac6ae8230da6c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 09:30:07 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f2af83e7c460ebf2df9dab9b259ed08c578d8437f4eaaeeac53ac6ae8230da6c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 09:30:07 np0005464214 podman[261067]: 2025-10-01 13:30:07.462056809 +0000 UTC m=+0.669536012 container init 8ba6a4f1b83e16be0049394d40000471e271498a16a7486c07c0d5fc8aa0c006 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_yalow, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Oct  1 09:30:07 np0005464214 podman[261067]: 2025-10-01 13:30:07.469615529 +0000 UTC m=+0.677094642 container start 8ba6a4f1b83e16be0049394d40000471e271498a16a7486c07c0d5fc8aa0c006 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_yalow, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True)
Oct  1 09:30:07 np0005464214 podman[261067]: 2025-10-01 13:30:07.542667292 +0000 UTC m=+0.750146445 container attach 8ba6a4f1b83e16be0049394d40000471e271498a16a7486c07c0d5fc8aa0c006 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_yalow, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 09:30:07 np0005464214 ceph-mgr[75103]: [devicehealth INFO root] Check health
Oct  1 09:30:08 np0005464214 gallant_yalow[261083]: {
Oct  1 09:30:08 np0005464214 gallant_yalow[261083]:    "0": [
Oct  1 09:30:08 np0005464214 gallant_yalow[261083]:        {
Oct  1 09:30:08 np0005464214 gallant_yalow[261083]:            "devices": [
Oct  1 09:30:08 np0005464214 gallant_yalow[261083]:                "/dev/loop3"
Oct  1 09:30:08 np0005464214 gallant_yalow[261083]:            ],
Oct  1 09:30:08 np0005464214 gallant_yalow[261083]:            "lv_name": "ceph_lv0",
Oct  1 09:30:08 np0005464214 gallant_yalow[261083]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  1 09:30:08 np0005464214 gallant_yalow[261083]:            "lv_size": "21470642176",
Oct  1 09:30:08 np0005464214 gallant_yalow[261083]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 09:30:08 np0005464214 gallant_yalow[261083]:            "lv_uuid": "wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS",
Oct  1 09:30:08 np0005464214 gallant_yalow[261083]:            "name": "ceph_lv0",
Oct  1 09:30:08 np0005464214 gallant_yalow[261083]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  1 09:30:08 np0005464214 gallant_yalow[261083]:            "tags": {
Oct  1 09:30:08 np0005464214 gallant_yalow[261083]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  1 09:30:08 np0005464214 gallant_yalow[261083]:                "ceph.block_uuid": "wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS",
Oct  1 09:30:08 np0005464214 gallant_yalow[261083]:                "ceph.cephx_lockbox_secret": "",
Oct  1 09:30:08 np0005464214 gallant_yalow[261083]:                "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 09:30:08 np0005464214 gallant_yalow[261083]:                "ceph.cluster_name": "ceph",
Oct  1 09:30:08 np0005464214 gallant_yalow[261083]:                "ceph.crush_device_class": "",
Oct  1 09:30:08 np0005464214 gallant_yalow[261083]:                "ceph.encrypted": "0",
Oct  1 09:30:08 np0005464214 gallant_yalow[261083]:                "ceph.osd_fsid": "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982",
Oct  1 09:30:08 np0005464214 gallant_yalow[261083]:                "ceph.osd_id": "0",
Oct  1 09:30:08 np0005464214 gallant_yalow[261083]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 09:30:08 np0005464214 gallant_yalow[261083]:                "ceph.type": "block",
Oct  1 09:30:08 np0005464214 gallant_yalow[261083]:                "ceph.vdo": "0"
Oct  1 09:30:08 np0005464214 gallant_yalow[261083]:            },
Oct  1 09:30:08 np0005464214 gallant_yalow[261083]:            "type": "block",
Oct  1 09:30:08 np0005464214 gallant_yalow[261083]:            "vg_name": "ceph_vg0"
Oct  1 09:30:08 np0005464214 gallant_yalow[261083]:        }
Oct  1 09:30:08 np0005464214 gallant_yalow[261083]:    ],
Oct  1 09:30:08 np0005464214 gallant_yalow[261083]:    "1": [
Oct  1 09:30:08 np0005464214 gallant_yalow[261083]:        {
Oct  1 09:30:08 np0005464214 gallant_yalow[261083]:            "devices": [
Oct  1 09:30:08 np0005464214 gallant_yalow[261083]:                "/dev/loop4"
Oct  1 09:30:08 np0005464214 gallant_yalow[261083]:            ],
Oct  1 09:30:08 np0005464214 gallant_yalow[261083]:            "lv_name": "ceph_lv1",
Oct  1 09:30:08 np0005464214 gallant_yalow[261083]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  1 09:30:08 np0005464214 gallant_yalow[261083]:            "lv_size": "21470642176",
Oct  1 09:30:08 np0005464214 gallant_yalow[261083]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f5852bc7-e830-489a-b8a9-42dfbbe71426,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 09:30:08 np0005464214 gallant_yalow[261083]:            "lv_uuid": "BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY",
Oct  1 09:30:08 np0005464214 gallant_yalow[261083]:            "name": "ceph_lv1",
Oct  1 09:30:08 np0005464214 gallant_yalow[261083]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  1 09:30:08 np0005464214 gallant_yalow[261083]:            "tags": {
Oct  1 09:30:08 np0005464214 gallant_yalow[261083]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  1 09:30:08 np0005464214 gallant_yalow[261083]:                "ceph.block_uuid": "BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY",
Oct  1 09:30:08 np0005464214 gallant_yalow[261083]:                "ceph.cephx_lockbox_secret": "",
Oct  1 09:30:08 np0005464214 gallant_yalow[261083]:                "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 09:30:08 np0005464214 gallant_yalow[261083]:                "ceph.cluster_name": "ceph",
Oct  1 09:30:08 np0005464214 gallant_yalow[261083]:                "ceph.crush_device_class": "",
Oct  1 09:30:08 np0005464214 gallant_yalow[261083]:                "ceph.encrypted": "0",
Oct  1 09:30:08 np0005464214 gallant_yalow[261083]:                "ceph.osd_fsid": "f5852bc7-e830-489a-b8a9-42dfbbe71426",
Oct  1 09:30:08 np0005464214 gallant_yalow[261083]:                "ceph.osd_id": "1",
Oct  1 09:30:08 np0005464214 gallant_yalow[261083]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 09:30:08 np0005464214 gallant_yalow[261083]:                "ceph.type": "block",
Oct  1 09:30:08 np0005464214 gallant_yalow[261083]:                "ceph.vdo": "0"
Oct  1 09:30:08 np0005464214 gallant_yalow[261083]:            },
Oct  1 09:30:08 np0005464214 gallant_yalow[261083]:            "type": "block",
Oct  1 09:30:08 np0005464214 gallant_yalow[261083]:            "vg_name": "ceph_vg1"
Oct  1 09:30:08 np0005464214 gallant_yalow[261083]:        }
Oct  1 09:30:08 np0005464214 gallant_yalow[261083]:    ],
Oct  1 09:30:08 np0005464214 gallant_yalow[261083]:    "2": [
Oct  1 09:30:08 np0005464214 gallant_yalow[261083]:        {
Oct  1 09:30:08 np0005464214 gallant_yalow[261083]:            "devices": [
Oct  1 09:30:08 np0005464214 gallant_yalow[261083]:                "/dev/loop5"
Oct  1 09:30:08 np0005464214 gallant_yalow[261083]:            ],
Oct  1 09:30:08 np0005464214 gallant_yalow[261083]:            "lv_name": "ceph_lv2",
Oct  1 09:30:08 np0005464214 gallant_yalow[261083]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  1 09:30:08 np0005464214 gallant_yalow[261083]:            "lv_size": "21470642176",
Oct  1 09:30:08 np0005464214 gallant_yalow[261083]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c4c937e2-a8a8-47c3-af37-fdedb6fff1f9,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 09:30:08 np0005464214 gallant_yalow[261083]:            "lv_uuid": "mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK",
Oct  1 09:30:08 np0005464214 gallant_yalow[261083]:            "name": "ceph_lv2",
Oct  1 09:30:08 np0005464214 gallant_yalow[261083]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  1 09:30:08 np0005464214 gallant_yalow[261083]:            "tags": {
Oct  1 09:30:08 np0005464214 gallant_yalow[261083]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  1 09:30:08 np0005464214 gallant_yalow[261083]:                "ceph.block_uuid": "mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK",
Oct  1 09:30:08 np0005464214 gallant_yalow[261083]:                "ceph.cephx_lockbox_secret": "",
Oct  1 09:30:08 np0005464214 gallant_yalow[261083]:                "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 09:30:08 np0005464214 gallant_yalow[261083]:                "ceph.cluster_name": "ceph",
Oct  1 09:30:08 np0005464214 gallant_yalow[261083]:                "ceph.crush_device_class": "",
Oct  1 09:30:08 np0005464214 gallant_yalow[261083]:                "ceph.encrypted": "0",
Oct  1 09:30:08 np0005464214 gallant_yalow[261083]:                "ceph.osd_fsid": "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9",
Oct  1 09:30:08 np0005464214 gallant_yalow[261083]:                "ceph.osd_id": "2",
Oct  1 09:30:08 np0005464214 gallant_yalow[261083]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 09:30:08 np0005464214 gallant_yalow[261083]:                "ceph.type": "block",
Oct  1 09:30:08 np0005464214 gallant_yalow[261083]:                "ceph.vdo": "0"
Oct  1 09:30:08 np0005464214 gallant_yalow[261083]:            },
Oct  1 09:30:08 np0005464214 gallant_yalow[261083]:            "type": "block",
Oct  1 09:30:08 np0005464214 gallant_yalow[261083]:            "vg_name": "ceph_vg2"
Oct  1 09:30:08 np0005464214 gallant_yalow[261083]:        }
Oct  1 09:30:08 np0005464214 gallant_yalow[261083]:    ]
Oct  1 09:30:08 np0005464214 gallant_yalow[261083]: }
Oct  1 09:30:08 np0005464214 systemd[1]: libpod-8ba6a4f1b83e16be0049394d40000471e271498a16a7486c07c0d5fc8aa0c006.scope: Deactivated successfully.
Oct  1 09:30:08 np0005464214 podman[261067]: 2025-10-01 13:30:08.27121245 +0000 UTC m=+1.478691623 container died 8ba6a4f1b83e16be0049394d40000471e271498a16a7486c07c0d5fc8aa0c006 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_yalow, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct  1 09:30:08 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v765: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:30:08 np0005464214 systemd[1]: var-lib-containers-storage-overlay-f2af83e7c460ebf2df9dab9b259ed08c578d8437f4eaaeeac53ac6ae8230da6c-merged.mount: Deactivated successfully.
Oct  1 09:30:09 np0005464214 podman[261067]: 2025-10-01 13:30:09.332545779 +0000 UTC m=+2.540024932 container remove 8ba6a4f1b83e16be0049394d40000471e271498a16a7486c07c0d5fc8aa0c006 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_yalow, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Oct  1 09:30:09 np0005464214 systemd[1]: libpod-conmon-8ba6a4f1b83e16be0049394d40000471e271498a16a7486c07c0d5fc8aa0c006.scope: Deactivated successfully.
Oct  1 09:30:09 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:30:10 np0005464214 podman[261245]: 2025-10-01 13:30:10.27003733 +0000 UTC m=+0.127936339 container create 7b2fef4a4889e92dcda97d2695dc9a4ba87d967540155d3f71c1997544166c4b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_shockley, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 09:30:10 np0005464214 podman[261245]: 2025-10-01 13:30:10.18008935 +0000 UTC m=+0.037988399 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 09:30:10 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v766: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:30:10 np0005464214 systemd[1]: Started libpod-conmon-7b2fef4a4889e92dcda97d2695dc9a4ba87d967540155d3f71c1997544166c4b.scope.
Oct  1 09:30:10 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:30:10 np0005464214 podman[261245]: 2025-10-01 13:30:10.585003196 +0000 UTC m=+0.442902185 container init 7b2fef4a4889e92dcda97d2695dc9a4ba87d967540155d3f71c1997544166c4b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_shockley, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct  1 09:30:10 np0005464214 podman[261245]: 2025-10-01 13:30:10.597794643 +0000 UTC m=+0.455693652 container start 7b2fef4a4889e92dcda97d2695dc9a4ba87d967540155d3f71c1997544166c4b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_shockley, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 09:30:10 np0005464214 elegant_shockley[261262]: 167 167
Oct  1 09:30:10 np0005464214 systemd[1]: libpod-7b2fef4a4889e92dcda97d2695dc9a4ba87d967540155d3f71c1997544166c4b.scope: Deactivated successfully.
Oct  1 09:30:10 np0005464214 podman[261245]: 2025-10-01 13:30:10.709912118 +0000 UTC m=+0.567811107 container attach 7b2fef4a4889e92dcda97d2695dc9a4ba87d967540155d3f71c1997544166c4b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_shockley, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3)
Oct  1 09:30:10 np0005464214 podman[261245]: 2025-10-01 13:30:10.710616421 +0000 UTC m=+0.568515390 container died 7b2fef4a4889e92dcda97d2695dc9a4ba87d967540155d3f71c1997544166c4b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_shockley, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 09:30:11 np0005464214 systemd[1]: var-lib-containers-storage-overlay-2c3a2e52cc804523dc2aea6bb4c0fedb5d023bd632b718ea15a7655407f2d0c0-merged.mount: Deactivated successfully.
Oct  1 09:30:12 np0005464214 podman[261245]: 2025-10-01 13:30:12.018313253 +0000 UTC m=+1.876212222 container remove 7b2fef4a4889e92dcda97d2695dc9a4ba87d967540155d3f71c1997544166c4b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_shockley, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct  1 09:30:12 np0005464214 systemd[1]: libpod-conmon-7b2fef4a4889e92dcda97d2695dc9a4ba87d967540155d3f71c1997544166c4b.scope: Deactivated successfully.
Oct  1 09:30:12 np0005464214 podman[261286]: 2025-10-01 13:30:12.252802702 +0000 UTC m=+0.108558484 container create f5bb46ed7d84c78f493c7b06c27c6c15adae50943a08df27372e6ff5fbea1074 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_thompson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 09:30:12 np0005464214 podman[261286]: 2025-10-01 13:30:12.166419945 +0000 UTC m=+0.022175697 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 09:30:12 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v767: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:30:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:30:12.297 161890 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 09:30:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:30:12.298 161890 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 09:30:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:30:12.299 161890 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 09:30:12 np0005464214 systemd[1]: Started libpod-conmon-f5bb46ed7d84c78f493c7b06c27c6c15adae50943a08df27372e6ff5fbea1074.scope.
Oct  1 09:30:12 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:30:12 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/597b19863ea2380b7a473cb6ab180f4900fe185928b844b819e4b5b9c39c255e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 09:30:12 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/597b19863ea2380b7a473cb6ab180f4900fe185928b844b819e4b5b9c39c255e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 09:30:12 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/597b19863ea2380b7a473cb6ab180f4900fe185928b844b819e4b5b9c39c255e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 09:30:12 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/597b19863ea2380b7a473cb6ab180f4900fe185928b844b819e4b5b9c39c255e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 09:30:12 np0005464214 podman[261286]: 2025-10-01 13:30:12.442878886 +0000 UTC m=+0.298634688 container init f5bb46ed7d84c78f493c7b06c27c6c15adae50943a08df27372e6ff5fbea1074 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_thompson, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 09:30:12 np0005464214 podman[261286]: 2025-10-01 13:30:12.455414364 +0000 UTC m=+0.311170136 container start f5bb46ed7d84c78f493c7b06c27c6c15adae50943a08df27372e6ff5fbea1074 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_thompson, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct  1 09:30:12 np0005464214 podman[261286]: 2025-10-01 13:30:12.686447991 +0000 UTC m=+0.542203823 container attach f5bb46ed7d84c78f493c7b06c27c6c15adae50943a08df27372e6ff5fbea1074 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_thompson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Oct  1 09:30:13 np0005464214 peaceful_thompson[261302]: {
Oct  1 09:30:13 np0005464214 peaceful_thompson[261302]:    "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982": {
Oct  1 09:30:13 np0005464214 peaceful_thompson[261302]:        "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 09:30:13 np0005464214 peaceful_thompson[261302]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  1 09:30:13 np0005464214 peaceful_thompson[261302]:        "osd_id": 0,
Oct  1 09:30:13 np0005464214 peaceful_thompson[261302]:        "osd_uuid": "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982",
Oct  1 09:30:13 np0005464214 peaceful_thompson[261302]:        "type": "bluestore"
Oct  1 09:30:13 np0005464214 peaceful_thompson[261302]:    },
Oct  1 09:30:13 np0005464214 peaceful_thompson[261302]:    "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9": {
Oct  1 09:30:13 np0005464214 peaceful_thompson[261302]:        "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 09:30:13 np0005464214 peaceful_thompson[261302]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  1 09:30:13 np0005464214 peaceful_thompson[261302]:        "osd_id": 2,
Oct  1 09:30:13 np0005464214 peaceful_thompson[261302]:        "osd_uuid": "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9",
Oct  1 09:30:13 np0005464214 peaceful_thompson[261302]:        "type": "bluestore"
Oct  1 09:30:13 np0005464214 peaceful_thompson[261302]:    },
Oct  1 09:30:13 np0005464214 peaceful_thompson[261302]:    "f5852bc7-e830-489a-b8a9-42dfbbe71426": {
Oct  1 09:30:13 np0005464214 peaceful_thompson[261302]:        "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 09:30:13 np0005464214 peaceful_thompson[261302]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  1 09:30:13 np0005464214 peaceful_thompson[261302]:        "osd_id": 1,
Oct  1 09:30:13 np0005464214 peaceful_thompson[261302]:        "osd_uuid": "f5852bc7-e830-489a-b8a9-42dfbbe71426",
Oct  1 09:30:13 np0005464214 peaceful_thompson[261302]:        "type": "bluestore"
Oct  1 09:30:13 np0005464214 peaceful_thompson[261302]:    }
Oct  1 09:30:13 np0005464214 peaceful_thompson[261302]: }
Oct  1 09:30:13 np0005464214 systemd[1]: libpod-f5bb46ed7d84c78f493c7b06c27c6c15adae50943a08df27372e6ff5fbea1074.scope: Deactivated successfully.
Oct  1 09:30:13 np0005464214 systemd[1]: libpod-f5bb46ed7d84c78f493c7b06c27c6c15adae50943a08df27372e6ff5fbea1074.scope: Consumed 1.164s CPU time.
Oct  1 09:30:13 np0005464214 podman[261286]: 2025-10-01 13:30:13.638521696 +0000 UTC m=+1.494277438 container died f5bb46ed7d84c78f493c7b06c27c6c15adae50943a08df27372e6ff5fbea1074 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_thompson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 09:30:14 np0005464214 systemd[1]: var-lib-containers-storage-overlay-597b19863ea2380b7a473cb6ab180f4900fe185928b844b819e4b5b9c39c255e-merged.mount: Deactivated successfully.
Oct  1 09:30:14 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v768: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:30:14 np0005464214 podman[261286]: 2025-10-01 13:30:14.445057663 +0000 UTC m=+2.300813435 container remove f5bb46ed7d84c78f493c7b06c27c6c15adae50943a08df27372e6ff5fbea1074 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_thompson, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 09:30:14 np0005464214 systemd[1]: libpod-conmon-f5bb46ed7d84c78f493c7b06c27c6c15adae50943a08df27372e6ff5fbea1074.scope: Deactivated successfully.
Oct  1 09:30:14 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  1 09:30:14 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:30:14 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  1 09:30:14 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:30:14 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev 665e89a6-ec32-47e1-86f9-163e304ad0bd does not exist
Oct  1 09:30:14 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev c2f42cfa-918d-46e1-a090-e7173f65b02c does not exist
Oct  1 09:30:14 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:30:15 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:30:15 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:30:16 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v769: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:30:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 09:30:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 09:30:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 09:30:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 09:30:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 09:30:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 09:30:18 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v770: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:30:19 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:30:20 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v771: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:30:21 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  1 09:30:21 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2679179799' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  1 09:30:21 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  1 09:30:21 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2679179799' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  1 09:30:21 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  1 09:30:21 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1290104316' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  1 09:30:21 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  1 09:30:21 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1290104316' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  1 09:30:22 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v772: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:30:24 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v773: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:30:24 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:30:26 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v774: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:30:28 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v775: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:30:29 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  1 09:30:29 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1677239573' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  1 09:30:29 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  1 09:30:29 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1677239573' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  1 09:30:29 np0005464214 podman[261402]: 2025-10-01 13:30:29.561258444 +0000 UTC m=+0.087771811 container health_status dfa509645741d50db256c392f2c7e50ef8a1653f296eb853aefbe3d2ba4746c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20250923, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, tcib_build_tag=36bccb96575468ec919301205d8daa2c, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct  1 09:30:29 np0005464214 podman[261400]: 2025-10-01 13:30:29.57277216 +0000 UTC m=+0.099871586 container health_status a1dc94033b3cdaf0c1939938a446bc6911aa0e2bf0fa0c22c3e618390e8d96e1 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=multipathd, org.label-schema.build-date=20250923, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=36bccb96575468ec919301205d8daa2c, maintainer=OpenStack Kubernetes Operator team)
Oct  1 09:30:29 np0005464214 podman[261401]: 2025-10-01 13:30:29.574823025 +0000 UTC m=+0.098442351 container health_status c13c85560319b246b9406aed1b6b3015b2c424d3ffff37d0301b6634f5976b0d (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_id=iscsid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250923)
Oct  1 09:30:29 np0005464214 podman[261399]: 2025-10-01 13:30:29.618809055 +0000 UTC m=+0.146697667 container health_status 583cde33fa111e3f81ff9a7adf51b7bee43cd123d54d60383b2d474b3b5066ad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250923, org.label-schema.license=GPLv2, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, managed_by=edpm_ansible)
Oct  1 09:30:29 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:30:30 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v776: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:30:32 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v777: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:30:34 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v778: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:30:34 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:30:36 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v779: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:30:37 np0005464214 ceph-mon[74802]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #36. Immutable memtables: 0.
Oct  1 09:30:37 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:30:37.643130) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  1 09:30:37 np0005464214 ceph-mon[74802]: rocksdb: [db/flush_job.cc:856] [default] [JOB 15] Flushing memtable with next log file: 36
Oct  1 09:30:37 np0005464214 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759325437643176, "job": 15, "event": "flush_started", "num_memtables": 1, "num_entries": 1436, "num_deletes": 251, "total_data_size": 2270403, "memory_usage": 2301576, "flush_reason": "Manual Compaction"}
Oct  1 09:30:37 np0005464214 ceph-mon[74802]: rocksdb: [db/flush_job.cc:885] [default] [JOB 15] Level-0 flush table #37: started
Oct  1 09:30:38 np0005464214 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759325438059105, "cf_name": "default", "job": 15, "event": "table_file_creation", "file_number": 37, "file_size": 2238122, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14991, "largest_seqno": 16426, "table_properties": {"data_size": 2231409, "index_size": 3848, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1797, "raw_key_size": 13793, "raw_average_key_size": 19, "raw_value_size": 2217956, "raw_average_value_size": 3163, "num_data_blocks": 176, "num_entries": 701, "num_filter_entries": 701, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759325278, "oldest_key_time": 1759325278, "file_creation_time": 1759325437, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e1fb0bd7-c6bf-40f2-8bfb-ffd039289f42", "db_session_id": "NJZTWL88H5HSB4Q4NEC9", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Oct  1 09:30:38 np0005464214 ceph-mon[74802]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 15] Flush lasted 416079 microseconds, and 9899 cpu microseconds.
Oct  1 09:30:38 np0005464214 ceph-mon[74802]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  1 09:30:38 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:30:38.059201) [db/flush_job.cc:967] [default] [JOB 15] Level-0 flush table #37: 2238122 bytes OK
Oct  1 09:30:38 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:30:38.059231) [db/memtable_list.cc:519] [default] Level-0 commit table #37 started
Oct  1 09:30:38 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:30:38.218864) [db/memtable_list.cc:722] [default] Level-0 commit table #37: memtable #1 done
Oct  1 09:30:38 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:30:38.218933) EVENT_LOG_v1 {"time_micros": 1759325438218916, "job": 15, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  1 09:30:38 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:30:38.218968) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  1 09:30:38 np0005464214 ceph-mon[74802]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 15] Try to delete WAL files size 2264060, prev total WAL file size 2264060, number of live WAL files 2.
Oct  1 09:30:38 np0005464214 ceph-mon[74802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000033.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  1 09:30:38 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:30:38.220639) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031303034' seq:72057594037927935, type:22 .. '7061786F730031323536' seq:0, type:0; will stop at (end)
Oct  1 09:30:38 np0005464214 ceph-mon[74802]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 16] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  1 09:30:38 np0005464214 ceph-mon[74802]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 15 Base level 0, inputs: [37(2185KB)], [35(7275KB)]
Oct  1 09:30:38 np0005464214 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759325438220759, "job": 16, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [37], "files_L6": [35], "score": -1, "input_data_size": 9688056, "oldest_snapshot_seqno": -1}
Oct  1 09:30:38 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v780: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:30:39 np0005464214 ceph-mon[74802]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 16] Generated table #38: 4035 keys, 7910330 bytes, temperature: kUnknown
Oct  1 09:30:39 np0005464214 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759325439110629, "cf_name": "default", "job": 16, "event": "table_file_creation", "file_number": 38, "file_size": 7910330, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7880747, "index_size": 18401, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10117, "raw_key_size": 98539, "raw_average_key_size": 24, "raw_value_size": 7805156, "raw_average_value_size": 1934, "num_data_blocks": 778, "num_entries": 4035, "num_filter_entries": 4035, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759324077, "oldest_key_time": 0, "file_creation_time": 1759325438, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e1fb0bd7-c6bf-40f2-8bfb-ffd039289f42", "db_session_id": "NJZTWL88H5HSB4Q4NEC9", "orig_file_number": 38, "seqno_to_time_mapping": "N/A"}}
Oct  1 09:30:39 np0005464214 ceph-mon[74802]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  1 09:30:39 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:30:39.111142) [db/compaction/compaction_job.cc:1663] [default] [JOB 16] Compacted 1@0 + 1@6 files to L6 => 7910330 bytes
Oct  1 09:30:39 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:30:39.335696) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 10.9 rd, 8.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.1, 7.1 +0.0 blob) out(7.5 +0.0 blob), read-write-amplify(7.9) write-amplify(3.5) OK, records in: 4549, records dropped: 514 output_compression: NoCompression
Oct  1 09:30:39 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:30:39.335790) EVENT_LOG_v1 {"time_micros": 1759325439335768, "job": 16, "event": "compaction_finished", "compaction_time_micros": 889981, "compaction_time_cpu_micros": 23331, "output_level": 6, "num_output_files": 1, "total_output_size": 7910330, "num_input_records": 4549, "num_output_records": 4035, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  1 09:30:39 np0005464214 ceph-mon[74802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000037.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  1 09:30:39 np0005464214 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759325439336713, "job": 16, "event": "table_file_deletion", "file_number": 37}
Oct  1 09:30:39 np0005464214 ceph-mon[74802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000035.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  1 09:30:39 np0005464214 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759325439339678, "job": 16, "event": "table_file_deletion", "file_number": 35}
Oct  1 09:30:39 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:30:38.220429) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 09:30:39 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:30:39.339822) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 09:30:39 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:30:39.339833) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 09:30:39 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:30:39.339837) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 09:30:39 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:30:39.339841) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 09:30:39 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:30:39.339846) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 09:30:39 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:30:40 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v781: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:30:42 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v782: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:30:44 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v783: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:30:44 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:30:46 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v784: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:30:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] Optimize plan auto_2025-10-01_13:30:47
Oct  1 09:30:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  1 09:30:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] do_upmap
Oct  1 09:30:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] pools ['volumes', 'images', 'cephfs.cephfs.meta', 'default.rgw.meta', '.rgw.root', 'cephfs.cephfs.data', '.mgr', 'default.rgw.log', 'default.rgw.control', 'backups', 'vms']
Oct  1 09:30:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] prepared 0/10 changes
Oct  1 09:30:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 09:30:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 09:30:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 09:30:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 09:30:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 09:30:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 09:30:47 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  1 09:30:47 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  1 09:30:47 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  1 09:30:47 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  1 09:30:47 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  1 09:30:47 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  1 09:30:47 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  1 09:30:47 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  1 09:30:47 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  1 09:30:47 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  1 09:30:48 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v785: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:30:49 np0005464214 nova_compute[260022]: 2025-10-01 13:30:49.729 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 09:30:49 np0005464214 nova_compute[260022]: 2025-10-01 13:30:49.756 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 09:30:49 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:30:50 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v786: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:30:52 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v787: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:30:53 np0005464214 nova_compute[260022]: 2025-10-01 13:30:53.347 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 09:30:53 np0005464214 nova_compute[260022]: 2025-10-01 13:30:53.347 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 09:30:53 np0005464214 nova_compute[260022]: 2025-10-01 13:30:53.348 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  1 09:30:53 np0005464214 nova_compute[260022]: 2025-10-01 13:30:53.348 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  1 09:30:53 np0005464214 nova_compute[260022]: 2025-10-01 13:30:53.405 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  1 09:30:53 np0005464214 nova_compute[260022]: 2025-10-01 13:30:53.405 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 09:30:53 np0005464214 nova_compute[260022]: 2025-10-01 13:30:53.406 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 09:30:53 np0005464214 nova_compute[260022]: 2025-10-01 13:30:53.407 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 09:30:53 np0005464214 nova_compute[260022]: 2025-10-01 13:30:53.407 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 09:30:53 np0005464214 nova_compute[260022]: 2025-10-01 13:30:53.407 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 09:30:53 np0005464214 nova_compute[260022]: 2025-10-01 13:30:53.408 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 09:30:53 np0005464214 nova_compute[260022]: 2025-10-01 13:30:53.408 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  1 09:30:53 np0005464214 nova_compute[260022]: 2025-10-01 13:30:53.409 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 09:30:53 np0005464214 nova_compute[260022]: 2025-10-01 13:30:53.544 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 09:30:53 np0005464214 nova_compute[260022]: 2025-10-01 13:30:53.546 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 09:30:53 np0005464214 nova_compute[260022]: 2025-10-01 13:30:53.547 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 09:30:53 np0005464214 nova_compute[260022]: 2025-10-01 13:30:53.547 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  1 09:30:53 np0005464214 nova_compute[260022]: 2025-10-01 13:30:53.548 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 09:30:54 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  1 09:30:54 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/384273131' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  1 09:30:54 np0005464214 nova_compute[260022]: 2025-10-01 13:30:54.152 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.604s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 09:30:54 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v788: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:30:54 np0005464214 nova_compute[260022]: 2025-10-01 13:30:54.371 2 WARNING nova.virt.libvirt.driver [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  1 09:30:54 np0005464214 nova_compute[260022]: 2025-10-01 13:30:54.372 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5194MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  1 09:30:54 np0005464214 nova_compute[260022]: 2025-10-01 13:30:54.373 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 09:30:54 np0005464214 nova_compute[260022]: 2025-10-01 13:30:54.373 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 09:30:54 np0005464214 nova_compute[260022]: 2025-10-01 13:30:54.787 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  1 09:30:54 np0005464214 nova_compute[260022]: 2025-10-01 13:30:54.788 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  1 09:30:54 np0005464214 nova_compute[260022]: 2025-10-01 13:30:54.805 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 09:30:54 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:30:55 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  1 09:30:55 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/380029336' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  1 09:30:55 np0005464214 nova_compute[260022]: 2025-10-01 13:30:55.275 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 09:30:55 np0005464214 nova_compute[260022]: 2025-10-01 13:30:55.281 2 DEBUG nova.compute.provider_tree [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Inventory has not changed in ProviderTree for provider: c1b9017d-7e6f-44ea-9ee2-bc19313d736f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  1 09:30:55 np0005464214 nova_compute[260022]: 2025-10-01 13:30:55.318 2 DEBUG nova.scheduler.client.report [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Inventory has not changed for provider c1b9017d-7e6f-44ea-9ee2-bc19313d736f based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  1 09:30:55 np0005464214 nova_compute[260022]: 2025-10-01 13:30:55.319 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  1 09:30:55 np0005464214 nova_compute[260022]: 2025-10-01 13:30:55.320 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.947s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 09:30:56 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v789: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:30:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] _maybe_adjust
Oct  1 09:30:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:30:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  1 09:30:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:30:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 09:30:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:30:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 09:30:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:30:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 09:30:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:30:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 09:30:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:30:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct  1 09:30:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:30:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 09:30:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:30:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  1 09:30:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:30:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  1 09:30:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:30:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 09:30:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:30:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  1 09:30:58 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v790: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:30:59 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:31:00 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v791: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:31:00 np0005464214 podman[261545]: 2025-10-01 13:31:00.555389088 +0000 UTC m=+0.088819735 container health_status dfa509645741d50db256c392f2c7e50ef8a1653f296eb853aefbe3d2ba4746c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250923, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=36bccb96575468ec919301205d8daa2c, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Oct  1 09:31:00 np0005464214 podman[261538]: 2025-10-01 13:31:00.559442338 +0000 UTC m=+0.098811954 container health_status a1dc94033b3cdaf0c1939938a446bc6911aa0e2bf0fa0c22c3e618390e8d96e1 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.build-date=20250923, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_id=multipathd)
Oct  1 09:31:00 np0005464214 podman[261539]: 2025-10-01 13:31:00.5680133 +0000 UTC m=+0.105987332 container health_status c13c85560319b246b9406aed1b6b3015b2c424d3ffff37d0301b6634f5976b0d (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, container_name=iscsid, org.label-schema.license=GPLv2, org.label-schema.build-date=20250923)
Oct  1 09:31:00 np0005464214 podman[261537]: 2025-10-01 13:31:00.59916423 +0000 UTC m=+0.150349602 container health_status 583cde33fa111e3f81ff9a7adf51b7bee43cd123d54d60383b2d474b3b5066ad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250923, config_id=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct  1 09:31:02 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v792: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:31:04 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v793: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:31:04 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:31:06 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v794: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:31:08 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v795: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 3.1 KiB/s rd, 0 B/s wr, 5 op/s
Oct  1 09:31:10 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:31:10 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v796: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 3.1 KiB/s rd, 0 B/s wr, 5 op/s
Oct  1 09:31:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:31:12.298 161890 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 09:31:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:31:12.299 161890 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 09:31:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:31:12.299 161890 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 09:31:12 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v797: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 5.3 KiB/s rd, 0 B/s wr, 8 op/s
Oct  1 09:31:14 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v798: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 8.4 KiB/s rd, 0 B/s wr, 13 op/s
Oct  1 09:31:15 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:31:15 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  1 09:31:16 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v799: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 8.4 KiB/s rd, 0 B/s wr, 13 op/s
Oct  1 09:31:16 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:31:16 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  1 09:31:16 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:31:17 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Oct  1 09:31:17 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct  1 09:31:17 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  1 09:31:17 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  1 09:31:17 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  1 09:31:17 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  1 09:31:17 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  1 09:31:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 09:31:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 09:31:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 09:31:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 09:31:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 09:31:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 09:31:17 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:31:17 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:31:18 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:31:18 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev 84578743-07f2-47fb-a264-393922825af1 does not exist
Oct  1 09:31:18 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev f8ac1b6e-7373-40ef-8b31-afeda72be9aa does not exist
Oct  1 09:31:18 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev 21840c49-bc5e-4f85-b761-42d3e5341305 does not exist
Oct  1 09:31:18 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  1 09:31:18 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  1 09:31:18 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  1 09:31:18 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  1 09:31:18 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  1 09:31:18 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  1 09:31:18 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v800: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 0 B/s wr, 17 op/s
Oct  1 09:31:18 np0005464214 podman[262006]: 2025-10-01 13:31:18.899026868 +0000 UTC m=+0.026956860 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 09:31:20 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:31:20 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v801: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 7.3 KiB/s rd, 0 B/s wr, 12 op/s
Oct  1 09:31:20 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct  1 09:31:20 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  1 09:31:20 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:31:20 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  1 09:31:20 np0005464214 podman[262006]: 2025-10-01 13:31:20.543241303 +0000 UTC m=+1.671171285 container create 42138e447285c86bbce9520fa5509f385488bb4ab879a2c60379881f26addca7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_kare, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct  1 09:31:21 np0005464214 systemd[1]: Started libpod-conmon-42138e447285c86bbce9520fa5509f385488bb4ab879a2c60379881f26addca7.scope.
Oct  1 09:31:21 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:31:22 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v802: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 7.8 KiB/s rd, 0 B/s wr, 13 op/s
Oct  1 09:31:22 np0005464214 podman[262006]: 2025-10-01 13:31:22.48761986 +0000 UTC m=+3.615549912 container init 42138e447285c86bbce9520fa5509f385488bb4ab879a2c60379881f26addca7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_kare, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef)
Oct  1 09:31:22 np0005464214 podman[262006]: 2025-10-01 13:31:22.500262742 +0000 UTC m=+3.628192724 container start 42138e447285c86bbce9520fa5509f385488bb4ab879a2c60379881f26addca7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_kare, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 09:31:22 np0005464214 affectionate_kare[262022]: 167 167
Oct  1 09:31:22 np0005464214 systemd[1]: libpod-42138e447285c86bbce9520fa5509f385488bb4ab879a2c60379881f26addca7.scope: Deactivated successfully.
Oct  1 09:31:23 np0005464214 podman[262006]: 2025-10-01 13:31:23.821092926 +0000 UTC m=+4.949022918 container attach 42138e447285c86bbce9520fa5509f385488bb4ab879a2c60379881f26addca7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_kare, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 09:31:23 np0005464214 podman[262006]: 2025-10-01 13:31:23.822543912 +0000 UTC m=+4.950473944 container died 42138e447285c86bbce9520fa5509f385488bb4ab879a2c60379881f26addca7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_kare, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct  1 09:31:24 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v803: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 6.2 KiB/s rd, 0 B/s wr, 10 op/s
Oct  1 09:31:25 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:31:25 np0005464214 systemd[1]: var-lib-containers-storage-overlay-01b6f26c5f2bafb2299ce890fcb245dd6903b67da8c7d767582823a35c987613-merged.mount: Deactivated successfully.
Oct  1 09:31:26 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v804: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 3.1 KiB/s rd, 0 B/s wr, 5 op/s
Oct  1 09:31:26 np0005464214 podman[262006]: 2025-10-01 13:31:26.881019996 +0000 UTC m=+8.008949978 container remove 42138e447285c86bbce9520fa5509f385488bb4ab879a2c60379881f26addca7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_kare, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct  1 09:31:26 np0005464214 systemd[1]: libpod-conmon-42138e447285c86bbce9520fa5509f385488bb4ab879a2c60379881f26addca7.scope: Deactivated successfully.
Oct  1 09:31:27 np0005464214 podman[262045]: 2025-10-01 13:31:27.079567725 +0000 UTC m=+0.024495580 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 09:31:27 np0005464214 podman[262045]: 2025-10-01 13:31:27.253392447 +0000 UTC m=+0.198320302 container create f00b063c47bc4b978467de10be18cda37f8f7d0f354b4c11a83b4cba1cc2a03f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_chaum, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 09:31:27 np0005464214 systemd[1]: Started libpod-conmon-f00b063c47bc4b978467de10be18cda37f8f7d0f354b4c11a83b4cba1cc2a03f.scope.
Oct  1 09:31:27 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:31:27 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e6b69a0960c808f7107ddea246849b74456c8cb4ec918f2fda19c58130c11e7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 09:31:27 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e6b69a0960c808f7107ddea246849b74456c8cb4ec918f2fda19c58130c11e7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 09:31:27 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e6b69a0960c808f7107ddea246849b74456c8cb4ec918f2fda19c58130c11e7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 09:31:27 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e6b69a0960c808f7107ddea246849b74456c8cb4ec918f2fda19c58130c11e7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 09:31:27 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e6b69a0960c808f7107ddea246849b74456c8cb4ec918f2fda19c58130c11e7/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  1 09:31:27 np0005464214 podman[262045]: 2025-10-01 13:31:27.96556984 +0000 UTC m=+0.910497795 container init f00b063c47bc4b978467de10be18cda37f8f7d0f354b4c11a83b4cba1cc2a03f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_chaum, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Oct  1 09:31:27 np0005464214 podman[262045]: 2025-10-01 13:31:27.977648315 +0000 UTC m=+0.922576210 container start f00b063c47bc4b978467de10be18cda37f8f7d0f354b4c11a83b4cba1cc2a03f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_chaum, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 09:31:28 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v805: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 4.1 KiB/s rd, 0 B/s wr, 6 op/s
Oct  1 09:31:28 np0005464214 podman[262045]: 2025-10-01 13:31:28.75551852 +0000 UTC m=+1.700446465 container attach f00b063c47bc4b978467de10be18cda37f8f7d0f354b4c11a83b4cba1cc2a03f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_chaum, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Oct  1 09:31:30 np0005464214 cool_chaum[262062]: --> passed data devices: 0 physical, 3 LVM
Oct  1 09:31:30 np0005464214 cool_chaum[262062]: --> relative data size: 1.0
Oct  1 09:31:30 np0005464214 cool_chaum[262062]: --> All data devices are unavailable
Oct  1 09:31:30 np0005464214 systemd[1]: libpod-f00b063c47bc4b978467de10be18cda37f8f7d0f354b4c11a83b4cba1cc2a03f.scope: Deactivated successfully.
Oct  1 09:31:30 np0005464214 systemd[1]: libpod-f00b063c47bc4b978467de10be18cda37f8f7d0f354b4c11a83b4cba1cc2a03f.scope: Consumed 1.356s CPU time.
Oct  1 09:31:30 np0005464214 podman[262045]: 2025-10-01 13:31:30.100174452 +0000 UTC m=+3.045102337 container died f00b063c47bc4b978467de10be18cda37f8f7d0f354b4c11a83b4cba1cc2a03f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_chaum, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Oct  1 09:31:30 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v806: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 0 B/s wr, 3 op/s
Oct  1 09:31:30 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:31:31 np0005464214 systemd[1]: var-lib-containers-storage-overlay-8e6b69a0960c808f7107ddea246849b74456c8cb4ec918f2fda19c58130c11e7-merged.mount: Deactivated successfully.
Oct  1 09:31:32 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v807: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 0 B/s wr, 3 op/s
Oct  1 09:31:32 np0005464214 podman[262045]: 2025-10-01 13:31:32.506083097 +0000 UTC m=+5.451010992 container remove f00b063c47bc4b978467de10be18cda37f8f7d0f354b4c11a83b4cba1cc2a03f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_chaum, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 09:31:32 np0005464214 systemd[1]: libpod-conmon-f00b063c47bc4b978467de10be18cda37f8f7d0f354b4c11a83b4cba1cc2a03f.scope: Deactivated successfully.
Oct  1 09:31:32 np0005464214 podman[262105]: 2025-10-01 13:31:32.606989368 +0000 UTC m=+1.651324842 container health_status a1dc94033b3cdaf0c1939938a446bc6911aa0e2bf0fa0c22c3e618390e8d96e1 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20250923, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, container_name=multipathd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Oct  1 09:31:32 np0005464214 podman[262106]: 2025-10-01 13:31:32.626879771 +0000 UTC m=+1.669358525 container health_status c13c85560319b246b9406aed1b6b3015b2c424d3ffff37d0301b6634f5976b0d (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, container_name=iscsid, org.label-schema.build-date=20250923, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=36bccb96575468ec919301205d8daa2c, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct  1 09:31:32 np0005464214 podman[262107]: 2025-10-01 13:31:32.630505387 +0000 UTC m=+1.666060891 container health_status dfa509645741d50db256c392f2c7e50ef8a1653f296eb853aefbe3d2ba4746c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20250923, org.label-schema.schema-version=1.0, tcib_build_tag=36bccb96575468ec919301205d8daa2c, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_metadata_agent)
Oct  1 09:31:32 np0005464214 podman[262104]: 2025-10-01 13:31:32.647922802 +0000 UTC m=+1.691715758 container health_status 583cde33fa111e3f81ff9a7adf51b7bee43cd123d54d60383b2d474b3b5066ad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20250923, container_name=ovn_controller, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c)
Oct  1 09:31:33 np0005464214 podman[262322]: 2025-10-01 13:31:33.421266733 +0000 UTC m=+0.031204305 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 09:31:34 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v808: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s rd, 0 B/s wr, 4 op/s
Oct  1 09:31:34 np0005464214 podman[262322]: 2025-10-01 13:31:34.637695193 +0000 UTC m=+1.247632785 container create b5a5b68439ff84e03ceb38322e83571a4e3570b2de3880ffdb1470d35f46f1d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_morse, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Oct  1 09:31:35 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:31:35 np0005464214 systemd[1]: Started libpod-conmon-b5a5b68439ff84e03ceb38322e83571a4e3570b2de3880ffdb1470d35f46f1d5.scope.
Oct  1 09:31:35 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:31:36 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v809: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 0 B/s wr, 3 op/s
Oct  1 09:31:36 np0005464214 podman[262322]: 2025-10-01 13:31:36.444320867 +0000 UTC m=+3.054258509 container init b5a5b68439ff84e03ceb38322e83571a4e3570b2de3880ffdb1470d35f46f1d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_morse, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct  1 09:31:36 np0005464214 podman[262322]: 2025-10-01 13:31:36.456296217 +0000 UTC m=+3.066233819 container start b5a5b68439ff84e03ceb38322e83571a4e3570b2de3880ffdb1470d35f46f1d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_morse, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 09:31:36 np0005464214 gifted_morse[262338]: 167 167
Oct  1 09:31:36 np0005464214 systemd[1]: libpod-b5a5b68439ff84e03ceb38322e83571a4e3570b2de3880ffdb1470d35f46f1d5.scope: Deactivated successfully.
Oct  1 09:31:36 np0005464214 podman[262322]: 2025-10-01 13:31:36.868889197 +0000 UTC m=+3.478826789 container attach b5a5b68439ff84e03ceb38322e83571a4e3570b2de3880ffdb1470d35f46f1d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_morse, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 09:31:36 np0005464214 podman[262322]: 2025-10-01 13:31:36.870710335 +0000 UTC m=+3.480647937 container died b5a5b68439ff84e03ceb38322e83571a4e3570b2de3880ffdb1470d35f46f1d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_morse, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Oct  1 09:31:38 np0005464214 systemd[1]: var-lib-containers-storage-overlay-d1c43556610d320f121b67dd01e14e6871e43dfdc6ed0a7876365b30271aee47-merged.mount: Deactivated successfully.
Oct  1 09:31:38 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v810: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 3.2 KiB/s rd, 0 B/s wr, 5 op/s
Oct  1 09:31:39 np0005464214 podman[262322]: 2025-10-01 13:31:39.019777498 +0000 UTC m=+5.629715100 container remove b5a5b68439ff84e03ceb38322e83571a4e3570b2de3880ffdb1470d35f46f1d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_morse, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct  1 09:31:39 np0005464214 systemd[1]: libpod-conmon-b5a5b68439ff84e03ceb38322e83571a4e3570b2de3880ffdb1470d35f46f1d5.scope: Deactivated successfully.
Oct  1 09:31:39 np0005464214 podman[262362]: 2025-10-01 13:31:39.290233365 +0000 UTC m=+0.053971969 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 09:31:39 np0005464214 podman[262362]: 2025-10-01 13:31:39.706424359 +0000 UTC m=+0.470162903 container create 69e7b0b39f3f5c27fdf9ca48701f9fa21df900a36942fd24b307fe51b344bfcc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_black, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 09:31:40 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v811: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 0 B/s wr, 3 op/s
Oct  1 09:31:40 np0005464214 systemd[1]: Started libpod-conmon-69e7b0b39f3f5c27fdf9ca48701f9fa21df900a36942fd24b307fe51b344bfcc.scope.
Oct  1 09:31:40 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:31:40 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b3c54575d5bf37ced6b94738d59a69bd7bc2ca876c93f9bbcbf964a4b292126e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 09:31:40 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b3c54575d5bf37ced6b94738d59a69bd7bc2ca876c93f9bbcbf964a4b292126e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 09:31:40 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b3c54575d5bf37ced6b94738d59a69bd7bc2ca876c93f9bbcbf964a4b292126e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 09:31:40 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b3c54575d5bf37ced6b94738d59a69bd7bc2ca876c93f9bbcbf964a4b292126e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 09:31:40 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:31:41 np0005464214 podman[262362]: 2025-10-01 13:31:41.026019784 +0000 UTC m=+1.789758318 container init 69e7b0b39f3f5c27fdf9ca48701f9fa21df900a36942fd24b307fe51b344bfcc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_black, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 09:31:41 np0005464214 podman[262362]: 2025-10-01 13:31:41.037257571 +0000 UTC m=+1.800996125 container start 69e7b0b39f3f5c27fdf9ca48701f9fa21df900a36942fd24b307fe51b344bfcc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_black, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 09:31:41 np0005464214 podman[262362]: 2025-10-01 13:31:41.484310218 +0000 UTC m=+2.248048772 container attach 69e7b0b39f3f5c27fdf9ca48701f9fa21df900a36942fd24b307fe51b344bfcc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_black, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 09:31:42 np0005464214 heuristic_black[262378]: {
Oct  1 09:31:42 np0005464214 heuristic_black[262378]:    "0": [
Oct  1 09:31:42 np0005464214 heuristic_black[262378]:        {
Oct  1 09:31:42 np0005464214 heuristic_black[262378]:            "devices": [
Oct  1 09:31:42 np0005464214 heuristic_black[262378]:                "/dev/loop3"
Oct  1 09:31:42 np0005464214 heuristic_black[262378]:            ],
Oct  1 09:31:42 np0005464214 heuristic_black[262378]:            "lv_name": "ceph_lv0",
Oct  1 09:31:42 np0005464214 heuristic_black[262378]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  1 09:31:42 np0005464214 heuristic_black[262378]:            "lv_size": "21470642176",
Oct  1 09:31:42 np0005464214 heuristic_black[262378]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 09:31:42 np0005464214 heuristic_black[262378]:            "lv_uuid": "wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS",
Oct  1 09:31:42 np0005464214 heuristic_black[262378]:            "name": "ceph_lv0",
Oct  1 09:31:42 np0005464214 heuristic_black[262378]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  1 09:31:42 np0005464214 heuristic_black[262378]:            "tags": {
Oct  1 09:31:42 np0005464214 heuristic_black[262378]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  1 09:31:42 np0005464214 heuristic_black[262378]:                "ceph.block_uuid": "wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS",
Oct  1 09:31:42 np0005464214 heuristic_black[262378]:                "ceph.cephx_lockbox_secret": "",
Oct  1 09:31:42 np0005464214 heuristic_black[262378]:                "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 09:31:42 np0005464214 heuristic_black[262378]:                "ceph.cluster_name": "ceph",
Oct  1 09:31:42 np0005464214 heuristic_black[262378]:                "ceph.crush_device_class": "",
Oct  1 09:31:42 np0005464214 heuristic_black[262378]:                "ceph.encrypted": "0",
Oct  1 09:31:42 np0005464214 heuristic_black[262378]:                "ceph.osd_fsid": "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982",
Oct  1 09:31:42 np0005464214 heuristic_black[262378]:                "ceph.osd_id": "0",
Oct  1 09:31:42 np0005464214 heuristic_black[262378]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 09:31:42 np0005464214 heuristic_black[262378]:                "ceph.type": "block",
Oct  1 09:31:42 np0005464214 heuristic_black[262378]:                "ceph.vdo": "0"
Oct  1 09:31:42 np0005464214 heuristic_black[262378]:            },
Oct  1 09:31:42 np0005464214 heuristic_black[262378]:            "type": "block",
Oct  1 09:31:42 np0005464214 heuristic_black[262378]:            "vg_name": "ceph_vg0"
Oct  1 09:31:42 np0005464214 heuristic_black[262378]:        }
Oct  1 09:31:42 np0005464214 heuristic_black[262378]:    ],
Oct  1 09:31:42 np0005464214 heuristic_black[262378]:    "1": [
Oct  1 09:31:42 np0005464214 heuristic_black[262378]:        {
Oct  1 09:31:42 np0005464214 heuristic_black[262378]:            "devices": [
Oct  1 09:31:42 np0005464214 heuristic_black[262378]:                "/dev/loop4"
Oct  1 09:31:42 np0005464214 heuristic_black[262378]:            ],
Oct  1 09:31:42 np0005464214 heuristic_black[262378]:            "lv_name": "ceph_lv1",
Oct  1 09:31:42 np0005464214 heuristic_black[262378]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  1 09:31:42 np0005464214 heuristic_black[262378]:            "lv_size": "21470642176",
Oct  1 09:31:42 np0005464214 heuristic_black[262378]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f5852bc7-e830-489a-b8a9-42dfbbe71426,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 09:31:42 np0005464214 heuristic_black[262378]:            "lv_uuid": "BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY",
Oct  1 09:31:42 np0005464214 heuristic_black[262378]:            "name": "ceph_lv1",
Oct  1 09:31:42 np0005464214 heuristic_black[262378]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  1 09:31:42 np0005464214 heuristic_black[262378]:            "tags": {
Oct  1 09:31:42 np0005464214 heuristic_black[262378]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  1 09:31:42 np0005464214 heuristic_black[262378]:                "ceph.block_uuid": "BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY",
Oct  1 09:31:42 np0005464214 heuristic_black[262378]:                "ceph.cephx_lockbox_secret": "",
Oct  1 09:31:42 np0005464214 heuristic_black[262378]:                "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 09:31:42 np0005464214 heuristic_black[262378]:                "ceph.cluster_name": "ceph",
Oct  1 09:31:42 np0005464214 heuristic_black[262378]:                "ceph.crush_device_class": "",
Oct  1 09:31:42 np0005464214 heuristic_black[262378]:                "ceph.encrypted": "0",
Oct  1 09:31:42 np0005464214 heuristic_black[262378]:                "ceph.osd_fsid": "f5852bc7-e830-489a-b8a9-42dfbbe71426",
Oct  1 09:31:42 np0005464214 heuristic_black[262378]:                "ceph.osd_id": "1",
Oct  1 09:31:42 np0005464214 heuristic_black[262378]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 09:31:42 np0005464214 heuristic_black[262378]:                "ceph.type": "block",
Oct  1 09:31:42 np0005464214 heuristic_black[262378]:                "ceph.vdo": "0"
Oct  1 09:31:42 np0005464214 heuristic_black[262378]:            },
Oct  1 09:31:42 np0005464214 heuristic_black[262378]:            "type": "block",
Oct  1 09:31:42 np0005464214 heuristic_black[262378]:            "vg_name": "ceph_vg1"
Oct  1 09:31:42 np0005464214 heuristic_black[262378]:        }
Oct  1 09:31:42 np0005464214 heuristic_black[262378]:    ],
Oct  1 09:31:42 np0005464214 heuristic_black[262378]:    "2": [
Oct  1 09:31:42 np0005464214 heuristic_black[262378]:        {
Oct  1 09:31:42 np0005464214 heuristic_black[262378]:            "devices": [
Oct  1 09:31:42 np0005464214 heuristic_black[262378]:                "/dev/loop5"
Oct  1 09:31:42 np0005464214 heuristic_black[262378]:            ],
Oct  1 09:31:42 np0005464214 heuristic_black[262378]:            "lv_name": "ceph_lv2",
Oct  1 09:31:42 np0005464214 heuristic_black[262378]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  1 09:31:42 np0005464214 heuristic_black[262378]:            "lv_size": "21470642176",
Oct  1 09:31:42 np0005464214 heuristic_black[262378]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c4c937e2-a8a8-47c3-af37-fdedb6fff1f9,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 09:31:42 np0005464214 heuristic_black[262378]:            "lv_uuid": "mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK",
Oct  1 09:31:42 np0005464214 heuristic_black[262378]:            "name": "ceph_lv2",
Oct  1 09:31:42 np0005464214 heuristic_black[262378]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  1 09:31:42 np0005464214 heuristic_black[262378]:            "tags": {
Oct  1 09:31:42 np0005464214 heuristic_black[262378]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  1 09:31:42 np0005464214 heuristic_black[262378]:                "ceph.block_uuid": "mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK",
Oct  1 09:31:42 np0005464214 heuristic_black[262378]:                "ceph.cephx_lockbox_secret": "",
Oct  1 09:31:42 np0005464214 heuristic_black[262378]:                "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 09:31:42 np0005464214 heuristic_black[262378]:                "ceph.cluster_name": "ceph",
Oct  1 09:31:42 np0005464214 heuristic_black[262378]:                "ceph.crush_device_class": "",
Oct  1 09:31:42 np0005464214 heuristic_black[262378]:                "ceph.encrypted": "0",
Oct  1 09:31:42 np0005464214 heuristic_black[262378]:                "ceph.osd_fsid": "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9",
Oct  1 09:31:42 np0005464214 heuristic_black[262378]:                "ceph.osd_id": "2",
Oct  1 09:31:42 np0005464214 heuristic_black[262378]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 09:31:42 np0005464214 heuristic_black[262378]:                "ceph.type": "block",
Oct  1 09:31:42 np0005464214 heuristic_black[262378]:                "ceph.vdo": "0"
Oct  1 09:31:42 np0005464214 heuristic_black[262378]:            },
Oct  1 09:31:42 np0005464214 heuristic_black[262378]:            "type": "block",
Oct  1 09:31:42 np0005464214 heuristic_black[262378]:            "vg_name": "ceph_vg2"
Oct  1 09:31:42 np0005464214 heuristic_black[262378]:        }
Oct  1 09:31:42 np0005464214 heuristic_black[262378]:    ]
Oct  1 09:31:42 np0005464214 heuristic_black[262378]: }
Oct  1 09:31:42 np0005464214 systemd[1]: libpod-69e7b0b39f3f5c27fdf9ca48701f9fa21df900a36942fd24b307fe51b344bfcc.scope: Deactivated successfully.
Oct  1 09:31:42 np0005464214 podman[262362]: 2025-10-01 13:31:42.153311728 +0000 UTC m=+2.917050272 container died 69e7b0b39f3f5c27fdf9ca48701f9fa21df900a36942fd24b307fe51b344bfcc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_black, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct  1 09:31:42 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v812: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 0 B/s wr, 3 op/s
Oct  1 09:31:43 np0005464214 systemd[1]: var-lib-containers-storage-overlay-b3c54575d5bf37ced6b94738d59a69bd7bc2ca876c93f9bbcbf964a4b292126e-merged.mount: Deactivated successfully.
Oct  1 09:31:44 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v813: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 3.7 KiB/s rd, 0 B/s wr, 6 op/s
Oct  1 09:31:45 np0005464214 podman[262362]: 2025-10-01 13:31:45.161529192 +0000 UTC m=+5.925267756 container remove 69e7b0b39f3f5c27fdf9ca48701f9fa21df900a36942fd24b307fe51b344bfcc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_black, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True)
Oct  1 09:31:45 np0005464214 systemd[1]: libpod-conmon-69e7b0b39f3f5c27fdf9ca48701f9fa21df900a36942fd24b307fe51b344bfcc.scope: Deactivated successfully.
Oct  1 09:31:45 np0005464214 podman[262542]: 2025-10-01 13:31:45.89862856 +0000 UTC m=+0.027975302 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 09:31:46 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:31:46 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v814: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.6 KiB/s rd, 0 B/s wr, 4 op/s
Oct  1 09:31:46 np0005464214 podman[262542]: 2025-10-01 13:31:46.607216359 +0000 UTC m=+0.736563081 container create e1760aa045408796d7540c2de61dfe19bfdec4e65732a42ea6885da1cd2ea996 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_sanderson, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 09:31:47 np0005464214 systemd[1]: Started libpod-conmon-e1760aa045408796d7540c2de61dfe19bfdec4e65732a42ea6885da1cd2ea996.scope.
Oct  1 09:31:47 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:31:47 np0005464214 podman[262542]: 2025-10-01 13:31:47.699939544 +0000 UTC m=+1.829286286 container init e1760aa045408796d7540c2de61dfe19bfdec4e65732a42ea6885da1cd2ea996 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_sanderson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Oct  1 09:31:47 np0005464214 podman[262542]: 2025-10-01 13:31:47.70925273 +0000 UTC m=+1.838599442 container start e1760aa045408796d7540c2de61dfe19bfdec4e65732a42ea6885da1cd2ea996 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_sanderson, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct  1 09:31:47 np0005464214 ecstatic_sanderson[262559]: 167 167
Oct  1 09:31:47 np0005464214 systemd[1]: libpod-e1760aa045408796d7540c2de61dfe19bfdec4e65732a42ea6885da1cd2ea996.scope: Deactivated successfully.
Oct  1 09:31:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] Optimize plan auto_2025-10-01_13:31:47
Oct  1 09:31:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  1 09:31:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] do_upmap
Oct  1 09:31:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] pools ['vms', 'cephfs.cephfs.data', 'volumes', 'backups', 'default.rgw.control', 'default.rgw.log', 'default.rgw.meta', 'images', '.mgr', '.rgw.root', 'cephfs.cephfs.meta']
Oct  1 09:31:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] prepared 0/10 changes
Oct  1 09:31:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 09:31:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 09:31:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 09:31:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 09:31:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 09:31:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 09:31:47 np0005464214 podman[262542]: 2025-10-01 13:31:47.834229317 +0000 UTC m=+1.963576049 container attach e1760aa045408796d7540c2de61dfe19bfdec4e65732a42ea6885da1cd2ea996 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_sanderson, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 09:31:47 np0005464214 podman[262542]: 2025-10-01 13:31:47.836537401 +0000 UTC m=+1.965884143 container died e1760aa045408796d7540c2de61dfe19bfdec4e65732a42ea6885da1cd2ea996 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_sanderson, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 09:31:47 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  1 09:31:47 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  1 09:31:47 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  1 09:31:47 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  1 09:31:47 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  1 09:31:47 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  1 09:31:47 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  1 09:31:47 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  1 09:31:47 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  1 09:31:47 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  1 09:31:48 np0005464214 systemd[1]: var-lib-containers-storage-overlay-3db817f22e3e1daec8a995e51c51035068c98fe382e89745fede2374e375f4df-merged.mount: Deactivated successfully.
Oct  1 09:31:48 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v815: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 3.5 KiB/s rd, 0 B/s wr, 5 op/s
Oct  1 09:31:48 np0005464214 podman[262542]: 2025-10-01 13:31:48.514578199 +0000 UTC m=+2.643924951 container remove e1760aa045408796d7540c2de61dfe19bfdec4e65732a42ea6885da1cd2ea996 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_sanderson, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True)
Oct  1 09:31:48 np0005464214 systemd[1]: libpod-conmon-e1760aa045408796d7540c2de61dfe19bfdec4e65732a42ea6885da1cd2ea996.scope: Deactivated successfully.
Oct  1 09:31:48 np0005464214 podman[262584]: 2025-10-01 13:31:48.674982744 +0000 UTC m=+0.030341017 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 09:31:49 np0005464214 podman[262584]: 2025-10-01 13:31:49.052033052 +0000 UTC m=+0.407391345 container create 0c319d3fdb18f210094402a1adc45419168e357e11076ddafcb5a98aa0ed7316 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_mirzakhani, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct  1 09:31:49 np0005464214 systemd[1]: Started libpod-conmon-0c319d3fdb18f210094402a1adc45419168e357e11076ddafcb5a98aa0ed7316.scope.
Oct  1 09:31:49 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:31:49 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/38d2c99a39c046a1b1c02cacc00f0fddbdb88f122c44ef87bbd908d67bfc75e3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 09:31:49 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/38d2c99a39c046a1b1c02cacc00f0fddbdb88f122c44ef87bbd908d67bfc75e3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 09:31:49 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/38d2c99a39c046a1b1c02cacc00f0fddbdb88f122c44ef87bbd908d67bfc75e3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 09:31:49 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/38d2c99a39c046a1b1c02cacc00f0fddbdb88f122c44ef87bbd908d67bfc75e3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 09:31:50 np0005464214 podman[262584]: 2025-10-01 13:31:50.092903437 +0000 UTC m=+1.448261740 container init 0c319d3fdb18f210094402a1adc45419168e357e11076ddafcb5a98aa0ed7316 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_mirzakhani, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Oct  1 09:31:50 np0005464214 podman[262584]: 2025-10-01 13:31:50.105193799 +0000 UTC m=+1.460552092 container start 0c319d3fdb18f210094402a1adc45419168e357e11076ddafcb5a98aa0ed7316 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_mirzakhani, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct  1 09:31:50 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v816: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 0 B/s wr, 4 op/s
Oct  1 09:31:50 np0005464214 podman[262584]: 2025-10-01 13:31:50.395786696 +0000 UTC m=+1.751144979 container attach 0c319d3fdb18f210094402a1adc45419168e357e11076ddafcb5a98aa0ed7316 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_mirzakhani, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Oct  1 09:31:51 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:31:51 np0005464214 lucid_mirzakhani[262600]: {
Oct  1 09:31:51 np0005464214 lucid_mirzakhani[262600]:    "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982": {
Oct  1 09:31:51 np0005464214 lucid_mirzakhani[262600]:        "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 09:31:51 np0005464214 lucid_mirzakhani[262600]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  1 09:31:51 np0005464214 lucid_mirzakhani[262600]:        "osd_id": 0,
Oct  1 09:31:51 np0005464214 lucid_mirzakhani[262600]:        "osd_uuid": "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982",
Oct  1 09:31:51 np0005464214 lucid_mirzakhani[262600]:        "type": "bluestore"
Oct  1 09:31:51 np0005464214 lucid_mirzakhani[262600]:    },
Oct  1 09:31:51 np0005464214 lucid_mirzakhani[262600]:    "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9": {
Oct  1 09:31:51 np0005464214 lucid_mirzakhani[262600]:        "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 09:31:51 np0005464214 lucid_mirzakhani[262600]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  1 09:31:51 np0005464214 lucid_mirzakhani[262600]:        "osd_id": 2,
Oct  1 09:31:51 np0005464214 lucid_mirzakhani[262600]:        "osd_uuid": "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9",
Oct  1 09:31:51 np0005464214 lucid_mirzakhani[262600]:        "type": "bluestore"
Oct  1 09:31:51 np0005464214 lucid_mirzakhani[262600]:    },
Oct  1 09:31:51 np0005464214 lucid_mirzakhani[262600]:    "f5852bc7-e830-489a-b8a9-42dfbbe71426": {
Oct  1 09:31:51 np0005464214 lucid_mirzakhani[262600]:        "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 09:31:51 np0005464214 lucid_mirzakhani[262600]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  1 09:31:51 np0005464214 lucid_mirzakhani[262600]:        "osd_id": 1,
Oct  1 09:31:51 np0005464214 lucid_mirzakhani[262600]:        "osd_uuid": "f5852bc7-e830-489a-b8a9-42dfbbe71426",
Oct  1 09:31:51 np0005464214 lucid_mirzakhani[262600]:        "type": "bluestore"
Oct  1 09:31:51 np0005464214 lucid_mirzakhani[262600]:    }
Oct  1 09:31:51 np0005464214 lucid_mirzakhani[262600]: }
Oct  1 09:31:51 np0005464214 systemd[1]: libpod-0c319d3fdb18f210094402a1adc45419168e357e11076ddafcb5a98aa0ed7316.scope: Deactivated successfully.
Oct  1 09:31:51 np0005464214 systemd[1]: libpod-0c319d3fdb18f210094402a1adc45419168e357e11076ddafcb5a98aa0ed7316.scope: Consumed 1.054s CPU time.
Oct  1 09:31:51 np0005464214 podman[262584]: 2025-10-01 13:31:51.153994785 +0000 UTC m=+2.509353048 container died 0c319d3fdb18f210094402a1adc45419168e357e11076ddafcb5a98aa0ed7316 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_mirzakhani, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 09:31:52 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v817: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 3.1 KiB/s rd, 0 B/s wr, 5 op/s
Oct  1 09:31:52 np0005464214 systemd[1]: var-lib-containers-storage-overlay-38d2c99a39c046a1b1c02cacc00f0fddbdb88f122c44ef87bbd908d67bfc75e3-merged.mount: Deactivated successfully.
Oct  1 09:31:54 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v818: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 4.6 KiB/s rd, 0 B/s wr, 7 op/s
Oct  1 09:31:54 np0005464214 podman[262584]: 2025-10-01 13:31:54.406876905 +0000 UTC m=+5.762235188 container remove 0c319d3fdb18f210094402a1adc45419168e357e11076ddafcb5a98aa0ed7316 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_mirzakhani, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0)
Oct  1 09:31:54 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  1 09:31:54 np0005464214 systemd[1]: libpod-conmon-0c319d3fdb18f210094402a1adc45419168e357e11076ddafcb5a98aa0ed7316.scope: Deactivated successfully.
Oct  1 09:31:55 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  1 09:31:55 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2707511699' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  1 09:31:55 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  1 09:31:55 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2707511699' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  1 09:31:55 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:31:55 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  1 09:31:55 np0005464214 nova_compute[260022]: 2025-10-01 13:31:55.312 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 09:31:55 np0005464214 nova_compute[260022]: 2025-10-01 13:31:55.336 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 09:31:55 np0005464214 nova_compute[260022]: 2025-10-01 13:31:55.336 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 09:31:55 np0005464214 nova_compute[260022]: 2025-10-01 13:31:55.337 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 09:31:55 np0005464214 nova_compute[260022]: 2025-10-01 13:31:55.337 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  1 09:31:55 np0005464214 nova_compute[260022]: 2025-10-01 13:31:55.338 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 09:31:55 np0005464214 nova_compute[260022]: 2025-10-01 13:31:55.370 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 09:31:55 np0005464214 nova_compute[260022]: 2025-10-01 13:31:55.371 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 09:31:55 np0005464214 nova_compute[260022]: 2025-10-01 13:31:55.372 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 09:31:55 np0005464214 nova_compute[260022]: 2025-10-01 13:31:55.372 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  1 09:31:55 np0005464214 nova_compute[260022]: 2025-10-01 13:31:55.373 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 09:31:55 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:31:55 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev ec3c167b-9f34-4db5-a3fc-cae7b3db6187 does not exist
Oct  1 09:31:55 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev 797e4a4e-6d79-4f0f-b5ff-bdf9638a6f15 does not exist
Oct  1 09:31:55 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:31:56 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  1 09:31:56 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3595402553' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  1 09:31:56 np0005464214 nova_compute[260022]: 2025-10-01 13:31:56.047 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.675s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 09:31:56 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:31:56 np0005464214 nova_compute[260022]: 2025-10-01 13:31:56.283 2 WARNING nova.virt.libvirt.driver [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  1 09:31:56 np0005464214 nova_compute[260022]: 2025-10-01 13:31:56.284 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5168MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct  1 09:31:56 np0005464214 nova_compute[260022]: 2025-10-01 13:31:56.284 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  1 09:31:56 np0005464214 nova_compute[260022]: 2025-10-01 13:31:56.285 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  1 09:31:56 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v819: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 3.2 KiB/s rd, 0 B/s wr, 5 op/s
Oct  1 09:31:56 np0005464214 nova_compute[260022]: 2025-10-01 13:31:56.404 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct  1 09:31:56 np0005464214 nova_compute[260022]: 2025-10-01 13:31:56.405 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct  1 09:31:56 np0005464214 nova_compute[260022]: 2025-10-01 13:31:56.436 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  1 09:31:56 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  1 09:31:56 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2047125485' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  1 09:31:56 np0005464214 nova_compute[260022]: 2025-10-01 13:31:56.941 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.505s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  1 09:31:56 np0005464214 nova_compute[260022]: 2025-10-01 13:31:56.949 2 DEBUG nova.compute.provider_tree [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Inventory has not changed in ProviderTree for provider: c1b9017d-7e6f-44ea-9ee2-bc19313d736f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct  1 09:31:56 np0005464214 nova_compute[260022]: 2025-10-01 13:31:56.966 2 DEBUG nova.scheduler.client.report [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Inventory has not changed for provider c1b9017d-7e6f-44ea-9ee2-bc19313d736f based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct  1 09:31:56 np0005464214 nova_compute[260022]: 2025-10-01 13:31:56.967 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct  1 09:31:56 np0005464214 nova_compute[260022]: 2025-10-01 13:31:56.968 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.683s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  1 09:31:56 np0005464214 nova_compute[260022]: 2025-10-01 13:31:56.977 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  1 09:31:56 np0005464214 nova_compute[260022]: 2025-10-01 13:31:56.978 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  1 09:31:56 np0005464214 nova_compute[260022]: 2025-10-01 13:31:56.978 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct  1 09:31:56 np0005464214 nova_compute[260022]: 2025-10-01 13:31:56.978 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct  1 09:31:57 np0005464214 nova_compute[260022]: 2025-10-01 13:31:57.003 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct  1 09:31:57 np0005464214 nova_compute[260022]: 2025-10-01 13:31:57.004 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  1 09:31:57 np0005464214 nova_compute[260022]: 2025-10-01 13:31:57.004 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  1 09:31:57 np0005464214 nova_compute[260022]: 2025-10-01 13:31:57.004 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  1 09:31:57 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:31:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] _maybe_adjust
Oct  1 09:31:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:31:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  1 09:31:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:31:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 09:31:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:31:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 09:31:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:31:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 09:31:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:31:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 09:31:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:31:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct  1 09:31:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:31:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 09:31:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:31:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  1 09:31:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:31:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  1 09:31:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:31:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 09:31:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:31:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  1 09:31:58 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v820: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 4.9 KiB/s rd, 0 B/s wr, 8 op/s
Oct  1 09:32:00 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v821: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 4.0 KiB/s rd, 0 B/s wr, 6 op/s
Oct  1 09:32:01 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:32:02 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v822: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 5.7 KiB/s rd, 0 B/s wr, 9 op/s
Oct  1 09:32:03 np0005464214 podman[262744]: 2025-10-01 13:32:03.527246432 +0000 UTC m=+0.069075340 container health_status a1dc94033b3cdaf0c1939938a446bc6911aa0e2bf0fa0c22c3e618390e8d96e1 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20250923, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.license=GPLv2)
Oct  1 09:32:03 np0005464214 podman[262750]: 2025-10-01 13:32:03.536453654 +0000 UTC m=+0.059157623 container health_status dfa509645741d50db256c392f2c7e50ef8a1653f296eb853aefbe3d2ba4746c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=36bccb96575468ec919301205d8daa2c, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20250923, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct  1 09:32:03 np0005464214 podman[262745]: 2025-10-01 13:32:03.564147686 +0000 UTC m=+0.097681290 container health_status c13c85560319b246b9406aed1b6b3015b2c424d3ffff37d0301b6634f5976b0d (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_build_tag=36bccb96575468ec919301205d8daa2c, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20250923, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  1 09:32:03 np0005464214 podman[262743]: 2025-10-01 13:32:03.568759383 +0000 UTC m=+0.114069331 container health_status 583cde33fa111e3f81ff9a7adf51b7bee43cd123d54d60383b2d474b3b5066ad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20250923)
Oct  1 09:32:04 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v823: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 6.2 KiB/s rd, 0 B/s wr, 10 op/s
Oct  1 09:32:06 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:32:06 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v824: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 4.4 KiB/s rd, 0 B/s wr, 7 op/s
Oct  1 09:32:08 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v825: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 11 KiB/s rd, 0 B/s wr, 17 op/s
Oct  1 09:32:10 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v826: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 9.1 KiB/s rd, 0 B/s wr, 15 op/s
Oct  1 09:32:11 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:32:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:32:12.299 161890 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  1 09:32:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:32:12.300 161890 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  1 09:32:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:32:12.300 161890 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  1 09:32:12 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v827: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 11 KiB/s rd, 0 B/s wr, 18 op/s
Oct  1 09:32:12 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "version", "format": "json"} v 0) v1
Oct  1 09:32:12 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/690996490' entity='client.openstack' cmd=[{"prefix": "version", "format": "json"}]: dispatch
Oct  1 09:32:12 np0005464214 ceph-mgr[75103]: log_channel(audit) log [DBG] : from='client.14359 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Oct  1 09:32:12 np0005464214 ceph-mgr[75103]: [volumes INFO volumes.module] Starting _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Oct  1 09:32:12 np0005464214 ceph-mgr[75103]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Oct  1 09:32:14 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v828: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 12 KiB/s rd, 0 B/s wr, 20 op/s
Oct  1 09:32:16 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:32:16 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v829: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 11 KiB/s rd, 0 B/s wr, 18 op/s
Oct  1 09:32:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 09:32:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 09:32:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 09:32:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 09:32:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 09:32:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 09:32:18 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v830: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 12 KiB/s rd, 0 B/s wr, 19 op/s
Oct  1 09:32:20 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v831: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 5.5 KiB/s rd, 0 B/s wr, 9 op/s
Oct  1 09:32:21 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:32:22 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v832: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 5.5 KiB/s rd, 0 B/s wr, 9 op/s
Oct  1 09:32:24 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v833: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 3.5 KiB/s rd, 0 B/s wr, 5 op/s
Oct  1 09:32:26 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:32:26 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v834: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 596 B/s rd, 0 B/s wr, 0 op/s
Oct  1 09:32:28 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v835: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 596 B/s rd, 0 B/s wr, 0 op/s
Oct  1 09:32:30 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "version", "format": "json"} v 0) v1
Oct  1 09:32:30 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1087025162' entity='client.openstack' cmd=[{"prefix": "version", "format": "json"}]: dispatch
Oct  1 09:32:30 np0005464214 ceph-mgr[75103]: log_channel(audit) log [DBG] : from='client.14361 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Oct  1 09:32:30 np0005464214 ceph-mgr[75103]: [volumes INFO volumes.module] Starting _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Oct  1 09:32:30 np0005464214 ceph-mgr[75103]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Oct  1 09:32:30 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v836: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:32:31 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:32:32 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v837: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:32:34 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v838: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:32:34 np0005464214 podman[262831]: 2025-10-01 13:32:34.557709422 +0000 UTC m=+0.089264042 container health_status dfa509645741d50db256c392f2c7e50ef8a1653f296eb853aefbe3d2ba4746c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, 
org.label-schema.build-date=20250923, org.label-schema.schema-version=1.0, tcib_build_tag=36bccb96575468ec919301205d8daa2c, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Oct  1 09:32:34 np0005464214 podman[262830]: 2025-10-01 13:32:34.562770603 +0000 UTC m=+0.100140138 container health_status c13c85560319b246b9406aed1b6b3015b2c424d3ffff37d0301b6634f5976b0d (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250923, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 09:32:34 np0005464214 podman[262829]: 2025-10-01 13:32:34.581214529 +0000 UTC m=+0.122650894 container health_status a1dc94033b3cdaf0c1939938a446bc6911aa0e2bf0fa0c22c3e618390e8d96e1 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_managed=true, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.build-date=20250923, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c)
Oct  1 09:32:34 np0005464214 podman[262828]: 2025-10-01 13:32:34.591103034 +0000 UTC m=+0.137840778 container health_status 583cde33fa111e3f81ff9a7adf51b7bee43cd123d54d60383b2d474b3b5066ad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20250923, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true)
Oct  1 09:32:36 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:32:36 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v839: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:32:38 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v840: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:32:40 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v841: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:32:41 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:32:42 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v842: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:32:44 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v843: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:32:46 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:32:46 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v844: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:32:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] Optimize plan auto_2025-10-01_13:32:47
Oct  1 09:32:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  1 09:32:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] do_upmap
Oct  1 09:32:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] pools ['.mgr', 'vms', '.rgw.root', 'cephfs.cephfs.meta', 'volumes', 'cephfs.cephfs.data', 'default.rgw.control', 'backups', 'images', 'default.rgw.log', 'default.rgw.meta']
Oct  1 09:32:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] prepared 0/10 changes
Oct  1 09:32:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 09:32:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 09:32:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 09:32:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 09:32:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 09:32:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 09:32:47 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  1 09:32:47 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  1 09:32:47 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  1 09:32:47 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  1 09:32:47 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  1 09:32:47 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  1 09:32:47 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  1 09:32:47 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  1 09:32:47 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  1 09:32:47 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  1 09:32:48 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v845: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:32:50 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v846: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:32:51 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:32:52 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v847: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:32:54 np0005464214 nova_compute[260022]: 2025-10-01 13:32:54.344 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 09:32:54 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v848: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:32:55 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  1 09:32:55 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1953406059' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  1 09:32:55 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  1 09:32:55 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1953406059' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  1 09:32:55 np0005464214 systemd[1]: packagekit.service: Deactivated successfully.
Oct  1 09:32:55 np0005464214 nova_compute[260022]: 2025-10-01 13:32:55.340 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 09:32:55 np0005464214 nova_compute[260022]: 2025-10-01 13:32:55.343 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 09:32:55 np0005464214 nova_compute[260022]: 2025-10-01 13:32:55.344 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 09:32:55 np0005464214 nova_compute[260022]: 2025-10-01 13:32:55.344 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 09:32:55 np0005464214 nova_compute[260022]: 2025-10-01 13:32:55.345 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  1 09:32:55 np0005464214 nova_compute[260022]: 2025-10-01 13:32:55.345 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 09:32:55 np0005464214 nova_compute[260022]: 2025-10-01 13:32:55.489 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 09:32:55 np0005464214 nova_compute[260022]: 2025-10-01 13:32:55.489 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 09:32:55 np0005464214 nova_compute[260022]: 2025-10-01 13:32:55.490 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 09:32:55 np0005464214 nova_compute[260022]: 2025-10-01 13:32:55.490 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  1 09:32:55 np0005464214 nova_compute[260022]: 2025-10-01 13:32:55.490 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 09:32:55 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  1 09:32:55 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/945745090' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  1 09:32:55 np0005464214 nova_compute[260022]: 2025-10-01 13:32:55.991 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.501s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 09:32:56 np0005464214 nova_compute[260022]: 2025-10-01 13:32:56.157 2 WARNING nova.virt.libvirt.driver [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  1 09:32:56 np0005464214 nova_compute[260022]: 2025-10-01 13:32:56.159 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5211MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  1 09:32:56 np0005464214 nova_compute[260022]: 2025-10-01 13:32:56.159 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 09:32:56 np0005464214 nova_compute[260022]: 2025-10-01 13:32:56.159 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 09:32:56 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:32:56 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v849: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:32:56 np0005464214 nova_compute[260022]: 2025-10-01 13:32:56.388 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  1 09:32:56 np0005464214 nova_compute[260022]: 2025-10-01 13:32:56.389 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  1 09:32:56 np0005464214 nova_compute[260022]: 2025-10-01 13:32:56.407 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 09:32:56 np0005464214 podman[263121]: 2025-10-01 13:32:56.876615805 +0000 UTC m=+0.151008727 container exec dfadbb96d7d51fa7c2ed721145616ced603621ae12bae5125feaf16532ef7320 (image=quay.io/ceph/ceph:v18, name=ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-mon-compute-0, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 09:32:56 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  1 09:32:56 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2729839456' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  1 09:32:56 np0005464214 nova_compute[260022]: 2025-10-01 13:32:56.909 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.502s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 09:32:56 np0005464214 nova_compute[260022]: 2025-10-01 13:32:56.916 2 DEBUG nova.compute.provider_tree [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Inventory has not changed in ProviderTree for provider: c1b9017d-7e6f-44ea-9ee2-bc19313d736f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  1 09:32:57 np0005464214 podman[263121]: 2025-10-01 13:32:57.059215346 +0000 UTC m=+0.333608208 container exec_died dfadbb96d7d51fa7c2ed721145616ced603621ae12bae5125feaf16532ef7320 (image=quay.io/ceph/ceph:v18, name=ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Oct  1 09:32:57 np0005464214 nova_compute[260022]: 2025-10-01 13:32:57.103 2 DEBUG nova.scheduler.client.report [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Inventory has not changed for provider c1b9017d-7e6f-44ea-9ee2-bc19313d736f based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  1 09:32:57 np0005464214 nova_compute[260022]: 2025-10-01 13:32:57.107 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  1 09:32:57 np0005464214 nova_compute[260022]: 2025-10-01 13:32:57.108 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.949s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 09:32:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] _maybe_adjust
Oct  1 09:32:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:32:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  1 09:32:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:32:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 09:32:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:32:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 09:32:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:32:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 09:32:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:32:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 09:32:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:32:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct  1 09:32:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:32:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 09:32:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:32:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  1 09:32:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:32:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  1 09:32:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:32:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 09:32:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:32:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  1 09:32:58 np0005464214 nova_compute[260022]: 2025-10-01 13:32:58.111 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 09:32:58 np0005464214 nova_compute[260022]: 2025-10-01 13:32:58.111 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  1 09:32:58 np0005464214 nova_compute[260022]: 2025-10-01 13:32:58.112 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  1 09:32:58 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  1 09:32:58 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:32:58 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  1 09:32:58 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:32:58 np0005464214 nova_compute[260022]: 2025-10-01 13:32:58.241 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  1 09:32:58 np0005464214 nova_compute[260022]: 2025-10-01 13:32:58.241 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 09:32:58 np0005464214 nova_compute[260022]: 2025-10-01 13:32:58.242 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 09:32:58 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v850: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:32:59 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  1 09:32:59 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  1 09:32:59 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  1 09:32:59 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  1 09:32:59 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  1 09:32:59 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:32:59 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev 8d52c35e-ddc1-4050-ade1-f3501704b1ae does not exist
Oct  1 09:32:59 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev 8a6f1498-2d97-4bd5-9abf-510b7e1e4f36 does not exist
Oct  1 09:32:59 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev ba4af2b8-f3b9-42fc-94ce-c9f42d7e9b25 does not exist
Oct  1 09:32:59 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  1 09:32:59 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  1 09:32:59 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  1 09:32:59 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  1 09:32:59 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  1 09:32:59 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  1 09:32:59 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:32:59 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:32:59 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  1 09:32:59 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:32:59 np0005464214 podman[263550]: 2025-10-01 13:32:59.934972944 +0000 UTC m=+0.049488165 container create 3bbfa49fa6612211b6314cf261fd2e6184a6b356a0f7dacb979e71a954b8c8ed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_hamilton, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True)
Oct  1 09:32:59 np0005464214 systemd[1]: Started libpod-conmon-3bbfa49fa6612211b6314cf261fd2e6184a6b356a0f7dacb979e71a954b8c8ed.scope.
Oct  1 09:33:00 np0005464214 podman[263550]: 2025-10-01 13:32:59.913402498 +0000 UTC m=+0.027917809 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 09:33:00 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:33:00 np0005464214 podman[263550]: 2025-10-01 13:33:00.036453444 +0000 UTC m=+0.150968775 container init 3bbfa49fa6612211b6314cf261fd2e6184a6b356a0f7dacb979e71a954b8c8ed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_hamilton, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 09:33:00 np0005464214 podman[263550]: 2025-10-01 13:33:00.048681163 +0000 UTC m=+0.163196404 container start 3bbfa49fa6612211b6314cf261fd2e6184a6b356a0f7dacb979e71a954b8c8ed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_hamilton, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 09:33:00 np0005464214 podman[263550]: 2025-10-01 13:33:00.053067033 +0000 UTC m=+0.167582304 container attach 3bbfa49fa6612211b6314cf261fd2e6184a6b356a0f7dacb979e71a954b8c8ed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_hamilton, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Oct  1 09:33:00 np0005464214 funny_hamilton[263566]: 167 167
Oct  1 09:33:00 np0005464214 systemd[1]: libpod-3bbfa49fa6612211b6314cf261fd2e6184a6b356a0f7dacb979e71a954b8c8ed.scope: Deactivated successfully.
Oct  1 09:33:00 np0005464214 podman[263550]: 2025-10-01 13:33:00.058427893 +0000 UTC m=+0.172943144 container died 3bbfa49fa6612211b6314cf261fd2e6184a6b356a0f7dacb979e71a954b8c8ed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_hamilton, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct  1 09:33:00 np0005464214 systemd[1]: var-lib-containers-storage-overlay-a9e955de163194503e6de1fc6e6d7efcca40ce5e8adebcd6f7b69c767706c2d1-merged.mount: Deactivated successfully.
Oct  1 09:33:00 np0005464214 podman[263550]: 2025-10-01 13:33:00.11771928 +0000 UTC m=+0.232234511 container remove 3bbfa49fa6612211b6314cf261fd2e6184a6b356a0f7dacb979e71a954b8c8ed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_hamilton, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 09:33:00 np0005464214 systemd[1]: libpod-conmon-3bbfa49fa6612211b6314cf261fd2e6184a6b356a0f7dacb979e71a954b8c8ed.scope: Deactivated successfully.
Oct  1 09:33:00 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  1 09:33:00 np0005464214 podman[263593]: 2025-10-01 13:33:00.316978062 +0000 UTC m=+0.069640099 container create 8e0afac76066aaa7988115c13f9f96f9342873e9e918c73ae82ee9c5318fb4c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_merkle, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 09:33:00 np0005464214 podman[263593]: 2025-10-01 13:33:00.27482563 +0000 UTC m=+0.027487727 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 09:33:00 np0005464214 systemd[1]: Started libpod-conmon-8e0afac76066aaa7988115c13f9f96f9342873e9e918c73ae82ee9c5318fb4c3.scope.
Oct  1 09:33:00 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v851: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:33:00 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:33:00 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3436738dbb72d955e2a6d1545077270ed1afbeeee6c8e3b95ea8fdf26835cfc0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 09:33:00 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3436738dbb72d955e2a6d1545077270ed1afbeeee6c8e3b95ea8fdf26835cfc0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 09:33:00 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3436738dbb72d955e2a6d1545077270ed1afbeeee6c8e3b95ea8fdf26835cfc0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 09:33:00 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3436738dbb72d955e2a6d1545077270ed1afbeeee6c8e3b95ea8fdf26835cfc0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 09:33:00 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3436738dbb72d955e2a6d1545077270ed1afbeeee6c8e3b95ea8fdf26835cfc0/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  1 09:33:00 np0005464214 podman[263593]: 2025-10-01 13:33:00.41812717 +0000 UTC m=+0.170789247 container init 8e0afac76066aaa7988115c13f9f96f9342873e9e918c73ae82ee9c5318fb4c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_merkle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 09:33:00 np0005464214 podman[263593]: 2025-10-01 13:33:00.437715293 +0000 UTC m=+0.190377350 container start 8e0afac76066aaa7988115c13f9f96f9342873e9e918c73ae82ee9c5318fb4c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_merkle, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct  1 09:33:00 np0005464214 podman[263593]: 2025-10-01 13:33:00.453604779 +0000 UTC m=+0.206266836 container attach 8e0afac76066aaa7988115c13f9f96f9342873e9e918c73ae82ee9c5318fb4c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_merkle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 09:33:01 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:33:01 np0005464214 strange_merkle[263611]: --> passed data devices: 0 physical, 3 LVM
Oct  1 09:33:01 np0005464214 strange_merkle[263611]: --> relative data size: 1.0
Oct  1 09:33:01 np0005464214 strange_merkle[263611]: --> All data devices are unavailable
Oct  1 09:33:01 np0005464214 systemd[1]: libpod-8e0afac76066aaa7988115c13f9f96f9342873e9e918c73ae82ee9c5318fb4c3.scope: Deactivated successfully.
Oct  1 09:33:01 np0005464214 systemd[1]: libpod-8e0afac76066aaa7988115c13f9f96f9342873e9e918c73ae82ee9c5318fb4c3.scope: Consumed 1.197s CPU time.
Oct  1 09:33:01 np0005464214 podman[263641]: 2025-10-01 13:33:01.738878722 +0000 UTC m=+0.026507975 container died 8e0afac76066aaa7988115c13f9f96f9342873e9e918c73ae82ee9c5318fb4c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_merkle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct  1 09:33:01 np0005464214 systemd[1]: var-lib-containers-storage-overlay-3436738dbb72d955e2a6d1545077270ed1afbeeee6c8e3b95ea8fdf26835cfc0-merged.mount: Deactivated successfully.
Oct  1 09:33:01 np0005464214 podman[263641]: 2025-10-01 13:33:01.876804461 +0000 UTC m=+0.164433694 container remove 8e0afac76066aaa7988115c13f9f96f9342873e9e918c73ae82ee9c5318fb4c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_merkle, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 09:33:01 np0005464214 systemd[1]: libpod-conmon-8e0afac76066aaa7988115c13f9f96f9342873e9e918c73ae82ee9c5318fb4c3.scope: Deactivated successfully.
Oct  1 09:33:02 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v852: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:33:02 np0005464214 podman[263796]: 2025-10-01 13:33:02.632331855 +0000 UTC m=+0.030154711 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 09:33:02 np0005464214 podman[263796]: 2025-10-01 13:33:02.729328951 +0000 UTC m=+0.127151757 container create ec46058295f2931318069322bca3639c78af3a6aca79de7f294141efcdf362a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_nightingale, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 09:33:02 np0005464214 systemd[1]: Started libpod-conmon-ec46058295f2931318069322bca3639c78af3a6aca79de7f294141efcdf362a6.scope.
Oct  1 09:33:02 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:33:02 np0005464214 podman[263796]: 2025-10-01 13:33:02.869623366 +0000 UTC m=+0.267446222 container init ec46058295f2931318069322bca3639c78af3a6aca79de7f294141efcdf362a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_nightingale, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507)
Oct  1 09:33:02 np0005464214 podman[263796]: 2025-10-01 13:33:02.882345921 +0000 UTC m=+0.280168727 container start ec46058295f2931318069322bca3639c78af3a6aca79de7f294141efcdf362a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_nightingale, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct  1 09:33:02 np0005464214 condescending_nightingale[263813]: 167 167
Oct  1 09:33:02 np0005464214 systemd[1]: libpod-ec46058295f2931318069322bca3639c78af3a6aca79de7f294141efcdf362a6.scope: Deactivated successfully.
Oct  1 09:33:02 np0005464214 podman[263796]: 2025-10-01 13:33:02.908809923 +0000 UTC m=+0.306632790 container attach ec46058295f2931318069322bca3639c78af3a6aca79de7f294141efcdf362a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_nightingale, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 09:33:02 np0005464214 podman[263796]: 2025-10-01 13:33:02.910268699 +0000 UTC m=+0.308091505 container died ec46058295f2931318069322bca3639c78af3a6aca79de7f294141efcdf362a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_nightingale, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 09:33:02 np0005464214 systemd[1]: var-lib-containers-storage-overlay-153daa18da67583dca5029c9acbfb1c93530dcffbada4cee9217df1e0c0d355d-merged.mount: Deactivated successfully.
Oct  1 09:33:03 np0005464214 podman[263796]: 2025-10-01 13:33:03.055033287 +0000 UTC m=+0.452856093 container remove ec46058295f2931318069322bca3639c78af3a6aca79de7f294141efcdf362a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_nightingale, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct  1 09:33:03 np0005464214 systemd[1]: libpod-conmon-ec46058295f2931318069322bca3639c78af3a6aca79de7f294141efcdf362a6.scope: Deactivated successfully.
Oct  1 09:33:03 np0005464214 podman[263839]: 2025-10-01 13:33:03.267198009 +0000 UTC m=+0.046872253 container create 28fc32aeb31401a5a25e6ddfcb1331555dfdf0ae90272ae41111536beb9ee02c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_almeida, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 09:33:03 np0005464214 systemd[1]: Started libpod-conmon-28fc32aeb31401a5a25e6ddfcb1331555dfdf0ae90272ae41111536beb9ee02c.scope.
Oct  1 09:33:03 np0005464214 podman[263839]: 2025-10-01 13:33:03.245971933 +0000 UTC m=+0.025646167 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 09:33:03 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:33:03 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d91a94b34eaeda0eaf8491b6ef83b68cf34aebc7b95e573094bc95b644cd4f9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 09:33:03 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d91a94b34eaeda0eaf8491b6ef83b68cf34aebc7b95e573094bc95b644cd4f9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 09:33:03 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d91a94b34eaeda0eaf8491b6ef83b68cf34aebc7b95e573094bc95b644cd4f9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 09:33:03 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d91a94b34eaeda0eaf8491b6ef83b68cf34aebc7b95e573094bc95b644cd4f9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 09:33:03 np0005464214 podman[263839]: 2025-10-01 13:33:03.372296053 +0000 UTC m=+0.151970367 container init 28fc32aeb31401a5a25e6ddfcb1331555dfdf0ae90272ae41111536beb9ee02c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_almeida, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 09:33:03 np0005464214 podman[263839]: 2025-10-01 13:33:03.389672186 +0000 UTC m=+0.169346440 container start 28fc32aeb31401a5a25e6ddfcb1331555dfdf0ae90272ae41111536beb9ee02c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_almeida, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct  1 09:33:03 np0005464214 podman[263839]: 2025-10-01 13:33:03.40205293 +0000 UTC m=+0.181727244 container attach 28fc32aeb31401a5a25e6ddfcb1331555dfdf0ae90272ae41111536beb9ee02c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_almeida, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct  1 09:33:04 np0005464214 vigilant_almeida[263855]: {
Oct  1 09:33:04 np0005464214 vigilant_almeida[263855]:    "0": [
Oct  1 09:33:04 np0005464214 vigilant_almeida[263855]:        {
Oct  1 09:33:04 np0005464214 vigilant_almeida[263855]:            "devices": [
Oct  1 09:33:04 np0005464214 vigilant_almeida[263855]:                "/dev/loop3"
Oct  1 09:33:04 np0005464214 vigilant_almeida[263855]:            ],
Oct  1 09:33:04 np0005464214 vigilant_almeida[263855]:            "lv_name": "ceph_lv0",
Oct  1 09:33:04 np0005464214 vigilant_almeida[263855]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  1 09:33:04 np0005464214 vigilant_almeida[263855]:            "lv_size": "21470642176",
Oct  1 09:33:04 np0005464214 vigilant_almeida[263855]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 09:33:04 np0005464214 vigilant_almeida[263855]:            "lv_uuid": "wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS",
Oct  1 09:33:04 np0005464214 vigilant_almeida[263855]:            "name": "ceph_lv0",
Oct  1 09:33:04 np0005464214 vigilant_almeida[263855]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  1 09:33:04 np0005464214 vigilant_almeida[263855]:            "tags": {
Oct  1 09:33:04 np0005464214 vigilant_almeida[263855]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  1 09:33:04 np0005464214 vigilant_almeida[263855]:                "ceph.block_uuid": "wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS",
Oct  1 09:33:04 np0005464214 vigilant_almeida[263855]:                "ceph.cephx_lockbox_secret": "",
Oct  1 09:33:04 np0005464214 vigilant_almeida[263855]:                "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 09:33:04 np0005464214 vigilant_almeida[263855]:                "ceph.cluster_name": "ceph",
Oct  1 09:33:04 np0005464214 vigilant_almeida[263855]:                "ceph.crush_device_class": "",
Oct  1 09:33:04 np0005464214 vigilant_almeida[263855]:                "ceph.encrypted": "0",
Oct  1 09:33:04 np0005464214 vigilant_almeida[263855]:                "ceph.osd_fsid": "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982",
Oct  1 09:33:04 np0005464214 vigilant_almeida[263855]:                "ceph.osd_id": "0",
Oct  1 09:33:04 np0005464214 vigilant_almeida[263855]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 09:33:04 np0005464214 vigilant_almeida[263855]:                "ceph.type": "block",
Oct  1 09:33:04 np0005464214 vigilant_almeida[263855]:                "ceph.vdo": "0"
Oct  1 09:33:04 np0005464214 vigilant_almeida[263855]:            },
Oct  1 09:33:04 np0005464214 vigilant_almeida[263855]:            "type": "block",
Oct  1 09:33:04 np0005464214 vigilant_almeida[263855]:            "vg_name": "ceph_vg0"
Oct  1 09:33:04 np0005464214 vigilant_almeida[263855]:        }
Oct  1 09:33:04 np0005464214 vigilant_almeida[263855]:    ],
Oct  1 09:33:04 np0005464214 vigilant_almeida[263855]:    "1": [
Oct  1 09:33:04 np0005464214 vigilant_almeida[263855]:        {
Oct  1 09:33:04 np0005464214 vigilant_almeida[263855]:            "devices": [
Oct  1 09:33:04 np0005464214 vigilant_almeida[263855]:                "/dev/loop4"
Oct  1 09:33:04 np0005464214 vigilant_almeida[263855]:            ],
Oct  1 09:33:04 np0005464214 vigilant_almeida[263855]:            "lv_name": "ceph_lv1",
Oct  1 09:33:04 np0005464214 vigilant_almeida[263855]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  1 09:33:04 np0005464214 vigilant_almeida[263855]:            "lv_size": "21470642176",
Oct  1 09:33:04 np0005464214 vigilant_almeida[263855]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f5852bc7-e830-489a-b8a9-42dfbbe71426,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 09:33:04 np0005464214 vigilant_almeida[263855]:            "lv_uuid": "BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY",
Oct  1 09:33:04 np0005464214 vigilant_almeida[263855]:            "name": "ceph_lv1",
Oct  1 09:33:04 np0005464214 vigilant_almeida[263855]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  1 09:33:04 np0005464214 vigilant_almeida[263855]:            "tags": {
Oct  1 09:33:04 np0005464214 vigilant_almeida[263855]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  1 09:33:04 np0005464214 vigilant_almeida[263855]:                "ceph.block_uuid": "BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY",
Oct  1 09:33:04 np0005464214 vigilant_almeida[263855]:                "ceph.cephx_lockbox_secret": "",
Oct  1 09:33:04 np0005464214 vigilant_almeida[263855]:                "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 09:33:04 np0005464214 vigilant_almeida[263855]:                "ceph.cluster_name": "ceph",
Oct  1 09:33:04 np0005464214 vigilant_almeida[263855]:                "ceph.crush_device_class": "",
Oct  1 09:33:04 np0005464214 vigilant_almeida[263855]:                "ceph.encrypted": "0",
Oct  1 09:33:04 np0005464214 vigilant_almeida[263855]:                "ceph.osd_fsid": "f5852bc7-e830-489a-b8a9-42dfbbe71426",
Oct  1 09:33:04 np0005464214 vigilant_almeida[263855]:                "ceph.osd_id": "1",
Oct  1 09:33:04 np0005464214 vigilant_almeida[263855]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 09:33:04 np0005464214 vigilant_almeida[263855]:                "ceph.type": "block",
Oct  1 09:33:04 np0005464214 vigilant_almeida[263855]:                "ceph.vdo": "0"
Oct  1 09:33:04 np0005464214 vigilant_almeida[263855]:            },
Oct  1 09:33:04 np0005464214 vigilant_almeida[263855]:            "type": "block",
Oct  1 09:33:04 np0005464214 vigilant_almeida[263855]:            "vg_name": "ceph_vg1"
Oct  1 09:33:04 np0005464214 vigilant_almeida[263855]:        }
Oct  1 09:33:04 np0005464214 vigilant_almeida[263855]:    ],
Oct  1 09:33:04 np0005464214 vigilant_almeida[263855]:    "2": [
Oct  1 09:33:04 np0005464214 vigilant_almeida[263855]:        {
Oct  1 09:33:04 np0005464214 vigilant_almeida[263855]:            "devices": [
Oct  1 09:33:04 np0005464214 vigilant_almeida[263855]:                "/dev/loop5"
Oct  1 09:33:04 np0005464214 vigilant_almeida[263855]:            ],
Oct  1 09:33:04 np0005464214 vigilant_almeida[263855]:            "lv_name": "ceph_lv2",
Oct  1 09:33:04 np0005464214 vigilant_almeida[263855]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  1 09:33:04 np0005464214 vigilant_almeida[263855]:            "lv_size": "21470642176",
Oct  1 09:33:04 np0005464214 vigilant_almeida[263855]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c4c937e2-a8a8-47c3-af37-fdedb6fff1f9,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 09:33:04 np0005464214 vigilant_almeida[263855]:            "lv_uuid": "mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK",
Oct  1 09:33:04 np0005464214 vigilant_almeida[263855]:            "name": "ceph_lv2",
Oct  1 09:33:04 np0005464214 vigilant_almeida[263855]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  1 09:33:04 np0005464214 vigilant_almeida[263855]:            "tags": {
Oct  1 09:33:04 np0005464214 vigilant_almeida[263855]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  1 09:33:04 np0005464214 vigilant_almeida[263855]:                "ceph.block_uuid": "mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK",
Oct  1 09:33:04 np0005464214 vigilant_almeida[263855]:                "ceph.cephx_lockbox_secret": "",
Oct  1 09:33:04 np0005464214 vigilant_almeida[263855]:                "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 09:33:04 np0005464214 vigilant_almeida[263855]:                "ceph.cluster_name": "ceph",
Oct  1 09:33:04 np0005464214 vigilant_almeida[263855]:                "ceph.crush_device_class": "",
Oct  1 09:33:04 np0005464214 vigilant_almeida[263855]:                "ceph.encrypted": "0",
Oct  1 09:33:04 np0005464214 vigilant_almeida[263855]:                "ceph.osd_fsid": "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9",
Oct  1 09:33:04 np0005464214 vigilant_almeida[263855]:                "ceph.osd_id": "2",
Oct  1 09:33:04 np0005464214 vigilant_almeida[263855]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 09:33:04 np0005464214 vigilant_almeida[263855]:                "ceph.type": "block",
Oct  1 09:33:04 np0005464214 vigilant_almeida[263855]:                "ceph.vdo": "0"
Oct  1 09:33:04 np0005464214 vigilant_almeida[263855]:            },
Oct  1 09:33:04 np0005464214 vigilant_almeida[263855]:            "type": "block",
Oct  1 09:33:04 np0005464214 vigilant_almeida[263855]:            "vg_name": "ceph_vg2"
Oct  1 09:33:04 np0005464214 vigilant_almeida[263855]:        }
Oct  1 09:33:04 np0005464214 vigilant_almeida[263855]:    ]
Oct  1 09:33:04 np0005464214 vigilant_almeida[263855]: }
Oct  1 09:33:04 np0005464214 systemd[1]: libpod-28fc32aeb31401a5a25e6ddfcb1331555dfdf0ae90272ae41111536beb9ee02c.scope: Deactivated successfully.
Oct  1 09:33:04 np0005464214 podman[263839]: 2025-10-01 13:33:04.237635861 +0000 UTC m=+1.017310115 container died 28fc32aeb31401a5a25e6ddfcb1331555dfdf0ae90272ae41111536beb9ee02c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_almeida, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Oct  1 09:33:04 np0005464214 systemd[1]: var-lib-containers-storage-overlay-9d91a94b34eaeda0eaf8491b6ef83b68cf34aebc7b95e573094bc95b644cd4f9-merged.mount: Deactivated successfully.
Oct  1 09:33:04 np0005464214 podman[263839]: 2025-10-01 13:33:04.346539357 +0000 UTC m=+1.126213571 container remove 28fc32aeb31401a5a25e6ddfcb1331555dfdf0ae90272ae41111536beb9ee02c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_almeida, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 09:33:04 np0005464214 systemd[1]: libpod-conmon-28fc32aeb31401a5a25e6ddfcb1331555dfdf0ae90272ae41111536beb9ee02c.scope: Deactivated successfully.
Oct  1 09:33:04 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v853: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:33:04 np0005464214 podman[263930]: 2025-10-01 13:33:04.730152695 +0000 UTC m=+0.071825926 container health_status c13c85560319b246b9406aed1b6b3015b2c424d3ffff37d0301b6634f5976b0d (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20250923, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible)
Oct  1 09:33:04 np0005464214 podman[263929]: 2025-10-01 13:33:04.758948432 +0000 UTC m=+0.095611104 container health_status a1dc94033b3cdaf0c1939938a446bc6911aa0e2bf0fa0c22c3e618390e8d96e1 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250923, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=multipathd, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true, config_id=multipathd)
Oct  1 09:33:04 np0005464214 podman[263931]: 2025-10-01 13:33:04.77052513 +0000 UTC m=+0.100319053 container health_status dfa509645741d50db256c392f2c7e50ef8a1653f296eb853aefbe3d2ba4746c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c, container_name=ovn_metadata_agent, org.label-schema.build-date=20250923, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Oct  1 09:33:04 np0005464214 podman[263928]: 2025-10-01 13:33:04.837645066 +0000 UTC m=+0.182210159 container health_status 583cde33fa111e3f81ff9a7adf51b7bee43cd123d54d60383b2d474b3b5066ad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=36bccb96575468ec919301205d8daa2c, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250923, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible)
Oct  1 09:33:05 np0005464214 podman[264094]: 2025-10-01 13:33:05.272193556 +0000 UTC m=+0.060994273 container create 3e047a689fba8f16eddad94a967e4c46ac25b63e89f642c3c0cf25c3bf7d3565 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_panini, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct  1 09:33:05 np0005464214 systemd[1]: Started libpod-conmon-3e047a689fba8f16eddad94a967e4c46ac25b63e89f642c3c0cf25c3bf7d3565.scope.
Oct  1 09:33:05 np0005464214 podman[264094]: 2025-10-01 13:33:05.241630773 +0000 UTC m=+0.030431540 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 09:33:05 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:33:05 np0005464214 podman[264094]: 2025-10-01 13:33:05.387243547 +0000 UTC m=+0.176044314 container init 3e047a689fba8f16eddad94a967e4c46ac25b63e89f642c3c0cf25c3bf7d3565 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_panini, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507)
Oct  1 09:33:05 np0005464214 podman[264094]: 2025-10-01 13:33:05.40305246 +0000 UTC m=+0.191853177 container start 3e047a689fba8f16eddad94a967e4c46ac25b63e89f642c3c0cf25c3bf7d3565 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_panini, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 09:33:05 np0005464214 podman[264094]: 2025-10-01 13:33:05.408306037 +0000 UTC m=+0.197106794 container attach 3e047a689fba8f16eddad94a967e4c46ac25b63e89f642c3c0cf25c3bf7d3565 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_panini, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct  1 09:33:05 np0005464214 admiring_panini[264111]: 167 167
Oct  1 09:33:05 np0005464214 systemd[1]: libpod-3e047a689fba8f16eddad94a967e4c46ac25b63e89f642c3c0cf25c3bf7d3565.scope: Deactivated successfully.
Oct  1 09:33:05 np0005464214 podman[264094]: 2025-10-01 13:33:05.413920145 +0000 UTC m=+0.202720882 container died 3e047a689fba8f16eddad94a967e4c46ac25b63e89f642c3c0cf25c3bf7d3565 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_panini, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 09:33:05 np0005464214 systemd[1]: var-lib-containers-storage-overlay-77ad76d600f919b5d7ef41e268918e7ca2edca47ebf28562c8ee7d8b2bde84a9-merged.mount: Deactivated successfully.
Oct  1 09:33:05 np0005464214 podman[264094]: 2025-10-01 13:33:05.466800548 +0000 UTC m=+0.255601225 container remove 3e047a689fba8f16eddad94a967e4c46ac25b63e89f642c3c0cf25c3bf7d3565 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_panini, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 09:33:05 np0005464214 systemd[1]: libpod-conmon-3e047a689fba8f16eddad94a967e4c46ac25b63e89f642c3c0cf25c3bf7d3565.scope: Deactivated successfully.
Oct  1 09:33:05 np0005464214 podman[264134]: 2025-10-01 13:33:05.630596191 +0000 UTC m=+0.028254900 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 09:33:05 np0005464214 podman[264134]: 2025-10-01 13:33:05.742597885 +0000 UTC m=+0.140256594 container create f6ac9ff5b769fa9b2acd10e2ae9566b5a092c15c1ba88bb85a7749129a08a2cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_mcclintock, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 09:33:05 np0005464214 systemd[1]: Started libpod-conmon-f6ac9ff5b769fa9b2acd10e2ae9566b5a092c15c1ba88bb85a7749129a08a2cd.scope.
Oct  1 09:33:05 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:33:05 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e13dcab1ca42adefd9ff72c7cdcb9decc7b85513e69bcbd4886fd2f184c8ed3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 09:33:05 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e13dcab1ca42adefd9ff72c7cdcb9decc7b85513e69bcbd4886fd2f184c8ed3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 09:33:05 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e13dcab1ca42adefd9ff72c7cdcb9decc7b85513e69bcbd4886fd2f184c8ed3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 09:33:05 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e13dcab1ca42adefd9ff72c7cdcb9decc7b85513e69bcbd4886fd2f184c8ed3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 09:33:06 np0005464214 podman[264134]: 2025-10-01 13:33:06.093384288 +0000 UTC m=+0.491043047 container init f6ac9ff5b769fa9b2acd10e2ae9566b5a092c15c1ba88bb85a7749129a08a2cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_mcclintock, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 09:33:06 np0005464214 podman[264134]: 2025-10-01 13:33:06.10599129 +0000 UTC m=+0.503649969 container start f6ac9ff5b769fa9b2acd10e2ae9566b5a092c15c1ba88bb85a7749129a08a2cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_mcclintock, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef)
Oct  1 09:33:06 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:33:06 np0005464214 podman[264134]: 2025-10-01 13:33:06.24897265 +0000 UTC m=+0.646631339 container attach f6ac9ff5b769fa9b2acd10e2ae9566b5a092c15c1ba88bb85a7749129a08a2cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_mcclintock, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True)
Oct  1 09:33:06 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v854: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:33:07 np0005464214 intelligent_mcclintock[264150]: {
Oct  1 09:33:07 np0005464214 intelligent_mcclintock[264150]:    "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982": {
Oct  1 09:33:07 np0005464214 intelligent_mcclintock[264150]:        "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 09:33:07 np0005464214 intelligent_mcclintock[264150]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  1 09:33:07 np0005464214 intelligent_mcclintock[264150]:        "osd_id": 0,
Oct  1 09:33:07 np0005464214 intelligent_mcclintock[264150]:        "osd_uuid": "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982",
Oct  1 09:33:07 np0005464214 intelligent_mcclintock[264150]:        "type": "bluestore"
Oct  1 09:33:07 np0005464214 intelligent_mcclintock[264150]:    },
Oct  1 09:33:07 np0005464214 intelligent_mcclintock[264150]:    "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9": {
Oct  1 09:33:07 np0005464214 intelligent_mcclintock[264150]:        "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 09:33:07 np0005464214 intelligent_mcclintock[264150]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  1 09:33:07 np0005464214 intelligent_mcclintock[264150]:        "osd_id": 2,
Oct  1 09:33:07 np0005464214 intelligent_mcclintock[264150]:        "osd_uuid": "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9",
Oct  1 09:33:07 np0005464214 intelligent_mcclintock[264150]:        "type": "bluestore"
Oct  1 09:33:07 np0005464214 intelligent_mcclintock[264150]:    },
Oct  1 09:33:07 np0005464214 intelligent_mcclintock[264150]:    "f5852bc7-e830-489a-b8a9-42dfbbe71426": {
Oct  1 09:33:07 np0005464214 intelligent_mcclintock[264150]:        "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 09:33:07 np0005464214 intelligent_mcclintock[264150]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  1 09:33:07 np0005464214 intelligent_mcclintock[264150]:        "osd_id": 1,
Oct  1 09:33:07 np0005464214 intelligent_mcclintock[264150]:        "osd_uuid": "f5852bc7-e830-489a-b8a9-42dfbbe71426",
Oct  1 09:33:07 np0005464214 intelligent_mcclintock[264150]:        "type": "bluestore"
Oct  1 09:33:07 np0005464214 intelligent_mcclintock[264150]:    }
Oct  1 09:33:07 np0005464214 intelligent_mcclintock[264150]: }
Oct  1 09:33:07 np0005464214 systemd[1]: libpod-f6ac9ff5b769fa9b2acd10e2ae9566b5a092c15c1ba88bb85a7749129a08a2cd.scope: Deactivated successfully.
Oct  1 09:33:07 np0005464214 systemd[1]: libpod-f6ac9ff5b769fa9b2acd10e2ae9566b5a092c15c1ba88bb85a7749129a08a2cd.scope: Consumed 1.117s CPU time.
Oct  1 09:33:07 np0005464214 podman[264134]: 2025-10-01 13:33:07.217450651 +0000 UTC m=+1.615109360 container died f6ac9ff5b769fa9b2acd10e2ae9566b5a092c15c1ba88bb85a7749129a08a2cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_mcclintock, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 09:33:07 np0005464214 systemd[1]: var-lib-containers-storage-overlay-1e13dcab1ca42adefd9ff72c7cdcb9decc7b85513e69bcbd4886fd2f184c8ed3-merged.mount: Deactivated successfully.
Oct  1 09:33:07 np0005464214 podman[264134]: 2025-10-01 13:33:07.411908229 +0000 UTC m=+1.809566928 container remove f6ac9ff5b769fa9b2acd10e2ae9566b5a092c15c1ba88bb85a7749129a08a2cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_mcclintock, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507)
Oct  1 09:33:07 np0005464214 systemd[1]: libpod-conmon-f6ac9ff5b769fa9b2acd10e2ae9566b5a092c15c1ba88bb85a7749129a08a2cd.scope: Deactivated successfully.
Oct  1 09:33:07 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  1 09:33:07 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:33:07 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  1 09:33:07 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:33:07 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev 6d69ae80-b7b7-4df9-b7c3-b015dad3bed1 does not exist
Oct  1 09:33:07 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev 894b6b7f-a6fe-4b56-949c-a8d0aad55373 does not exist
Oct  1 09:33:08 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:33:08 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:33:08 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v855: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:33:10 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v856: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:33:11 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:33:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:33:12.300 161890 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 09:33:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:33:12.301 161890 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 09:33:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:33:12.301 161890 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 09:33:12 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v857: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:33:14 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v858: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:33:16 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:33:16 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v859: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:33:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 09:33:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 09:33:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 09:33:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 09:33:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 09:33:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 09:33:18 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v860: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:33:20 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v861: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:33:21 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:33:22 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v862: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:33:24 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v863: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:33:26 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:33:26 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v864: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:33:28 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v865: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:33:30 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v866: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:33:31 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:33:32 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v867: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:33:34 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v868: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:33:35 np0005464214 podman[264255]: 2025-10-01 13:33:35.535540974 +0000 UTC m=+0.068228893 container health_status dfa509645741d50db256c392f2c7e50ef8a1653f296eb853aefbe3d2ba4746c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250923, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct  1 09:33:35 np0005464214 podman[264253]: 2025-10-01 13:33:35.552761741 +0000 UTC m=+0.096263544 container health_status a1dc94033b3cdaf0c1939938a446bc6911aa0e2bf0fa0c22c3e618390e8d96e1 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250923, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true)
Oct  1 09:33:35 np0005464214 podman[264254]: 2025-10-01 13:33:35.558277056 +0000 UTC m=+0.088074433 container health_status c13c85560319b246b9406aed1b6b3015b2c424d3ffff37d0301b6634f5976b0d (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250923, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_id=iscsid, container_name=iscsid)
Oct  1 09:33:35 np0005464214 podman[264252]: 2025-10-01 13:33:35.597184055 +0000 UTC m=+0.139472450 container health_status 583cde33fa111e3f81ff9a7adf51b7bee43cd123d54d60383b2d474b3b5066ad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=ovn_controller, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20250923, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  1 09:33:36 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:33:36 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v869: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:33:38 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v870: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:33:40 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v871: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:33:41 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:33:42 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v872: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:33:44 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v873: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:33:46 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:33:46 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v874: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:33:47 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:33:47.105 161890 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=2, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'd2:60:33', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '86:05:a5:2a:6f:f1'}, ipsec=False) old=SB_Global(nb_cfg=1) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  1 09:33:47 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:33:47.107 161890 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  1 09:33:47 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:33:47.109 161890 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=7280030e-2ba6-406c-9fae-f8284a927c47, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '2'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  1 09:33:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] Optimize plan auto_2025-10-01_13:33:47
Oct  1 09:33:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  1 09:33:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] do_upmap
Oct  1 09:33:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] pools ['cephfs.cephfs.data', '.mgr', 'cephfs.cephfs.meta', 'default.rgw.log', '.rgw.root', 'vms', 'default.rgw.control', 'volumes', 'images', 'backups', 'default.rgw.meta']
Oct  1 09:33:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] prepared 0/10 changes
Oct  1 09:33:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 09:33:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 09:33:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 09:33:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 09:33:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 09:33:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 09:33:47 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  1 09:33:47 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  1 09:33:47 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  1 09:33:47 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  1 09:33:47 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  1 09:33:47 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  1 09:33:47 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  1 09:33:47 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  1 09:33:47 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  1 09:33:47 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  1 09:33:48 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v875: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:33:50 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v876: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:33:51 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:33:52 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v877: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:33:54 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v878: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:33:55 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  1 09:33:55 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/149275023' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  1 09:33:55 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  1 09:33:55 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/149275023' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  1 09:33:55 np0005464214 nova_compute[260022]: 2025-10-01 13:33:55.344 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 09:33:55 np0005464214 nova_compute[260022]: 2025-10-01 13:33:55.345 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 09:33:55 np0005464214 nova_compute[260022]: 2025-10-01 13:33:55.345 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 09:33:55 np0005464214 nova_compute[260022]: 2025-10-01 13:33:55.345 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  1 09:33:56 np0005464214 nova_compute[260022]: 2025-10-01 13:33:56.345 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 09:33:56 np0005464214 nova_compute[260022]: 2025-10-01 13:33:56.345 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  1 09:33:56 np0005464214 nova_compute[260022]: 2025-10-01 13:33:56.345 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  1 09:33:56 np0005464214 nova_compute[260022]: 2025-10-01 13:33:56.368 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  1 09:33:56 np0005464214 nova_compute[260022]: 2025-10-01 13:33:56.369 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 09:33:56 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v879: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:33:56 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:33:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] _maybe_adjust
Oct  1 09:33:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:33:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  1 09:33:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:33:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 09:33:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:33:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 09:33:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:33:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 09:33:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:33:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 09:33:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:33:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct  1 09:33:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:33:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 09:33:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:33:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  1 09:33:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:33:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  1 09:33:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:33:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 09:33:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:33:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  1 09:33:57 np0005464214 nova_compute[260022]: 2025-10-01 13:33:57.345 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 09:33:57 np0005464214 nova_compute[260022]: 2025-10-01 13:33:57.345 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 09:33:57 np0005464214 nova_compute[260022]: 2025-10-01 13:33:57.367 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 09:33:57 np0005464214 nova_compute[260022]: 2025-10-01 13:33:57.367 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 09:33:57 np0005464214 nova_compute[260022]: 2025-10-01 13:33:57.368 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 09:33:57 np0005464214 nova_compute[260022]: 2025-10-01 13:33:57.391 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 09:33:57 np0005464214 nova_compute[260022]: 2025-10-01 13:33:57.392 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 09:33:57 np0005464214 nova_compute[260022]: 2025-10-01 13:33:57.392 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 09:33:57 np0005464214 nova_compute[260022]: 2025-10-01 13:33:57.393 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  1 09:33:57 np0005464214 nova_compute[260022]: 2025-10-01 13:33:57.393 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 09:33:57 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  1 09:33:57 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2390611866' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  1 09:33:57 np0005464214 nova_compute[260022]: 2025-10-01 13:33:57.849 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 09:33:58 np0005464214 nova_compute[260022]: 2025-10-01 13:33:58.060 2 WARNING nova.virt.libvirt.driver [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  1 09:33:58 np0005464214 nova_compute[260022]: 2025-10-01 13:33:58.061 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5191MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  1 09:33:58 np0005464214 nova_compute[260022]: 2025-10-01 13:33:58.062 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 09:33:58 np0005464214 nova_compute[260022]: 2025-10-01 13:33:58.062 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 09:33:58 np0005464214 nova_compute[260022]: 2025-10-01 13:33:58.121 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  1 09:33:58 np0005464214 nova_compute[260022]: 2025-10-01 13:33:58.121 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  1 09:33:58 np0005464214 nova_compute[260022]: 2025-10-01 13:33:58.141 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 09:33:58 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v880: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:33:58 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  1 09:33:58 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1279081487' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  1 09:33:58 np0005464214 nova_compute[260022]: 2025-10-01 13:33:58.608 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 09:33:58 np0005464214 nova_compute[260022]: 2025-10-01 13:33:58.615 2 DEBUG nova.compute.provider_tree [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Inventory has not changed in ProviderTree for provider: c1b9017d-7e6f-44ea-9ee2-bc19313d736f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  1 09:33:58 np0005464214 nova_compute[260022]: 2025-10-01 13:33:58.630 2 DEBUG nova.scheduler.client.report [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Inventory has not changed for provider c1b9017d-7e6f-44ea-9ee2-bc19313d736f based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  1 09:33:58 np0005464214 nova_compute[260022]: 2025-10-01 13:33:58.633 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  1 09:33:58 np0005464214 nova_compute[260022]: 2025-10-01 13:33:58.634 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.571s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 09:34:00 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v881: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:34:01 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:34:02 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v882: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:34:04 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v883: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:34:06 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v884: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:34:06 np0005464214 podman[264386]: 2025-10-01 13:34:06.56113907 +0000 UTC m=+0.084931913 container health_status dfa509645741d50db256c392f2c7e50ef8a1653f296eb853aefbe3d2ba4746c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, config_id=ovn_metadata_agent, org.label-schema.build-date=20250923, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct  1 09:34:06 np0005464214 podman[264385]: 2025-10-01 13:34:06.565214991 +0000 UTC m=+0.093965983 container health_status c13c85560319b246b9406aed1b6b3015b2c424d3ffff37d0301b6634f5976b0d (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_id=iscsid, org.label-schema.build-date=20250923, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct  1 09:34:06 np0005464214 podman[264384]: 2025-10-01 13:34:06.586124086 +0000 UTC m=+0.119878007 container health_status a1dc94033b3cdaf0c1939938a446bc6911aa0e2bf0fa0c22c3e618390e8d96e1 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20250923, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  1 09:34:06 np0005464214 podman[264383]: 2025-10-01 13:34:06.600819693 +0000 UTC m=+0.137576259 container health_status 583cde33fa111e3f81ff9a7adf51b7bee43cd123d54d60383b2d474b3b5066ad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=36bccb96575468ec919301205d8daa2c, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250923, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct  1 09:34:06 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:34:08 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v885: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:34:08 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  1 09:34:08 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  1 09:34:08 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  1 09:34:08 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  1 09:34:08 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  1 09:34:08 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:34:08 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev bd509792-e458-4508-879e-a322088a4be0 does not exist
Oct  1 09:34:08 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev 2d5929be-0dcb-4a93-aad7-72def12fc9a5 does not exist
Oct  1 09:34:08 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev f1aa3454-71bd-42d8-811f-e3cbcdd6240d does not exist
Oct  1 09:34:08 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  1 09:34:08 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  1 09:34:08 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  1 09:34:08 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  1 09:34:08 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  1 09:34:08 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  1 09:34:08 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  1 09:34:09 np0005464214 podman[264734]: 2025-10-01 13:34:09.465367175 +0000 UTC m=+0.037498485 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 09:34:09 np0005464214 podman[264734]: 2025-10-01 13:34:09.558688674 +0000 UTC m=+0.130819944 container create 89a4d3476aa331c17eda03f5d0b8d086325c0f887a0fe9ab357746e4c82b59c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_mclean, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Oct  1 09:34:09 np0005464214 systemd[1]: Started libpod-conmon-89a4d3476aa331c17eda03f5d0b8d086325c0f887a0fe9ab357746e4c82b59c2.scope.
Oct  1 09:34:09 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:34:10 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:34:10 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  1 09:34:10 np0005464214 podman[264734]: 2025-10-01 13:34:10.150662283 +0000 UTC m=+0.722793543 container init 89a4d3476aa331c17eda03f5d0b8d086325c0f887a0fe9ab357746e4c82b59c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_mclean, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct  1 09:34:10 np0005464214 podman[264734]: 2025-10-01 13:34:10.160831157 +0000 UTC m=+0.732962427 container start 89a4d3476aa331c17eda03f5d0b8d086325c0f887a0fe9ab357746e4c82b59c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_mclean, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 09:34:10 np0005464214 clever_mclean[264750]: 167 167
Oct  1 09:34:10 np0005464214 systemd[1]: libpod-89a4d3476aa331c17eda03f5d0b8d086325c0f887a0fe9ab357746e4c82b59c2.scope: Deactivated successfully.
Oct  1 09:34:10 np0005464214 podman[264734]: 2025-10-01 13:34:10.284674358 +0000 UTC m=+0.856805618 container attach 89a4d3476aa331c17eda03f5d0b8d086325c0f887a0fe9ab357746e4c82b59c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_mclean, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct  1 09:34:10 np0005464214 podman[264734]: 2025-10-01 13:34:10.285892317 +0000 UTC m=+0.858023557 container died 89a4d3476aa331c17eda03f5d0b8d086325c0f887a0fe9ab357746e4c82b59c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_mclean, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct  1 09:34:10 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v886: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:34:10 np0005464214 systemd[1]: var-lib-containers-storage-overlay-ab7a3cfd24c1570e910cc4518f05d9fc3ef697350ccafb5bc63d02ddea356aa9-merged.mount: Deactivated successfully.
Oct  1 09:34:10 np0005464214 podman[264734]: 2025-10-01 13:34:10.673504301 +0000 UTC m=+1.245635571 container remove 89a4d3476aa331c17eda03f5d0b8d086325c0f887a0fe9ab357746e4c82b59c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_mclean, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 09:34:10 np0005464214 systemd[1]: libpod-conmon-89a4d3476aa331c17eda03f5d0b8d086325c0f887a0fe9ab357746e4c82b59c2.scope: Deactivated successfully.
Oct  1 09:34:10 np0005464214 podman[264776]: 2025-10-01 13:34:10.893658478 +0000 UTC m=+0.042654518 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 09:34:11 np0005464214 podman[264776]: 2025-10-01 13:34:11.024403718 +0000 UTC m=+0.173399758 container create 98784fe3377c799c7faa23c1125f802f9e9807f250cec12262f47afe9518aef0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_kapitsa, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct  1 09:34:11 np0005464214 systemd[1]: Started libpod-conmon-98784fe3377c799c7faa23c1125f802f9e9807f250cec12262f47afe9518aef0.scope.
Oct  1 09:34:11 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:34:11 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/617c14e086e6fc946cb9e847e5de3eb16cd380d59a47e3cf13f533f332598c88/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 09:34:11 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/617c14e086e6fc946cb9e847e5de3eb16cd380d59a47e3cf13f533f332598c88/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 09:34:11 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/617c14e086e6fc946cb9e847e5de3eb16cd380d59a47e3cf13f533f332598c88/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 09:34:11 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/617c14e086e6fc946cb9e847e5de3eb16cd380d59a47e3cf13f533f332598c88/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 09:34:11 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/617c14e086e6fc946cb9e847e5de3eb16cd380d59a47e3cf13f533f332598c88/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  1 09:34:11 np0005464214 podman[264776]: 2025-10-01 13:34:11.375541714 +0000 UTC m=+0.524537764 container init 98784fe3377c799c7faa23c1125f802f9e9807f250cec12262f47afe9518aef0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_kapitsa, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 09:34:11 np0005464214 podman[264776]: 2025-10-01 13:34:11.388696232 +0000 UTC m=+0.537692272 container start 98784fe3377c799c7faa23c1125f802f9e9807f250cec12262f47afe9518aef0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_kapitsa, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 09:34:11 np0005464214 podman[264776]: 2025-10-01 13:34:11.470956251 +0000 UTC m=+0.619952361 container attach 98784fe3377c799c7faa23c1125f802f9e9807f250cec12262f47afe9518aef0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_kapitsa, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0)
Oct  1 09:34:11 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:34:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:34:12.300 161890 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 09:34:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:34:12.303 161890 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 09:34:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:34:12.303 161890 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 09:34:12 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v887: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:34:12 np0005464214 mystifying_kapitsa[264793]: --> passed data devices: 0 physical, 3 LVM
Oct  1 09:34:12 np0005464214 mystifying_kapitsa[264793]: --> relative data size: 1.0
Oct  1 09:34:12 np0005464214 mystifying_kapitsa[264793]: --> All data devices are unavailable
Oct  1 09:34:12 np0005464214 systemd[1]: libpod-98784fe3377c799c7faa23c1125f802f9e9807f250cec12262f47afe9518aef0.scope: Deactivated successfully.
Oct  1 09:34:12 np0005464214 systemd[1]: libpod-98784fe3377c799c7faa23c1125f802f9e9807f250cec12262f47afe9518aef0.scope: Consumed 1.161s CPU time.
Oct  1 09:34:12 np0005464214 podman[264822]: 2025-10-01 13:34:12.644251128 +0000 UTC m=+0.028910840 container died 98784fe3377c799c7faa23c1125f802f9e9807f250cec12262f47afe9518aef0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_kapitsa, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 09:34:12 np0005464214 systemd[1]: var-lib-containers-storage-overlay-617c14e086e6fc946cb9e847e5de3eb16cd380d59a47e3cf13f533f332598c88-merged.mount: Deactivated successfully.
Oct  1 09:34:12 np0005464214 podman[264822]: 2025-10-01 13:34:12.701868213 +0000 UTC m=+0.086527875 container remove 98784fe3377c799c7faa23c1125f802f9e9807f250cec12262f47afe9518aef0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_kapitsa, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 09:34:12 np0005464214 systemd[1]: libpod-conmon-98784fe3377c799c7faa23c1125f802f9e9807f250cec12262f47afe9518aef0.scope: Deactivated successfully.
Oct  1 09:34:13 np0005464214 podman[264978]: 2025-10-01 13:34:13.475557645 +0000 UTC m=+0.052701389 container create d6cfc1ec263e7b78351139598c3787bfd03190019d9730df7e3cca2d532a83bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_williamson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 09:34:13 np0005464214 systemd[1]: Started libpod-conmon-d6cfc1ec263e7b78351139598c3787bfd03190019d9730df7e3cca2d532a83bc.scope.
Oct  1 09:34:13 np0005464214 podman[264978]: 2025-10-01 13:34:13.452545952 +0000 UTC m=+0.029689786 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 09:34:13 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:34:13 np0005464214 podman[264978]: 2025-10-01 13:34:13.576852438 +0000 UTC m=+0.153996272 container init d6cfc1ec263e7b78351139598c3787bfd03190019d9730df7e3cca2d532a83bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_williamson, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct  1 09:34:13 np0005464214 podman[264978]: 2025-10-01 13:34:13.584483281 +0000 UTC m=+0.161627025 container start d6cfc1ec263e7b78351139598c3787bfd03190019d9730df7e3cca2d532a83bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_williamson, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 09:34:13 np0005464214 jovial_williamson[264994]: 167 167
Oct  1 09:34:13 np0005464214 podman[264978]: 2025-10-01 13:34:13.590264205 +0000 UTC m=+0.167407999 container attach d6cfc1ec263e7b78351139598c3787bfd03190019d9730df7e3cca2d532a83bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_williamson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct  1 09:34:13 np0005464214 systemd[1]: libpod-d6cfc1ec263e7b78351139598c3787bfd03190019d9730df7e3cca2d532a83bc.scope: Deactivated successfully.
Oct  1 09:34:13 np0005464214 podman[264978]: 2025-10-01 13:34:13.59293167 +0000 UTC m=+0.170075454 container died d6cfc1ec263e7b78351139598c3787bfd03190019d9730df7e3cca2d532a83bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_williamson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct  1 09:34:13 np0005464214 systemd[1]: var-lib-containers-storage-overlay-77588ad29ab6cdb0f3cf2aae62b52abff392dea127236afe7cf62b1077ae2030-merged.mount: Deactivated successfully.
Oct  1 09:34:13 np0005464214 podman[264978]: 2025-10-01 13:34:13.653962362 +0000 UTC m=+0.231106106 container remove d6cfc1ec263e7b78351139598c3787bfd03190019d9730df7e3cca2d532a83bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_williamson, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 09:34:13 np0005464214 systemd[1]: libpod-conmon-d6cfc1ec263e7b78351139598c3787bfd03190019d9730df7e3cca2d532a83bc.scope: Deactivated successfully.
Oct  1 09:34:13 np0005464214 podman[265019]: 2025-10-01 13:34:13.874583963 +0000 UTC m=+0.049158875 container create 99307b1c092a55d80c6127e6af5fe00b67d5349e87240ac459b22c63f04f924e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_elgamal, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 09:34:13 np0005464214 systemd[1]: Started libpod-conmon-99307b1c092a55d80c6127e6af5fe00b67d5349e87240ac459b22c63f04f924e.scope.
Oct  1 09:34:13 np0005464214 podman[265019]: 2025-10-01 13:34:13.851045793 +0000 UTC m=+0.025620725 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 09:34:13 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:34:13 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e135f4c4cb2e99480615c22581ea69cf21b669c4ba2cc3b447bb33fa74fe3338/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 09:34:13 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e135f4c4cb2e99480615c22581ea69cf21b669c4ba2cc3b447bb33fa74fe3338/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 09:34:13 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e135f4c4cb2e99480615c22581ea69cf21b669c4ba2cc3b447bb33fa74fe3338/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 09:34:13 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e135f4c4cb2e99480615c22581ea69cf21b669c4ba2cc3b447bb33fa74fe3338/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 09:34:13 np0005464214 podman[265019]: 2025-10-01 13:34:13.982803627 +0000 UTC m=+0.157378559 container init 99307b1c092a55d80c6127e6af5fe00b67d5349e87240ac459b22c63f04f924e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_elgamal, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Oct  1 09:34:13 np0005464214 podman[265019]: 2025-10-01 13:34:13.991666859 +0000 UTC m=+0.166241771 container start 99307b1c092a55d80c6127e6af5fe00b67d5349e87240ac459b22c63f04f924e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_elgamal, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct  1 09:34:14 np0005464214 podman[265019]: 2025-10-01 13:34:14.000773579 +0000 UTC m=+0.175348521 container attach 99307b1c092a55d80c6127e6af5fe00b67d5349e87240ac459b22c63f04f924e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_elgamal, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct  1 09:34:14 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v888: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:34:14 np0005464214 beautiful_elgamal[265036]: {
Oct  1 09:34:14 np0005464214 beautiful_elgamal[265036]:    "0": [
Oct  1 09:34:14 np0005464214 beautiful_elgamal[265036]:        {
Oct  1 09:34:14 np0005464214 beautiful_elgamal[265036]:            "devices": [
Oct  1 09:34:14 np0005464214 beautiful_elgamal[265036]:                "/dev/loop3"
Oct  1 09:34:14 np0005464214 beautiful_elgamal[265036]:            ],
Oct  1 09:34:14 np0005464214 beautiful_elgamal[265036]:            "lv_name": "ceph_lv0",
Oct  1 09:34:14 np0005464214 beautiful_elgamal[265036]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  1 09:34:14 np0005464214 beautiful_elgamal[265036]:            "lv_size": "21470642176",
Oct  1 09:34:14 np0005464214 beautiful_elgamal[265036]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 09:34:14 np0005464214 beautiful_elgamal[265036]:            "lv_uuid": "wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS",
Oct  1 09:34:14 np0005464214 beautiful_elgamal[265036]:            "name": "ceph_lv0",
Oct  1 09:34:14 np0005464214 beautiful_elgamal[265036]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  1 09:34:14 np0005464214 beautiful_elgamal[265036]:            "tags": {
Oct  1 09:34:14 np0005464214 beautiful_elgamal[265036]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  1 09:34:14 np0005464214 beautiful_elgamal[265036]:                "ceph.block_uuid": "wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS",
Oct  1 09:34:14 np0005464214 beautiful_elgamal[265036]:                "ceph.cephx_lockbox_secret": "",
Oct  1 09:34:14 np0005464214 beautiful_elgamal[265036]:                "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 09:34:14 np0005464214 beautiful_elgamal[265036]:                "ceph.cluster_name": "ceph",
Oct  1 09:34:14 np0005464214 beautiful_elgamal[265036]:                "ceph.crush_device_class": "",
Oct  1 09:34:14 np0005464214 beautiful_elgamal[265036]:                "ceph.encrypted": "0",
Oct  1 09:34:14 np0005464214 beautiful_elgamal[265036]:                "ceph.osd_fsid": "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982",
Oct  1 09:34:14 np0005464214 beautiful_elgamal[265036]:                "ceph.osd_id": "0",
Oct  1 09:34:14 np0005464214 beautiful_elgamal[265036]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 09:34:14 np0005464214 beautiful_elgamal[265036]:                "ceph.type": "block",
Oct  1 09:34:14 np0005464214 beautiful_elgamal[265036]:                "ceph.vdo": "0"
Oct  1 09:34:14 np0005464214 beautiful_elgamal[265036]:            },
Oct  1 09:34:14 np0005464214 beautiful_elgamal[265036]:            "type": "block",
Oct  1 09:34:14 np0005464214 beautiful_elgamal[265036]:            "vg_name": "ceph_vg0"
Oct  1 09:34:14 np0005464214 beautiful_elgamal[265036]:        }
Oct  1 09:34:14 np0005464214 beautiful_elgamal[265036]:    ],
Oct  1 09:34:14 np0005464214 beautiful_elgamal[265036]:    "1": [
Oct  1 09:34:14 np0005464214 beautiful_elgamal[265036]:        {
Oct  1 09:34:14 np0005464214 beautiful_elgamal[265036]:            "devices": [
Oct  1 09:34:14 np0005464214 beautiful_elgamal[265036]:                "/dev/loop4"
Oct  1 09:34:14 np0005464214 beautiful_elgamal[265036]:            ],
Oct  1 09:34:14 np0005464214 beautiful_elgamal[265036]:            "lv_name": "ceph_lv1",
Oct  1 09:34:14 np0005464214 beautiful_elgamal[265036]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  1 09:34:14 np0005464214 beautiful_elgamal[265036]:            "lv_size": "21470642176",
Oct  1 09:34:14 np0005464214 beautiful_elgamal[265036]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f5852bc7-e830-489a-b8a9-42dfbbe71426,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 09:34:14 np0005464214 beautiful_elgamal[265036]:            "lv_uuid": "BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY",
Oct  1 09:34:14 np0005464214 beautiful_elgamal[265036]:            "name": "ceph_lv1",
Oct  1 09:34:14 np0005464214 beautiful_elgamal[265036]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  1 09:34:14 np0005464214 beautiful_elgamal[265036]:            "tags": {
Oct  1 09:34:14 np0005464214 beautiful_elgamal[265036]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  1 09:34:14 np0005464214 beautiful_elgamal[265036]:                "ceph.block_uuid": "BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY",
Oct  1 09:34:14 np0005464214 beautiful_elgamal[265036]:                "ceph.cephx_lockbox_secret": "",
Oct  1 09:34:14 np0005464214 beautiful_elgamal[265036]:                "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 09:34:14 np0005464214 beautiful_elgamal[265036]:                "ceph.cluster_name": "ceph",
Oct  1 09:34:14 np0005464214 beautiful_elgamal[265036]:                "ceph.crush_device_class": "",
Oct  1 09:34:14 np0005464214 beautiful_elgamal[265036]:                "ceph.encrypted": "0",
Oct  1 09:34:14 np0005464214 beautiful_elgamal[265036]:                "ceph.osd_fsid": "f5852bc7-e830-489a-b8a9-42dfbbe71426",
Oct  1 09:34:14 np0005464214 beautiful_elgamal[265036]:                "ceph.osd_id": "1",
Oct  1 09:34:14 np0005464214 beautiful_elgamal[265036]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 09:34:14 np0005464214 beautiful_elgamal[265036]:                "ceph.type": "block",
Oct  1 09:34:14 np0005464214 beautiful_elgamal[265036]:                "ceph.vdo": "0"
Oct  1 09:34:14 np0005464214 beautiful_elgamal[265036]:            },
Oct  1 09:34:14 np0005464214 beautiful_elgamal[265036]:            "type": "block",
Oct  1 09:34:14 np0005464214 beautiful_elgamal[265036]:            "vg_name": "ceph_vg1"
Oct  1 09:34:14 np0005464214 beautiful_elgamal[265036]:        }
Oct  1 09:34:14 np0005464214 beautiful_elgamal[265036]:    ],
Oct  1 09:34:14 np0005464214 beautiful_elgamal[265036]:    "2": [
Oct  1 09:34:14 np0005464214 beautiful_elgamal[265036]:        {
Oct  1 09:34:14 np0005464214 beautiful_elgamal[265036]:            "devices": [
Oct  1 09:34:14 np0005464214 beautiful_elgamal[265036]:                "/dev/loop5"
Oct  1 09:34:14 np0005464214 beautiful_elgamal[265036]:            ],
Oct  1 09:34:14 np0005464214 beautiful_elgamal[265036]:            "lv_name": "ceph_lv2",
Oct  1 09:34:14 np0005464214 beautiful_elgamal[265036]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  1 09:34:14 np0005464214 beautiful_elgamal[265036]:            "lv_size": "21470642176",
Oct  1 09:34:14 np0005464214 beautiful_elgamal[265036]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c4c937e2-a8a8-47c3-af37-fdedb6fff1f9,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 09:34:14 np0005464214 beautiful_elgamal[265036]:            "lv_uuid": "mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK",
Oct  1 09:34:14 np0005464214 beautiful_elgamal[265036]:            "name": "ceph_lv2",
Oct  1 09:34:14 np0005464214 beautiful_elgamal[265036]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  1 09:34:14 np0005464214 beautiful_elgamal[265036]:            "tags": {
Oct  1 09:34:14 np0005464214 beautiful_elgamal[265036]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  1 09:34:14 np0005464214 beautiful_elgamal[265036]:                "ceph.block_uuid": "mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK",
Oct  1 09:34:14 np0005464214 beautiful_elgamal[265036]:                "ceph.cephx_lockbox_secret": "",
Oct  1 09:34:14 np0005464214 beautiful_elgamal[265036]:                "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 09:34:14 np0005464214 beautiful_elgamal[265036]:                "ceph.cluster_name": "ceph",
Oct  1 09:34:14 np0005464214 beautiful_elgamal[265036]:                "ceph.crush_device_class": "",
Oct  1 09:34:14 np0005464214 beautiful_elgamal[265036]:                "ceph.encrypted": "0",
Oct  1 09:34:14 np0005464214 beautiful_elgamal[265036]:                "ceph.osd_fsid": "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9",
Oct  1 09:34:14 np0005464214 beautiful_elgamal[265036]:                "ceph.osd_id": "2",
Oct  1 09:34:14 np0005464214 beautiful_elgamal[265036]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 09:34:14 np0005464214 beautiful_elgamal[265036]:                "ceph.type": "block",
Oct  1 09:34:14 np0005464214 beautiful_elgamal[265036]:                "ceph.vdo": "0"
Oct  1 09:34:14 np0005464214 beautiful_elgamal[265036]:            },
Oct  1 09:34:14 np0005464214 beautiful_elgamal[265036]:            "type": "block",
Oct  1 09:34:14 np0005464214 beautiful_elgamal[265036]:            "vg_name": "ceph_vg2"
Oct  1 09:34:14 np0005464214 beautiful_elgamal[265036]:        }
Oct  1 09:34:14 np0005464214 beautiful_elgamal[265036]:    ]
Oct  1 09:34:14 np0005464214 beautiful_elgamal[265036]: }
Oct  1 09:34:14 np0005464214 systemd[1]: libpod-99307b1c092a55d80c6127e6af5fe00b67d5349e87240ac459b22c63f04f924e.scope: Deactivated successfully.
Oct  1 09:34:14 np0005464214 podman[265019]: 2025-10-01 13:34:14.782782456 +0000 UTC m=+0.957357368 container died 99307b1c092a55d80c6127e6af5fe00b67d5349e87240ac459b22c63f04f924e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_elgamal, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default)
Oct  1 09:34:14 np0005464214 systemd[1]: var-lib-containers-storage-overlay-e135f4c4cb2e99480615c22581ea69cf21b669c4ba2cc3b447bb33fa74fe3338-merged.mount: Deactivated successfully.
Oct  1 09:34:14 np0005464214 podman[265019]: 2025-10-01 13:34:14.860890201 +0000 UTC m=+1.035465113 container remove 99307b1c092a55d80c6127e6af5fe00b67d5349e87240ac459b22c63f04f924e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_elgamal, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct  1 09:34:14 np0005464214 systemd[1]: libpod-conmon-99307b1c092a55d80c6127e6af5fe00b67d5349e87240ac459b22c63f04f924e.scope: Deactivated successfully.
Oct  1 09:34:15 np0005464214 podman[265197]: 2025-10-01 13:34:15.501594031 +0000 UTC m=+0.042260016 container create 47557da4af93dc248e9d6e8b38121fc2b62a545c4076cb054ac005f6d89f43ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_bell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 09:34:15 np0005464214 systemd[1]: Started libpod-conmon-47557da4af93dc248e9d6e8b38121fc2b62a545c4076cb054ac005f6d89f43ac.scope.
Oct  1 09:34:15 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:34:15 np0005464214 podman[265197]: 2025-10-01 13:34:15.48459545 +0000 UTC m=+0.025261445 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 09:34:15 np0005464214 podman[265197]: 2025-10-01 13:34:15.603562225 +0000 UTC m=+0.144228260 container init 47557da4af93dc248e9d6e8b38121fc2b62a545c4076cb054ac005f6d89f43ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_bell, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct  1 09:34:15 np0005464214 podman[265197]: 2025-10-01 13:34:15.611199349 +0000 UTC m=+0.151865344 container start 47557da4af93dc248e9d6e8b38121fc2b62a545c4076cb054ac005f6d89f43ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_bell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True)
Oct  1 09:34:15 np0005464214 clever_bell[265213]: 167 167
Oct  1 09:34:15 np0005464214 systemd[1]: libpod-47557da4af93dc248e9d6e8b38121fc2b62a545c4076cb054ac005f6d89f43ac.scope: Deactivated successfully.
Oct  1 09:34:15 np0005464214 podman[265197]: 2025-10-01 13:34:15.635001246 +0000 UTC m=+0.175667321 container attach 47557da4af93dc248e9d6e8b38121fc2b62a545c4076cb054ac005f6d89f43ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_bell, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3)
Oct  1 09:34:15 np0005464214 podman[265197]: 2025-10-01 13:34:15.635531283 +0000 UTC m=+0.176197298 container died 47557da4af93dc248e9d6e8b38121fc2b62a545c4076cb054ac005f6d89f43ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_bell, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 09:34:15 np0005464214 systemd[1]: var-lib-containers-storage-overlay-c861fcd4030294f23861f972215bf0f55e641010d1faaa0ec9f26615df3376f3-merged.mount: Deactivated successfully.
Oct  1 09:34:15 np0005464214 podman[265197]: 2025-10-01 13:34:15.776682016 +0000 UTC m=+0.317348001 container remove 47557da4af93dc248e9d6e8b38121fc2b62a545c4076cb054ac005f6d89f43ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_bell, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 09:34:15 np0005464214 systemd[1]: libpod-conmon-47557da4af93dc248e9d6e8b38121fc2b62a545c4076cb054ac005f6d89f43ac.scope: Deactivated successfully.
Oct  1 09:34:16 np0005464214 podman[265237]: 2025-10-01 13:34:16.010596939 +0000 UTC m=+0.093348612 container create 5bb50e35fea28c1fea874c12320de0a507b63a9ffeda58921012a32ecdb496aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_bohr, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct  1 09:34:16 np0005464214 podman[265237]: 2025-10-01 13:34:15.946930923 +0000 UTC m=+0.029682596 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 09:34:16 np0005464214 systemd[1]: Started libpod-conmon-5bb50e35fea28c1fea874c12320de0a507b63a9ffeda58921012a32ecdb496aa.scope.
Oct  1 09:34:16 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:34:16 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af515715e777b927dac57c83f10549b2413e71ba1e88f8024712fc021aaa090b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 09:34:16 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af515715e777b927dac57c83f10549b2413e71ba1e88f8024712fc021aaa090b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 09:34:16 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af515715e777b927dac57c83f10549b2413e71ba1e88f8024712fc021aaa090b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 09:34:16 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af515715e777b927dac57c83f10549b2413e71ba1e88f8024712fc021aaa090b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 09:34:16 np0005464214 podman[265237]: 2025-10-01 13:34:16.253253831 +0000 UTC m=+0.336005534 container init 5bb50e35fea28c1fea874c12320de0a507b63a9ffeda58921012a32ecdb496aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_bohr, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct  1 09:34:16 np0005464214 podman[265237]: 2025-10-01 13:34:16.26578114 +0000 UTC m=+0.348532823 container start 5bb50e35fea28c1fea874c12320de0a507b63a9ffeda58921012a32ecdb496aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_bohr, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct  1 09:34:16 np0005464214 podman[265237]: 2025-10-01 13:34:16.38297592 +0000 UTC m=+0.465727673 container attach 5bb50e35fea28c1fea874c12320de0a507b63a9ffeda58921012a32ecdb496aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_bohr, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  1 09:34:16 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v889: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:34:16 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:34:17 np0005464214 sweet_bohr[265254]: {
Oct  1 09:34:17 np0005464214 sweet_bohr[265254]:    "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982": {
Oct  1 09:34:17 np0005464214 sweet_bohr[265254]:        "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 09:34:17 np0005464214 sweet_bohr[265254]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  1 09:34:17 np0005464214 sweet_bohr[265254]:        "osd_id": 0,
Oct  1 09:34:17 np0005464214 sweet_bohr[265254]:        "osd_uuid": "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982",
Oct  1 09:34:17 np0005464214 sweet_bohr[265254]:        "type": "bluestore"
Oct  1 09:34:17 np0005464214 sweet_bohr[265254]:    },
Oct  1 09:34:17 np0005464214 sweet_bohr[265254]:    "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9": {
Oct  1 09:34:17 np0005464214 sweet_bohr[265254]:        "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 09:34:17 np0005464214 sweet_bohr[265254]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  1 09:34:17 np0005464214 sweet_bohr[265254]:        "osd_id": 2,
Oct  1 09:34:17 np0005464214 sweet_bohr[265254]:        "osd_uuid": "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9",
Oct  1 09:34:17 np0005464214 sweet_bohr[265254]:        "type": "bluestore"
Oct  1 09:34:17 np0005464214 sweet_bohr[265254]:    },
Oct  1 09:34:17 np0005464214 sweet_bohr[265254]:    "f5852bc7-e830-489a-b8a9-42dfbbe71426": {
Oct  1 09:34:17 np0005464214 sweet_bohr[265254]:        "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 09:34:17 np0005464214 sweet_bohr[265254]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  1 09:34:17 np0005464214 sweet_bohr[265254]:        "osd_id": 1,
Oct  1 09:34:17 np0005464214 sweet_bohr[265254]:        "osd_uuid": "f5852bc7-e830-489a-b8a9-42dfbbe71426",
Oct  1 09:34:17 np0005464214 sweet_bohr[265254]:        "type": "bluestore"
Oct  1 09:34:17 np0005464214 sweet_bohr[265254]:    }
Oct  1 09:34:17 np0005464214 sweet_bohr[265254]: }
Oct  1 09:34:17 np0005464214 systemd[1]: libpod-5bb50e35fea28c1fea874c12320de0a507b63a9ffeda58921012a32ecdb496aa.scope: Deactivated successfully.
Oct  1 09:34:17 np0005464214 systemd[1]: libpod-5bb50e35fea28c1fea874c12320de0a507b63a9ffeda58921012a32ecdb496aa.scope: Consumed 1.090s CPU time.
Oct  1 09:34:17 np0005464214 podman[265237]: 2025-10-01 13:34:17.349871531 +0000 UTC m=+1.432623184 container died 5bb50e35fea28c1fea874c12320de0a507b63a9ffeda58921012a32ecdb496aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_bohr, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  1 09:34:17 np0005464214 systemd[1]: var-lib-containers-storage-overlay-af515715e777b927dac57c83f10549b2413e71ba1e88f8024712fc021aaa090b-merged.mount: Deactivated successfully.
Oct  1 09:34:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 09:34:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 09:34:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 09:34:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 09:34:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 09:34:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 09:34:18 np0005464214 podman[265237]: 2025-10-01 13:34:18.12351859 +0000 UTC m=+2.206270223 container remove 5bb50e35fea28c1fea874c12320de0a507b63a9ffeda58921012a32ecdb496aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_bohr, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 09:34:18 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  1 09:34:18 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:34:18 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  1 09:34:18 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:34:18 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev fdd626ed-1066-47fa-9877-ab38ff872ac2 does not exist
Oct  1 09:34:18 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev 5bda3b85-5fa6-4e19-ae5a-261a0078cb66 does not exist
Oct  1 09:34:18 np0005464214 systemd[1]: libpod-conmon-5bb50e35fea28c1fea874c12320de0a507b63a9ffeda58921012a32ecdb496aa.scope: Deactivated successfully.
Oct  1 09:34:18 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v890: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:34:19 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:34:19 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:34:20 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v891: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:34:21 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:34:22 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v892: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:34:24 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v893: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:34:26 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v894: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:34:26 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:34:28 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v895: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:34:30 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v896: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:34:31 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:34:31 np0005464214 ceph-mon[74802]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #39. Immutable memtables: 0.
Oct  1 09:34:31 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:34:31.772613) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  1 09:34:31 np0005464214 ceph-mon[74802]: rocksdb: [db/flush_job.cc:856] [default] [JOB 17] Flushing memtable with next log file: 39
Oct  1 09:34:31 np0005464214 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759325671772660, "job": 17, "event": "flush_started", "num_memtables": 1, "num_entries": 2044, "num_deletes": 250, "total_data_size": 3484086, "memory_usage": 3539688, "flush_reason": "Manual Compaction"}
Oct  1 09:34:31 np0005464214 ceph-mon[74802]: rocksdb: [db/flush_job.cc:885] [default] [JOB 17] Level-0 flush table #40: started
Oct  1 09:34:31 np0005464214 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759325671877387, "cf_name": "default", "job": 17, "event": "table_file_creation", "file_number": 40, "file_size": 1975939, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16427, "largest_seqno": 18470, "table_properties": {"data_size": 1969425, "index_size": 3396, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2053, "raw_key_size": 16564, "raw_average_key_size": 20, "raw_value_size": 1954910, "raw_average_value_size": 2389, "num_data_blocks": 157, "num_entries": 818, "num_filter_entries": 818, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759325439, "oldest_key_time": 1759325439, "file_creation_time": 1759325671, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e1fb0bd7-c6bf-40f2-8bfb-ffd039289f42", "db_session_id": "NJZTWL88H5HSB4Q4NEC9", "orig_file_number": 40, "seqno_to_time_mapping": "N/A"}}
Oct  1 09:34:31 np0005464214 ceph-mon[74802]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 17] Flush lasted 104847 microseconds, and 6579 cpu microseconds.
Oct  1 09:34:31 np0005464214 ceph-mon[74802]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  1 09:34:31 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:34:31.877457) [db/flush_job.cc:967] [default] [JOB 17] Level-0 flush table #40: 1975939 bytes OK
Oct  1 09:34:31 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:34:31.877491) [db/memtable_list.cc:519] [default] Level-0 commit table #40 started
Oct  1 09:34:31 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:34:31.898716) [db/memtable_list.cc:722] [default] Level-0 commit table #40: memtable #1 done
Oct  1 09:34:31 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:34:31.898789) EVENT_LOG_v1 {"time_micros": 1759325671898778, "job": 17, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  1 09:34:31 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:34:31.898820) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  1 09:34:31 np0005464214 ceph-mon[74802]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 17] Try to delete WAL files size 3475504, prev total WAL file size 3475504, number of live WAL files 2.
Oct  1 09:34:31 np0005464214 ceph-mon[74802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000036.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  1 09:34:31 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:34:31.900632) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D67727374617400353032' seq:72057594037927935, type:22 .. '6D67727374617400373533' seq:0, type:0; will stop at (end)
Oct  1 09:34:31 np0005464214 ceph-mon[74802]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 18] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  1 09:34:31 np0005464214 ceph-mon[74802]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 17 Base level 0, inputs: [40(1929KB)], [38(7724KB)]
Oct  1 09:34:31 np0005464214 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759325671900703, "job": 18, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [40], "files_L6": [38], "score": -1, "input_data_size": 9886269, "oldest_snapshot_seqno": -1}
Oct  1 09:34:32 np0005464214 ceph-mon[74802]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 18] Generated table #41: 4450 keys, 8015062 bytes, temperature: kUnknown
Oct  1 09:34:32 np0005464214 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759325672422319, "cf_name": "default", "job": 18, "event": "table_file_creation", "file_number": 41, "file_size": 8015062, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7984241, "index_size": 18615, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11141, "raw_key_size": 107281, "raw_average_key_size": 24, "raw_value_size": 7902809, "raw_average_value_size": 1775, "num_data_blocks": 792, "num_entries": 4450, "num_filter_entries": 4450, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759324077, "oldest_key_time": 0, "file_creation_time": 1759325671, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e1fb0bd7-c6bf-40f2-8bfb-ffd039289f42", "db_session_id": "NJZTWL88H5HSB4Q4NEC9", "orig_file_number": 41, "seqno_to_time_mapping": "N/A"}}
Oct  1 09:34:32 np0005464214 ceph-mon[74802]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  1 09:34:32 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v897: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:34:32 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:34:32.422833) [db/compaction/compaction_job.cc:1663] [default] [JOB 18] Compacted 1@0 + 1@6 files to L6 => 8015062 bytes
Oct  1 09:34:32 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:34:32.486873) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 19.0 rd, 15.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.9, 7.5 +0.0 blob) out(7.6 +0.0 blob), read-write-amplify(9.1) write-amplify(4.1) OK, records in: 4853, records dropped: 403 output_compression: NoCompression
Oct  1 09:34:32 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:34:32.486934) EVENT_LOG_v1 {"time_micros": 1759325672486910, "job": 18, "event": "compaction_finished", "compaction_time_micros": 521375, "compaction_time_cpu_micros": 37287, "output_level": 6, "num_output_files": 1, "total_output_size": 8015062, "num_input_records": 4853, "num_output_records": 4450, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  1 09:34:32 np0005464214 ceph-mon[74802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000040.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  1 09:34:32 np0005464214 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759325672487776, "job": 18, "event": "table_file_deletion", "file_number": 40}
Oct  1 09:34:32 np0005464214 ceph-mon[74802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000038.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  1 09:34:32 np0005464214 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759325672489716, "job": 18, "event": "table_file_deletion", "file_number": 38}
Oct  1 09:34:32 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:34:31.900498) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 09:34:32 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:34:32.489855) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 09:34:32 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:34:32.489865) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 09:34:32 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:34:32.489869) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 09:34:32 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:34:32.489880) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 09:34:32 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:34:32.489883) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 09:34:32 np0005464214 ceph-mon[74802]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #42. Immutable memtables: 0.
Oct  1 09:34:32 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:34:32.721813) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  1 09:34:32 np0005464214 ceph-mon[74802]: rocksdb: [db/flush_job.cc:856] [default] [JOB 19] Flushing memtable with next log file: 42
Oct  1 09:34:32 np0005464214 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759325672721922, "job": 19, "event": "flush_started", "num_memtables": 1, "num_entries": 263, "num_deletes": 251, "total_data_size": 14510, "memory_usage": 19528, "flush_reason": "Manual Compaction"}
Oct  1 09:34:32 np0005464214 ceph-mon[74802]: rocksdb: [db/flush_job.cc:885] [default] [JOB 19] Level-0 flush table #43: started
Oct  1 09:34:32 np0005464214 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759325672737560, "cf_name": "default", "job": 19, "event": "table_file_creation", "file_number": 43, "file_size": 14466, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 18471, "largest_seqno": 18733, "table_properties": {"data_size": 12632, "index_size": 67, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 709, "raw_key_size": 4730, "raw_average_key_size": 18, "raw_value_size": 9151, "raw_average_value_size": 35, "num_data_blocks": 3, "num_entries": 260, "num_filter_entries": 260, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759325672, "oldest_key_time": 1759325672, "file_creation_time": 1759325672, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e1fb0bd7-c6bf-40f2-8bfb-ffd039289f42", "db_session_id": "NJZTWL88H5HSB4Q4NEC9", "orig_file_number": 43, "seqno_to_time_mapping": "N/A"}}
Oct  1 09:34:32 np0005464214 ceph-mon[74802]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 19] Flush lasted 15800 microseconds, and 1776 cpu microseconds.
Oct  1 09:34:32 np0005464214 ceph-mon[74802]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  1 09:34:32 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:34:32.737630) [db/flush_job.cc:967] [default] [JOB 19] Level-0 flush table #43: 14466 bytes OK
Oct  1 09:34:32 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:34:32.737657) [db/memtable_list.cc:519] [default] Level-0 commit table #43 started
Oct  1 09:34:32 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:34:32.741068) [db/memtable_list.cc:722] [default] Level-0 commit table #43: memtable #1 done
Oct  1 09:34:32 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:34:32.741094) EVENT_LOG_v1 {"time_micros": 1759325672741085, "job": 19, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  1 09:34:32 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:34:32.741122) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  1 09:34:32 np0005464214 ceph-mon[74802]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 19] Try to delete WAL files size 12466, prev total WAL file size 12466, number of live WAL files 2.
Oct  1 09:34:32 np0005464214 ceph-mon[74802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000039.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  1 09:34:32 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:34:32.741663) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031323535' seq:72057594037927935, type:22 .. '7061786F730031353037' seq:0, type:0; will stop at (end)
Oct  1 09:34:32 np0005464214 ceph-mon[74802]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 20] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  1 09:34:32 np0005464214 ceph-mon[74802]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 19 Base level 0, inputs: [43(14KB)], [41(7827KB)]
Oct  1 09:34:32 np0005464214 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759325672741724, "job": 20, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [43], "files_L6": [41], "score": -1, "input_data_size": 8029528, "oldest_snapshot_seqno": -1}
Oct  1 09:34:32 np0005464214 ceph-mon[74802]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 20] Generated table #44: 4203 keys, 6265676 bytes, temperature: kUnknown
Oct  1 09:34:32 np0005464214 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759325672841638, "cf_name": "default", "job": 20, "event": "table_file_creation", "file_number": 44, "file_size": 6265676, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 6238181, "index_size": 15866, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10565, "raw_key_size": 102844, "raw_average_key_size": 24, "raw_value_size": 6162736, "raw_average_value_size": 1466, "num_data_blocks": 667, "num_entries": 4203, "num_filter_entries": 4203, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759324077, "oldest_key_time": 0, "file_creation_time": 1759325672, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e1fb0bd7-c6bf-40f2-8bfb-ffd039289f42", "db_session_id": "NJZTWL88H5HSB4Q4NEC9", "orig_file_number": 44, "seqno_to_time_mapping": "N/A"}}
Oct  1 09:34:32 np0005464214 ceph-mon[74802]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  1 09:34:32 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:34:32.842046) [db/compaction/compaction_job.cc:1663] [default] [JOB 20] Compacted 1@0 + 1@6 files to L6 => 6265676 bytes
Oct  1 09:34:32 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:34:32.848804) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 80.3 rd, 62.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.0, 7.6 +0.0 blob) out(6.0 +0.0 blob), read-write-amplify(988.2) write-amplify(433.1) OK, records in: 4710, records dropped: 507 output_compression: NoCompression
Oct  1 09:34:32 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:34:32.848849) EVENT_LOG_v1 {"time_micros": 1759325672848829, "job": 20, "event": "compaction_finished", "compaction_time_micros": 100036, "compaction_time_cpu_micros": 28769, "output_level": 6, "num_output_files": 1, "total_output_size": 6265676, "num_input_records": 4710, "num_output_records": 4203, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  1 09:34:32 np0005464214 ceph-mon[74802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000043.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  1 09:34:32 np0005464214 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759325672849055, "job": 20, "event": "table_file_deletion", "file_number": 43}
Oct  1 09:34:32 np0005464214 ceph-mon[74802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000041.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  1 09:34:32 np0005464214 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759325672854076, "job": 20, "event": "table_file_deletion", "file_number": 41}
Oct  1 09:34:32 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:34:32.741531) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 09:34:32 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:34:32.854209) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 09:34:32 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:34:32.854218) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 09:34:32 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:34:32.854221) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 09:34:32 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:34:32.854224) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 09:34:32 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:34:32.854227) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 09:34:34 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v898: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:34:36 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v899: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:34:36 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:34:37 np0005464214 podman[265355]: 2025-10-01 13:34:37.560563994 +0000 UTC m=+0.094056434 container health_status c13c85560319b246b9406aed1b6b3015b2c424d3ffff37d0301b6634f5976b0d (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=36bccb96575468ec919301205d8daa2c, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=iscsid, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250923, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct  1 09:34:37 np0005464214 podman[265356]: 2025-10-01 13:34:37.573170166 +0000 UTC m=+0.103676131 container health_status dfa509645741d50db256c392f2c7e50ef8a1653f296eb853aefbe3d2ba4746c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250923, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=36bccb96575468ec919301205d8daa2c, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_id=ovn_metadata_agent, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Oct  1 09:34:37 np0005464214 podman[265353]: 2025-10-01 13:34:37.593924486 +0000 UTC m=+0.137098585 container health_status 583cde33fa111e3f81ff9a7adf51b7bee43cd123d54d60383b2d474b3b5066ad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20250923, org.label-schema.schema-version=1.0, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_id=ovn_controller)
Oct  1 09:34:37 np0005464214 podman[265354]: 2025-10-01 13:34:37.595772795 +0000 UTC m=+0.134732859 container health_status a1dc94033b3cdaf0c1939938a446bc6911aa0e2bf0fa0c22c3e618390e8d96e1 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20250923, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct  1 09:34:38 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v900: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:34:40 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v901: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:34:41 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:34:42 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v902: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:34:44 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v903: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:34:46 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v904: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:34:46 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:34:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] Optimize plan auto_2025-10-01_13:34:47
Oct  1 09:34:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  1 09:34:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] do_upmap
Oct  1 09:34:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] pools ['volumes', 'cephfs.cephfs.meta', 'vms', '.rgw.root', '.mgr', 'default.rgw.log', 'backups', 'images', 'default.rgw.meta', 'default.rgw.control', 'cephfs.cephfs.data']
Oct  1 09:34:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] prepared 0/10 changes
Oct  1 09:34:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 09:34:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 09:34:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 09:34:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 09:34:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 09:34:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 09:34:47 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  1 09:34:47 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  1 09:34:47 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  1 09:34:47 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  1 09:34:47 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  1 09:34:47 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  1 09:34:47 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  1 09:34:47 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  1 09:34:47 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  1 09:34:47 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  1 09:34:48 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v905: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:34:50 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v906: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:34:51 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:34:52 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v907: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:34:53 np0005464214 nova_compute[260022]: 2025-10-01 13:34:53.347 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 09:34:53 np0005464214 nova_compute[260022]: 2025-10-01 13:34:53.347 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Oct  1 09:34:53 np0005464214 nova_compute[260022]: 2025-10-01 13:34:53.368 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Oct  1 09:34:53 np0005464214 nova_compute[260022]: 2025-10-01 13:34:53.369 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 09:34:53 np0005464214 nova_compute[260022]: 2025-10-01 13:34:53.370 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Oct  1 09:34:53 np0005464214 nova_compute[260022]: 2025-10-01 13:34:53.383 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 09:34:54 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v908: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:34:55 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  1 09:34:55 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2770122084' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  1 09:34:55 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  1 09:34:55 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2770122084' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  1 09:34:55 np0005464214 nova_compute[260022]: 2025-10-01 13:34:55.392 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 09:34:55 np0005464214 nova_compute[260022]: 2025-10-01 13:34:55.392 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 09:34:56 np0005464214 nova_compute[260022]: 2025-10-01 13:34:56.344 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 09:34:56 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v909: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:34:56 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:34:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] _maybe_adjust
Oct  1 09:34:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:34:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  1 09:34:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:34:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 09:34:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:34:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 09:34:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:34:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 09:34:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:34:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 09:34:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:34:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct  1 09:34:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:34:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 09:34:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:34:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  1 09:34:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:34:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  1 09:34:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:34:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 09:34:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:34:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  1 09:34:57 np0005464214 nova_compute[260022]: 2025-10-01 13:34:57.340 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 09:34:57 np0005464214 nova_compute[260022]: 2025-10-01 13:34:57.344 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 09:34:57 np0005464214 nova_compute[260022]: 2025-10-01 13:34:57.344 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  1 09:34:57 np0005464214 nova_compute[260022]: 2025-10-01 13:34:57.345 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  1 09:34:57 np0005464214 nova_compute[260022]: 2025-10-01 13:34:57.359 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  1 09:34:57 np0005464214 nova_compute[260022]: 2025-10-01 13:34:57.359 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 09:34:57 np0005464214 nova_compute[260022]: 2025-10-01 13:34:57.360 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  1 09:34:58 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v910: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:34:59 np0005464214 nova_compute[260022]: 2025-10-01 13:34:59.345 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 09:34:59 np0005464214 nova_compute[260022]: 2025-10-01 13:34:59.346 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 09:34:59 np0005464214 nova_compute[260022]: 2025-10-01 13:34:59.347 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 09:34:59 np0005464214 nova_compute[260022]: 2025-10-01 13:34:59.379 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 09:34:59 np0005464214 nova_compute[260022]: 2025-10-01 13:34:59.379 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 09:34:59 np0005464214 nova_compute[260022]: 2025-10-01 13:34:59.380 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 09:34:59 np0005464214 nova_compute[260022]: 2025-10-01 13:34:59.380 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  1 09:34:59 np0005464214 nova_compute[260022]: 2025-10-01 13:34:59.380 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 09:34:59 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  1 09:34:59 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/84078365' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  1 09:34:59 np0005464214 nova_compute[260022]: 2025-10-01 13:34:59.898 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.518s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 09:35:00 np0005464214 nova_compute[260022]: 2025-10-01 13:35:00.133 2 WARNING nova.virt.libvirt.driver [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  1 09:35:00 np0005464214 nova_compute[260022]: 2025-10-01 13:35:00.135 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5191MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  1 09:35:00 np0005464214 nova_compute[260022]: 2025-10-01 13:35:00.135 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 09:35:00 np0005464214 nova_compute[260022]: 2025-10-01 13:35:00.135 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 09:35:00 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v911: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:35:00 np0005464214 nova_compute[260022]: 2025-10-01 13:35:00.857 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  1 09:35:00 np0005464214 nova_compute[260022]: 2025-10-01 13:35:00.857 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  1 09:35:00 np0005464214 nova_compute[260022]: 2025-10-01 13:35:00.940 2 DEBUG nova.scheduler.client.report [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Refreshing inventories for resource provider c1b9017d-7e6f-44ea-9ee2-bc19313d736f _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Oct  1 09:35:01 np0005464214 nova_compute[260022]: 2025-10-01 13:35:01.041 2 DEBUG nova.scheduler.client.report [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Updating ProviderTree inventory for provider c1b9017d-7e6f-44ea-9ee2-bc19313d736f from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Oct  1 09:35:01 np0005464214 nova_compute[260022]: 2025-10-01 13:35:01.041 2 DEBUG nova.compute.provider_tree [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Updating inventory in ProviderTree for provider c1b9017d-7e6f-44ea-9ee2-bc19313d736f with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Oct  1 09:35:01 np0005464214 nova_compute[260022]: 2025-10-01 13:35:01.063 2 DEBUG nova.scheduler.client.report [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Refreshing aggregate associations for resource provider c1b9017d-7e6f-44ea-9ee2-bc19313d736f, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Oct  1 09:35:01 np0005464214 nova_compute[260022]: 2025-10-01 13:35:01.085 2 DEBUG nova.scheduler.client.report [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Refreshing trait associations for resource provider c1b9017d-7e6f-44ea-9ee2-bc19313d736f, traits: HW_CPU_X86_SSE42,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_BMI,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_SSE2,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_BMI2,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_RESCUE_BFV,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NODE,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_F16C,HW_CPU_X86_MMX,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_SSE41,COMPUTE_DEVICE_TAGGING,COMPUTE_VOLUME_EXTEND,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_AVX,HW_CPU_X86_ABM,HW_CPU_X86_CLMUL,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_SVM,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_SECURITY_TPM_2_0,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_STORAGE_BUS_FDC,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_AESNI,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_ACCELERATORS,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_AMD_SVM,HW_CPU_X86_AVX2,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_FMA3,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_SSE,HW_CPU_X86_SHA,HW_CPU_X86_SSSE3,HW_CPU_X86_SSE4A _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Oct  1 09:35:01 np0005464214 nova_compute[260022]: 2025-10-01 13:35:01.104 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 09:35:01 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  1 09:35:01 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3210871321' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  1 09:35:01 np0005464214 nova_compute[260022]: 2025-10-01 13:35:01.538 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.434s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 09:35:01 np0005464214 nova_compute[260022]: 2025-10-01 13:35:01.546 2 DEBUG nova.compute.provider_tree [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Inventory has not changed in ProviderTree for provider: c1b9017d-7e6f-44ea-9ee2-bc19313d736f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  1 09:35:01 np0005464214 nova_compute[260022]: 2025-10-01 13:35:01.565 2 DEBUG nova.scheduler.client.report [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Inventory has not changed for provider c1b9017d-7e6f-44ea-9ee2-bc19313d736f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  1 09:35:01 np0005464214 nova_compute[260022]: 2025-10-01 13:35:01.568 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  1 09:35:01 np0005464214 nova_compute[260022]: 2025-10-01 13:35:01.568 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.433s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 09:35:01 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:35:02 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v912: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:35:04 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v913: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:35:06 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v914: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:35:06 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:35:08 np0005464214 podman[265484]: 2025-10-01 13:35:08.123888007 +0000 UTC m=+0.069213334 container health_status a1dc94033b3cdaf0c1939938a446bc6911aa0e2bf0fa0c22c3e618390e8d96e1 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.build-date=20250923, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=36bccb96575468ec919301205d8daa2c, org.label-schema.vendor=CentOS, container_name=multipathd, managed_by=edpm_ansible, tcib_managed=true)
Oct  1 09:35:08 np0005464214 podman[265491]: 2025-10-01 13:35:08.126294373 +0000 UTC m=+0.065496455 container health_status dfa509645741d50db256c392f2c7e50ef8a1653f296eb853aefbe3d2ba4746c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20250923)
Oct  1 09:35:08 np0005464214 podman[265483]: 2025-10-01 13:35:08.134107142 +0000 UTC m=+0.097811174 container health_status 583cde33fa111e3f81ff9a7adf51b7bee43cd123d54d60383b2d474b3b5066ad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, container_name=ovn_controller, org.label-schema.build-date=20250923, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=36bccb96575468ec919301205d8daa2c, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Oct  1 09:35:08 np0005464214 podman[265489]: 2025-10-01 13:35:08.158059864 +0000 UTC m=+0.100735847 container health_status c13c85560319b246b9406aed1b6b3015b2c424d3ffff37d0301b6634f5976b0d (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20250923, org.label-schema.license=GPLv2, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Oct  1 09:35:08 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v915: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:35:10 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v916: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:35:11 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:35:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:35:12.301 161890 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 09:35:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:35:12.301 161890 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 09:35:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:35:12.302 161890 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 09:35:12 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v917: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:35:14 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v918: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:35:16 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v919: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:35:16 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:35:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 09:35:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 09:35:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 09:35:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 09:35:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 09:35:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 09:35:18 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v920: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:35:19 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  1 09:35:19 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  1 09:35:19 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  1 09:35:19 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  1 09:35:19 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  1 09:35:19 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:35:19 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev 59c67d39-745e-40a6-8d0d-6438665aaf60 does not exist
Oct  1 09:35:19 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev 3d312aae-ddbf-46fd-8933-2749ecd87eed does not exist
Oct  1 09:35:19 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev 07f94bf8-0484-4469-9014-fd03aa6d21c8 does not exist
Oct  1 09:35:19 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  1 09:35:19 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  1 09:35:19 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  1 09:35:19 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  1 09:35:19 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  1 09:35:19 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  1 09:35:19 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  1 09:35:19 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:35:19 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  1 09:35:20 np0005464214 podman[265826]: 2025-10-01 13:35:20.114933438 +0000 UTC m=+0.095517790 container create 9c9de0f73bf102262907c41ca2a29708adaab15046c9a6011946ae11ae25a61b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_nash, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 09:35:20 np0005464214 podman[265826]: 2025-10-01 13:35:20.057020546 +0000 UTC m=+0.037604908 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 09:35:20 np0005464214 systemd[1]: Started libpod-conmon-9c9de0f73bf102262907c41ca2a29708adaab15046c9a6011946ae11ae25a61b.scope.
Oct  1 09:35:20 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:35:20 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v921: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:35:20 np0005464214 podman[265826]: 2025-10-01 13:35:20.626110207 +0000 UTC m=+0.606694619 container init 9c9de0f73bf102262907c41ca2a29708adaab15046c9a6011946ae11ae25a61b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_nash, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 09:35:20 np0005464214 podman[265826]: 2025-10-01 13:35:20.636720814 +0000 UTC m=+0.617305176 container start 9c9de0f73bf102262907c41ca2a29708adaab15046c9a6011946ae11ae25a61b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_nash, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 09:35:20 np0005464214 eloquent_nash[265842]: 167 167
Oct  1 09:35:20 np0005464214 systemd[1]: libpod-9c9de0f73bf102262907c41ca2a29708adaab15046c9a6011946ae11ae25a61b.scope: Deactivated successfully.
Oct  1 09:35:20 np0005464214 podman[265826]: 2025-10-01 13:35:20.738644107 +0000 UTC m=+0.719228429 container attach 9c9de0f73bf102262907c41ca2a29708adaab15046c9a6011946ae11ae25a61b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_nash, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 09:35:20 np0005464214 podman[265826]: 2025-10-01 13:35:20.739187155 +0000 UTC m=+0.719771497 container died 9c9de0f73bf102262907c41ca2a29708adaab15046c9a6011946ae11ae25a61b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_nash, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 09:35:21 np0005464214 systemd[1]: var-lib-containers-storage-overlay-a48255cfc7921e33424bf7dfa4892b8be73e0a8cf7cc2cd7519775b7c583d420-merged.mount: Deactivated successfully.
Oct  1 09:35:21 np0005464214 podman[265826]: 2025-10-01 13:35:21.24024831 +0000 UTC m=+1.220832672 container remove 9c9de0f73bf102262907c41ca2a29708adaab15046c9a6011946ae11ae25a61b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_nash, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 09:35:21 np0005464214 systemd[1]: libpod-conmon-9c9de0f73bf102262907c41ca2a29708adaab15046c9a6011946ae11ae25a61b.scope: Deactivated successfully.
Oct  1 09:35:21 np0005464214 podman[265865]: 2025-10-01 13:35:21.400212711 +0000 UTC m=+0.024905343 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 09:35:21 np0005464214 podman[265865]: 2025-10-01 13:35:21.595705732 +0000 UTC m=+0.220398324 container create 191412791f1a77f5c1aa49c481212c227e4028e75a54558ddcbf8f5fe34bac8b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_mcclintock, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Oct  1 09:35:21 np0005464214 systemd[1]: Started libpod-conmon-191412791f1a77f5c1aa49c481212c227e4028e75a54558ddcbf8f5fe34bac8b.scope.
Oct  1 09:35:21 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:35:21 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/819eff99dfadd7ceba38e6cf76382d89fd964ca84d61b65a7b5b97f511a8f25f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 09:35:21 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/819eff99dfadd7ceba38e6cf76382d89fd964ca84d61b65a7b5b97f511a8f25f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 09:35:21 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/819eff99dfadd7ceba38e6cf76382d89fd964ca84d61b65a7b5b97f511a8f25f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 09:35:21 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/819eff99dfadd7ceba38e6cf76382d89fd964ca84d61b65a7b5b97f511a8f25f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 09:35:21 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/819eff99dfadd7ceba38e6cf76382d89fd964ca84d61b65a7b5b97f511a8f25f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  1 09:35:21 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:35:21 np0005464214 podman[265865]: 2025-10-01 13:35:21.96237131 +0000 UTC m=+0.587063952 container init 191412791f1a77f5c1aa49c481212c227e4028e75a54558ddcbf8f5fe34bac8b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_mcclintock, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef)
Oct  1 09:35:21 np0005464214 podman[265865]: 2025-10-01 13:35:21.972841113 +0000 UTC m=+0.597533705 container start 191412791f1a77f5c1aa49c481212c227e4028e75a54558ddcbf8f5fe34bac8b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_mcclintock, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 09:35:22 np0005464214 podman[265865]: 2025-10-01 13:35:22.063917733 +0000 UTC m=+0.688610365 container attach 191412791f1a77f5c1aa49c481212c227e4028e75a54558ddcbf8f5fe34bac8b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_mcclintock, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 09:35:22 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v922: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:35:23 np0005464214 sharp_mcclintock[265881]: --> passed data devices: 0 physical, 3 LVM
Oct  1 09:35:23 np0005464214 sharp_mcclintock[265881]: --> relative data size: 1.0
Oct  1 09:35:23 np0005464214 sharp_mcclintock[265881]: --> All data devices are unavailable
Oct  1 09:35:23 np0005464214 systemd[1]: libpod-191412791f1a77f5c1aa49c481212c227e4028e75a54558ddcbf8f5fe34bac8b.scope: Deactivated successfully.
Oct  1 09:35:23 np0005464214 systemd[1]: libpod-191412791f1a77f5c1aa49c481212c227e4028e75a54558ddcbf8f5fe34bac8b.scope: Consumed 1.077s CPU time.
Oct  1 09:35:23 np0005464214 podman[265865]: 2025-10-01 13:35:23.09890946 +0000 UTC m=+1.723602052 container died 191412791f1a77f5c1aa49c481212c227e4028e75a54558ddcbf8f5fe34bac8b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_mcclintock, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default)
Oct  1 09:35:23 np0005464214 systemd[1]: var-lib-containers-storage-overlay-819eff99dfadd7ceba38e6cf76382d89fd964ca84d61b65a7b5b97f511a8f25f-merged.mount: Deactivated successfully.
Oct  1 09:35:23 np0005464214 podman[265865]: 2025-10-01 13:35:23.303189991 +0000 UTC m=+1.927882573 container remove 191412791f1a77f5c1aa49c481212c227e4028e75a54558ddcbf8f5fe34bac8b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_mcclintock, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 09:35:23 np0005464214 systemd[1]: libpod-conmon-191412791f1a77f5c1aa49c481212c227e4028e75a54558ddcbf8f5fe34bac8b.scope: Deactivated successfully.
Oct  1 09:35:24 np0005464214 podman[266066]: 2025-10-01 13:35:24.089215786 +0000 UTC m=+0.100421248 container create c4009d8379b10bff5db961bb4b1654a0062a414d7b1e9296f907f671436e2508 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_meitner, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 09:35:24 np0005464214 podman[266066]: 2025-10-01 13:35:24.025917061 +0000 UTC m=+0.037122573 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 09:35:24 np0005464214 systemd[1]: Started libpod-conmon-c4009d8379b10bff5db961bb4b1654a0062a414d7b1e9296f907f671436e2508.scope.
Oct  1 09:35:24 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:35:24 np0005464214 podman[266066]: 2025-10-01 13:35:24.273571753 +0000 UTC m=+0.284777215 container init c4009d8379b10bff5db961bb4b1654a0062a414d7b1e9296f907f671436e2508 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_meitner, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 09:35:24 np0005464214 podman[266066]: 2025-10-01 13:35:24.283844637 +0000 UTC m=+0.295050099 container start c4009d8379b10bff5db961bb4b1654a0062a414d7b1e9296f907f671436e2508 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_meitner, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct  1 09:35:24 np0005464214 cranky_meitner[266084]: 167 167
Oct  1 09:35:24 np0005464214 systemd[1]: libpod-c4009d8379b10bff5db961bb4b1654a0062a414d7b1e9296f907f671436e2508.scope: Deactivated successfully.
Oct  1 09:35:24 np0005464214 podman[266066]: 2025-10-01 13:35:24.301225655 +0000 UTC m=+0.312431127 container attach c4009d8379b10bff5db961bb4b1654a0062a414d7b1e9296f907f671436e2508 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_meitner, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct  1 09:35:24 np0005464214 podman[266066]: 2025-10-01 13:35:24.301636818 +0000 UTC m=+0.312842270 container died c4009d8379b10bff5db961bb4b1654a0062a414d7b1e9296f907f671436e2508 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_meitner, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  1 09:35:24 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v923: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:35:24 np0005464214 systemd[1]: var-lib-containers-storage-overlay-4019de7d2338c3ad157c9e10035b2d7cbddf15625408a63cdabbbb8215c6df55-merged.mount: Deactivated successfully.
Oct  1 09:35:24 np0005464214 podman[266066]: 2025-10-01 13:35:24.80279958 +0000 UTC m=+0.814005002 container remove c4009d8379b10bff5db961bb4b1654a0062a414d7b1e9296f907f671436e2508 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_meitner, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 09:35:24 np0005464214 systemd[1]: libpod-conmon-c4009d8379b10bff5db961bb4b1654a0062a414d7b1e9296f907f671436e2508.scope: Deactivated successfully.
Oct  1 09:35:25 np0005464214 podman[266110]: 2025-10-01 13:35:25.033698795 +0000 UTC m=+0.070258338 container create 3febc8671aa504144c6bcfa01337a537f966fac72160192a3f2d60cf8724a1d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_tu, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Oct  1 09:35:25 np0005464214 podman[266110]: 2025-10-01 13:35:24.996442159 +0000 UTC m=+0.033001702 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 09:35:25 np0005464214 systemd[1]: Started libpod-conmon-3febc8671aa504144c6bcfa01337a537f966fac72160192a3f2d60cf8724a1d9.scope.
Oct  1 09:35:25 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:35:25 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be6db679ed6f57944373b0f142bc6f6ce19e7c529bc9e0a59e3b0848cb1cb195/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 09:35:25 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be6db679ed6f57944373b0f142bc6f6ce19e7c529bc9e0a59e3b0848cb1cb195/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 09:35:25 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be6db679ed6f57944373b0f142bc6f6ce19e7c529bc9e0a59e3b0848cb1cb195/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 09:35:25 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be6db679ed6f57944373b0f142bc6f6ce19e7c529bc9e0a59e3b0848cb1cb195/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 09:35:25 np0005464214 podman[266110]: 2025-10-01 13:35:25.259109487 +0000 UTC m=+0.295669050 container init 3febc8671aa504144c6bcfa01337a537f966fac72160192a3f2d60cf8724a1d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_tu, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 09:35:25 np0005464214 podman[266110]: 2025-10-01 13:35:25.271876199 +0000 UTC m=+0.308435762 container start 3febc8671aa504144c6bcfa01337a537f966fac72160192a3f2d60cf8724a1d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_tu, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 09:35:25 np0005464214 podman[266110]: 2025-10-01 13:35:25.282780133 +0000 UTC m=+0.319339696 container attach 3febc8671aa504144c6bcfa01337a537f966fac72160192a3f2d60cf8724a1d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_tu, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct  1 09:35:26 np0005464214 boring_tu[266126]: {
Oct  1 09:35:26 np0005464214 boring_tu[266126]:    "0": [
Oct  1 09:35:26 np0005464214 boring_tu[266126]:        {
Oct  1 09:35:26 np0005464214 boring_tu[266126]:            "devices": [
Oct  1 09:35:26 np0005464214 boring_tu[266126]:                "/dev/loop3"
Oct  1 09:35:26 np0005464214 boring_tu[266126]:            ],
Oct  1 09:35:26 np0005464214 boring_tu[266126]:            "lv_name": "ceph_lv0",
Oct  1 09:35:26 np0005464214 boring_tu[266126]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  1 09:35:26 np0005464214 boring_tu[266126]:            "lv_size": "21470642176",
Oct  1 09:35:26 np0005464214 boring_tu[266126]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 09:35:26 np0005464214 boring_tu[266126]:            "lv_uuid": "wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS",
Oct  1 09:35:26 np0005464214 boring_tu[266126]:            "name": "ceph_lv0",
Oct  1 09:35:26 np0005464214 boring_tu[266126]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  1 09:35:26 np0005464214 boring_tu[266126]:            "tags": {
Oct  1 09:35:26 np0005464214 boring_tu[266126]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  1 09:35:26 np0005464214 boring_tu[266126]:                "ceph.block_uuid": "wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS",
Oct  1 09:35:26 np0005464214 boring_tu[266126]:                "ceph.cephx_lockbox_secret": "",
Oct  1 09:35:26 np0005464214 boring_tu[266126]:                "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 09:35:26 np0005464214 boring_tu[266126]:                "ceph.cluster_name": "ceph",
Oct  1 09:35:26 np0005464214 boring_tu[266126]:                "ceph.crush_device_class": "",
Oct  1 09:35:26 np0005464214 boring_tu[266126]:                "ceph.encrypted": "0",
Oct  1 09:35:26 np0005464214 boring_tu[266126]:                "ceph.osd_fsid": "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982",
Oct  1 09:35:26 np0005464214 boring_tu[266126]:                "ceph.osd_id": "0",
Oct  1 09:35:26 np0005464214 boring_tu[266126]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 09:35:26 np0005464214 boring_tu[266126]:                "ceph.type": "block",
Oct  1 09:35:26 np0005464214 boring_tu[266126]:                "ceph.vdo": "0"
Oct  1 09:35:26 np0005464214 boring_tu[266126]:            },
Oct  1 09:35:26 np0005464214 boring_tu[266126]:            "type": "block",
Oct  1 09:35:26 np0005464214 boring_tu[266126]:            "vg_name": "ceph_vg0"
Oct  1 09:35:26 np0005464214 boring_tu[266126]:        }
Oct  1 09:35:26 np0005464214 boring_tu[266126]:    ],
Oct  1 09:35:26 np0005464214 boring_tu[266126]:    "1": [
Oct  1 09:35:26 np0005464214 boring_tu[266126]:        {
Oct  1 09:35:26 np0005464214 boring_tu[266126]:            "devices": [
Oct  1 09:35:26 np0005464214 boring_tu[266126]:                "/dev/loop4"
Oct  1 09:35:26 np0005464214 boring_tu[266126]:            ],
Oct  1 09:35:26 np0005464214 boring_tu[266126]:            "lv_name": "ceph_lv1",
Oct  1 09:35:26 np0005464214 boring_tu[266126]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  1 09:35:26 np0005464214 boring_tu[266126]:            "lv_size": "21470642176",
Oct  1 09:35:26 np0005464214 boring_tu[266126]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f5852bc7-e830-489a-b8a9-42dfbbe71426,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 09:35:26 np0005464214 boring_tu[266126]:            "lv_uuid": "BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY",
Oct  1 09:35:26 np0005464214 boring_tu[266126]:            "name": "ceph_lv1",
Oct  1 09:35:26 np0005464214 boring_tu[266126]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  1 09:35:26 np0005464214 boring_tu[266126]:            "tags": {
Oct  1 09:35:26 np0005464214 boring_tu[266126]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  1 09:35:26 np0005464214 boring_tu[266126]:                "ceph.block_uuid": "BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY",
Oct  1 09:35:26 np0005464214 boring_tu[266126]:                "ceph.cephx_lockbox_secret": "",
Oct  1 09:35:26 np0005464214 boring_tu[266126]:                "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 09:35:26 np0005464214 boring_tu[266126]:                "ceph.cluster_name": "ceph",
Oct  1 09:35:26 np0005464214 boring_tu[266126]:                "ceph.crush_device_class": "",
Oct  1 09:35:26 np0005464214 boring_tu[266126]:                "ceph.encrypted": "0",
Oct  1 09:35:26 np0005464214 boring_tu[266126]:                "ceph.osd_fsid": "f5852bc7-e830-489a-b8a9-42dfbbe71426",
Oct  1 09:35:26 np0005464214 boring_tu[266126]:                "ceph.osd_id": "1",
Oct  1 09:35:26 np0005464214 boring_tu[266126]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 09:35:26 np0005464214 boring_tu[266126]:                "ceph.type": "block",
Oct  1 09:35:26 np0005464214 boring_tu[266126]:                "ceph.vdo": "0"
Oct  1 09:35:26 np0005464214 boring_tu[266126]:            },
Oct  1 09:35:26 np0005464214 boring_tu[266126]:            "type": "block",
Oct  1 09:35:26 np0005464214 boring_tu[266126]:            "vg_name": "ceph_vg1"
Oct  1 09:35:26 np0005464214 boring_tu[266126]:        }
Oct  1 09:35:26 np0005464214 boring_tu[266126]:    ],
Oct  1 09:35:26 np0005464214 boring_tu[266126]:    "2": [
Oct  1 09:35:26 np0005464214 boring_tu[266126]:        {
Oct  1 09:35:26 np0005464214 boring_tu[266126]:            "devices": [
Oct  1 09:35:26 np0005464214 boring_tu[266126]:                "/dev/loop5"
Oct  1 09:35:26 np0005464214 boring_tu[266126]:            ],
Oct  1 09:35:26 np0005464214 boring_tu[266126]:            "lv_name": "ceph_lv2",
Oct  1 09:35:26 np0005464214 boring_tu[266126]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  1 09:35:26 np0005464214 boring_tu[266126]:            "lv_size": "21470642176",
Oct  1 09:35:26 np0005464214 boring_tu[266126]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c4c937e2-a8a8-47c3-af37-fdedb6fff1f9,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 09:35:26 np0005464214 boring_tu[266126]:            "lv_uuid": "mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK",
Oct  1 09:35:26 np0005464214 boring_tu[266126]:            "name": "ceph_lv2",
Oct  1 09:35:26 np0005464214 boring_tu[266126]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  1 09:35:26 np0005464214 boring_tu[266126]:            "tags": {
Oct  1 09:35:26 np0005464214 boring_tu[266126]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  1 09:35:26 np0005464214 boring_tu[266126]:                "ceph.block_uuid": "mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK",
Oct  1 09:35:26 np0005464214 boring_tu[266126]:                "ceph.cephx_lockbox_secret": "",
Oct  1 09:35:26 np0005464214 boring_tu[266126]:                "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 09:35:26 np0005464214 boring_tu[266126]:                "ceph.cluster_name": "ceph",
Oct  1 09:35:26 np0005464214 boring_tu[266126]:                "ceph.crush_device_class": "",
Oct  1 09:35:26 np0005464214 boring_tu[266126]:                "ceph.encrypted": "0",
Oct  1 09:35:26 np0005464214 boring_tu[266126]:                "ceph.osd_fsid": "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9",
Oct  1 09:35:26 np0005464214 boring_tu[266126]:                "ceph.osd_id": "2",
Oct  1 09:35:26 np0005464214 boring_tu[266126]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 09:35:26 np0005464214 boring_tu[266126]:                "ceph.type": "block",
Oct  1 09:35:26 np0005464214 boring_tu[266126]:                "ceph.vdo": "0"
Oct  1 09:35:26 np0005464214 boring_tu[266126]:            },
Oct  1 09:35:26 np0005464214 boring_tu[266126]:            "type": "block",
Oct  1 09:35:26 np0005464214 boring_tu[266126]:            "vg_name": "ceph_vg2"
Oct  1 09:35:26 np0005464214 boring_tu[266126]:        }
Oct  1 09:35:26 np0005464214 boring_tu[266126]:    ]
Oct  1 09:35:26 np0005464214 boring_tu[266126]: }
Oct  1 09:35:26 np0005464214 systemd[1]: libpod-3febc8671aa504144c6bcfa01337a537f966fac72160192a3f2d60cf8724a1d9.scope: Deactivated successfully.
Oct  1 09:35:26 np0005464214 conmon[266126]: conmon 3febc8671aa504144c6b <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-3febc8671aa504144c6bcfa01337a537f966fac72160192a3f2d60cf8724a1d9.scope/container/memory.events
Oct  1 09:35:26 np0005464214 podman[266110]: 2025-10-01 13:35:26.109999992 +0000 UTC m=+1.146559555 container died 3febc8671aa504144c6bcfa01337a537f966fac72160192a3f2d60cf8724a1d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_tu, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct  1 09:35:26 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v924: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:35:26 np0005464214 systemd[1]: var-lib-containers-storage-overlay-be6db679ed6f57944373b0f142bc6f6ce19e7c529bc9e0a59e3b0848cb1cb195-merged.mount: Deactivated successfully.
Oct  1 09:35:26 np0005464214 podman[266110]: 2025-10-01 13:35:26.723338913 +0000 UTC m=+1.759898476 container remove 3febc8671aa504144c6bcfa01337a537f966fac72160192a3f2d60cf8724a1d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_tu, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 09:35:26 np0005464214 systemd[1]: libpod-conmon-3febc8671aa504144c6bcfa01337a537f966fac72160192a3f2d60cf8724a1d9.scope: Deactivated successfully.
Oct  1 09:35:26 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:35:27 np0005464214 podman[266286]: 2025-10-01 13:35:27.506583794 +0000 UTC m=+0.082562276 container create 73ca0b524f1d7b1ee32ef051ea4e9fd516d66495b957e26b4b0547706f2913ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_jackson, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 09:35:27 np0005464214 podman[266286]: 2025-10-01 13:35:27.460623653 +0000 UTC m=+0.036602135 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 09:35:27 np0005464214 systemd[1]: Started libpod-conmon-73ca0b524f1d7b1ee32ef051ea4e9fd516d66495b957e26b4b0547706f2913ae.scope.
Oct  1 09:35:27 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:35:27 np0005464214 podman[266286]: 2025-10-01 13:35:27.863079852 +0000 UTC m=+0.439058354 container init 73ca0b524f1d7b1ee32ef051ea4e9fd516d66495b957e26b4b0547706f2913ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_jackson, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Oct  1 09:35:27 np0005464214 podman[266286]: 2025-10-01 13:35:27.875918646 +0000 UTC m=+0.451897128 container start 73ca0b524f1d7b1ee32ef051ea4e9fd516d66495b957e26b4b0547706f2913ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_jackson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default)
Oct  1 09:35:27 np0005464214 bold_jackson[266302]: 167 167
Oct  1 09:35:27 np0005464214 systemd[1]: libpod-73ca0b524f1d7b1ee32ef051ea4e9fd516d66495b957e26b4b0547706f2913ae.scope: Deactivated successfully.
Oct  1 09:35:28 np0005464214 podman[266286]: 2025-10-01 13:35:28.031945369 +0000 UTC m=+0.607923921 container attach 73ca0b524f1d7b1ee32ef051ea4e9fd516d66495b957e26b4b0547706f2913ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_jackson, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 09:35:28 np0005464214 podman[266286]: 2025-10-01 13:35:28.032489246 +0000 UTC m=+0.608467738 container died 73ca0b524f1d7b1ee32ef051ea4e9fd516d66495b957e26b4b0547706f2913ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_jackson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct  1 09:35:28 np0005464214 systemd[1]: var-lib-containers-storage-overlay-29790dde0353d70103c2625bedddc13f334e6a6d153c25a5a6652a15fbbaa8cf-merged.mount: Deactivated successfully.
Oct  1 09:35:28 np0005464214 podman[266286]: 2025-10-01 13:35:28.423970067 +0000 UTC m=+0.999948529 container remove 73ca0b524f1d7b1ee32ef051ea4e9fd516d66495b957e26b4b0547706f2913ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_jackson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 09:35:28 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v925: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:35:28 np0005464214 systemd[1]: libpod-conmon-73ca0b524f1d7b1ee32ef051ea4e9fd516d66495b957e26b4b0547706f2913ae.scope: Deactivated successfully.
Oct  1 09:35:28 np0005464214 podman[266329]: 2025-10-01 13:35:28.662976288 +0000 UTC m=+0.112309344 container create 1ead294e0b6959e85c2fd40653447b0afcbca5e64336ec4db0a8a8bff33b2335 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_napier, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Oct  1 09:35:28 np0005464214 podman[266329]: 2025-10-01 13:35:28.578201333 +0000 UTC m=+0.027534489 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 09:35:28 np0005464214 systemd[1]: Started libpod-conmon-1ead294e0b6959e85c2fd40653447b0afcbca5e64336ec4db0a8a8bff33b2335.scope.
Oct  1 09:35:28 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:35:28 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/664d89bdba65ef095c17334ce621f238b973e87aed5e077ad595778650cb121e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 09:35:28 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/664d89bdba65ef095c17334ce621f238b973e87aed5e077ad595778650cb121e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 09:35:28 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/664d89bdba65ef095c17334ce621f238b973e87aed5e077ad595778650cb121e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 09:35:28 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/664d89bdba65ef095c17334ce621f238b973e87aed5e077ad595778650cb121e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 09:35:28 np0005464214 podman[266329]: 2025-10-01 13:35:28.906118149 +0000 UTC m=+0.355451285 container init 1ead294e0b6959e85c2fd40653447b0afcbca5e64336ec4db0a8a8bff33b2335 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_napier, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 09:35:28 np0005464214 podman[266329]: 2025-10-01 13:35:28.913650286 +0000 UTC m=+0.362983372 container start 1ead294e0b6959e85c2fd40653447b0afcbca5e64336ec4db0a8a8bff33b2335 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_napier, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Oct  1 09:35:29 np0005464214 podman[266329]: 2025-10-01 13:35:29.004329238 +0000 UTC m=+0.453662324 container attach 1ead294e0b6959e85c2fd40653447b0afcbca5e64336ec4db0a8a8bff33b2335 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_napier, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 09:35:30 np0005464214 vibrant_napier[266348]: {
Oct  1 09:35:30 np0005464214 vibrant_napier[266348]:    "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982": {
Oct  1 09:35:30 np0005464214 vibrant_napier[266348]:        "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 09:35:30 np0005464214 vibrant_napier[266348]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  1 09:35:30 np0005464214 vibrant_napier[266348]:        "osd_id": 0,
Oct  1 09:35:30 np0005464214 vibrant_napier[266348]:        "osd_uuid": "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982",
Oct  1 09:35:30 np0005464214 vibrant_napier[266348]:        "type": "bluestore"
Oct  1 09:35:30 np0005464214 vibrant_napier[266348]:    },
Oct  1 09:35:30 np0005464214 vibrant_napier[266348]:    "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9": {
Oct  1 09:35:30 np0005464214 vibrant_napier[266348]:        "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 09:35:30 np0005464214 vibrant_napier[266348]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  1 09:35:30 np0005464214 vibrant_napier[266348]:        "osd_id": 2,
Oct  1 09:35:30 np0005464214 vibrant_napier[266348]:        "osd_uuid": "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9",
Oct  1 09:35:30 np0005464214 vibrant_napier[266348]:        "type": "bluestore"
Oct  1 09:35:30 np0005464214 vibrant_napier[266348]:    },
Oct  1 09:35:30 np0005464214 vibrant_napier[266348]:    "f5852bc7-e830-489a-b8a9-42dfbbe71426": {
Oct  1 09:35:30 np0005464214 vibrant_napier[266348]:        "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 09:35:30 np0005464214 vibrant_napier[266348]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  1 09:35:30 np0005464214 vibrant_napier[266348]:        "osd_id": 1,
Oct  1 09:35:30 np0005464214 vibrant_napier[266348]:        "osd_uuid": "f5852bc7-e830-489a-b8a9-42dfbbe71426",
Oct  1 09:35:30 np0005464214 vibrant_napier[266348]:        "type": "bluestore"
Oct  1 09:35:30 np0005464214 vibrant_napier[266348]:    }
Oct  1 09:35:30 np0005464214 vibrant_napier[266348]: }
Oct  1 09:35:30 np0005464214 systemd[1]: libpod-1ead294e0b6959e85c2fd40653447b0afcbca5e64336ec4db0a8a8bff33b2335.scope: Deactivated successfully.
Oct  1 09:35:30 np0005464214 podman[266329]: 2025-10-01 13:35:30.12421571 +0000 UTC m=+1.573548766 container died 1ead294e0b6959e85c2fd40653447b0afcbca5e64336ec4db0a8a8bff33b2335 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_napier, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Oct  1 09:35:30 np0005464214 systemd[1]: libpod-1ead294e0b6959e85c2fd40653447b0afcbca5e64336ec4db0a8a8bff33b2335.scope: Consumed 1.219s CPU time.
Oct  1 09:35:30 np0005464214 systemd[1]: var-lib-containers-storage-overlay-664d89bdba65ef095c17334ce621f238b973e87aed5e077ad595778650cb121e-merged.mount: Deactivated successfully.
Oct  1 09:35:30 np0005464214 podman[266329]: 2025-10-01 13:35:30.225534597 +0000 UTC m=+1.674867673 container remove 1ead294e0b6959e85c2fd40653447b0afcbca5e64336ec4db0a8a8bff33b2335 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_napier, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct  1 09:35:30 np0005464214 systemd[1]: libpod-conmon-1ead294e0b6959e85c2fd40653447b0afcbca5e64336ec4db0a8a8bff33b2335.scope: Deactivated successfully.
Oct  1 09:35:30 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  1 09:35:30 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:35:30 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  1 09:35:30 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:35:30 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev 7de5d384-2cca-4f44-bc56-4b54e793cf6b does not exist
Oct  1 09:35:30 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev 527c486e-b027-45c1-8c49-35d93bee7af1 does not exist
Oct  1 09:35:30 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v926: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:35:31 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:35:31 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:35:31 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:35:32 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v927: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:35:34 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v928: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:35:36 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v929: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:35:36 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:35:38 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v930: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:35:38 np0005464214 podman[266450]: 2025-10-01 13:35:38.549173806 +0000 UTC m=+0.083962730 container health_status dfa509645741d50db256c392f2c7e50ef8a1653f296eb853aefbe3d2ba4746c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250923)
Oct  1 09:35:38 np0005464214 podman[266449]: 2025-10-01 13:35:38.578295986 +0000 UTC m=+0.114094811 container health_status c13c85560319b246b9406aed1b6b3015b2c424d3ffff37d0301b6634f5976b0d (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=36bccb96575468ec919301205d8daa2c, org.label-schema.build-date=20250923, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, container_name=iscsid, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct  1 09:35:38 np0005464214 podman[266448]: 2025-10-01 13:35:38.57906823 +0000 UTC m=+0.115668870 container health_status a1dc94033b3cdaf0c1939938a446bc6911aa0e2bf0fa0c22c3e618390e8d96e1 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250923, tcib_build_tag=36bccb96575468ec919301205d8daa2c)
Oct  1 09:35:38 np0005464214 podman[266447]: 2025-10-01 13:35:38.590581784 +0000 UTC m=+0.128559597 container health_status 583cde33fa111e3f81ff9a7adf51b7bee43cd123d54d60383b2d474b3b5066ad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c, io.buildah.version=1.41.3, org.label-schema.build-date=20250923, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, tcib_managed=true)
Oct  1 09:35:40 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v931: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:35:41 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:35:42 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v932: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:35:44 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v933: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:35:46 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v934: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:35:46 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:35:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] Optimize plan auto_2025-10-01_13:35:47
Oct  1 09:35:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  1 09:35:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] do_upmap
Oct  1 09:35:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] pools ['volumes', 'default.rgw.log', 'cephfs.cephfs.data', 'default.rgw.control', 'default.rgw.meta', 'cephfs.cephfs.meta', 'backups', '.mgr', 'vms', 'images', '.rgw.root']
Oct  1 09:35:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] prepared 0/10 changes
Oct  1 09:35:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 09:35:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 09:35:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 09:35:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 09:35:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 09:35:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 09:35:47 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  1 09:35:47 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  1 09:35:47 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  1 09:35:47 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  1 09:35:47 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  1 09:35:47 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  1 09:35:47 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  1 09:35:47 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  1 09:35:47 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  1 09:35:47 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  1 09:35:48 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v935: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:35:50 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v936: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:35:51 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:35:52 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v937: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:35:54 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v938: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:35:55 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  1 09:35:55 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1070204012' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  1 09:35:55 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  1 09:35:55 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1070204012' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  1 09:35:56 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v939: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:35:56 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:35:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] _maybe_adjust
Oct  1 09:35:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:35:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  1 09:35:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:35:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 09:35:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:35:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 09:35:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:35:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 09:35:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:35:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 09:35:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:35:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct  1 09:35:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:35:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 09:35:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:35:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  1 09:35:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:35:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  1 09:35:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:35:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 09:35:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:35:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  1 09:35:58 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v940: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:35:58 np0005464214 nova_compute[260022]: 2025-10-01 13:35:58.568 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 09:35:58 np0005464214 nova_compute[260022]: 2025-10-01 13:35:58.569 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 09:35:58 np0005464214 nova_compute[260022]: 2025-10-01 13:35:58.569 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  1 09:35:58 np0005464214 nova_compute[260022]: 2025-10-01 13:35:58.569 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  1 09:35:58 np0005464214 nova_compute[260022]: 2025-10-01 13:35:58.588 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  1 09:35:58 np0005464214 nova_compute[260022]: 2025-10-01 13:35:58.588 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 09:35:58 np0005464214 nova_compute[260022]: 2025-10-01 13:35:58.589 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 09:35:58 np0005464214 nova_compute[260022]: 2025-10-01 13:35:58.589 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 09:35:58 np0005464214 nova_compute[260022]: 2025-10-01 13:35:58.590 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 09:35:58 np0005464214 nova_compute[260022]: 2025-10-01 13:35:58.590 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  1 09:35:59 np0005464214 nova_compute[260022]: 2025-10-01 13:35:59.344 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 09:35:59 np0005464214 nova_compute[260022]: 2025-10-01 13:35:59.375 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 09:35:59 np0005464214 nova_compute[260022]: 2025-10-01 13:35:59.375 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 09:35:59 np0005464214 nova_compute[260022]: 2025-10-01 13:35:59.376 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 09:35:59 np0005464214 nova_compute[260022]: 2025-10-01 13:35:59.376 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  1 09:35:59 np0005464214 nova_compute[260022]: 2025-10-01 13:35:59.377 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 09:35:59 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  1 09:35:59 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2952629815' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  1 09:35:59 np0005464214 nova_compute[260022]: 2025-10-01 13:35:59.915 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.538s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 09:36:00 np0005464214 nova_compute[260022]: 2025-10-01 13:36:00.194 2 WARNING nova.virt.libvirt.driver [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  1 09:36:00 np0005464214 nova_compute[260022]: 2025-10-01 13:36:00.196 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5188MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  1 09:36:00 np0005464214 nova_compute[260022]: 2025-10-01 13:36:00.197 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 09:36:00 np0005464214 nova_compute[260022]: 2025-10-01 13:36:00.197 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 09:36:00 np0005464214 nova_compute[260022]: 2025-10-01 13:36:00.251 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  1 09:36:00 np0005464214 nova_compute[260022]: 2025-10-01 13:36:00.252 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  1 09:36:00 np0005464214 nova_compute[260022]: 2025-10-01 13:36:00.269 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 09:36:00 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v941: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:36:00 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  1 09:36:00 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1025770361' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  1 09:36:00 np0005464214 nova_compute[260022]: 2025-10-01 13:36:00.908 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.639s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 09:36:00 np0005464214 nova_compute[260022]: 2025-10-01 13:36:00.917 2 DEBUG nova.compute.provider_tree [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Inventory has not changed in ProviderTree for provider: c1b9017d-7e6f-44ea-9ee2-bc19313d736f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  1 09:36:00 np0005464214 nova_compute[260022]: 2025-10-01 13:36:00.933 2 DEBUG nova.scheduler.client.report [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Inventory has not changed for provider c1b9017d-7e6f-44ea-9ee2-bc19313d736f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  1 09:36:00 np0005464214 nova_compute[260022]: 2025-10-01 13:36:00.935 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  1 09:36:00 np0005464214 nova_compute[260022]: 2025-10-01 13:36:00.936 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.739s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 09:36:01 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:36:01 np0005464214 nova_compute[260022]: 2025-10-01 13:36:01.938 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 09:36:01 np0005464214 nova_compute[260022]: 2025-10-01 13:36:01.938 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 09:36:02 np0005464214 nova_compute[260022]: 2025-10-01 13:36:02.341 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 09:36:02 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v942: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:36:04 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v943: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:36:06 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v944: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:36:07 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:36:08 np0005464214 ceph-mon[74802]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #45. Immutable memtables: 0.
Oct  1 09:36:08 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:36:08.243299) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  1 09:36:08 np0005464214 ceph-mon[74802]: rocksdb: [db/flush_job.cc:856] [default] [JOB 21] Flushing memtable with next log file: 45
Oct  1 09:36:08 np0005464214 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759325768243344, "job": 21, "event": "flush_started", "num_memtables": 1, "num_entries": 992, "num_deletes": 262, "total_data_size": 1407299, "memory_usage": 1437072, "flush_reason": "Manual Compaction"}
Oct  1 09:36:08 np0005464214 ceph-mon[74802]: rocksdb: [db/flush_job.cc:885] [default] [JOB 21] Level-0 flush table #46: started
Oct  1 09:36:08 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v945: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:36:08 np0005464214 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759325768618485, "cf_name": "default", "job": 21, "event": "table_file_creation", "file_number": 46, "file_size": 1394562, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 18734, "largest_seqno": 19725, "table_properties": {"data_size": 1389678, "index_size": 2408, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1413, "raw_key_size": 10069, "raw_average_key_size": 18, "raw_value_size": 1379870, "raw_average_value_size": 2541, "num_data_blocks": 110, "num_entries": 543, "num_filter_entries": 543, "num_deletions": 262, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759325672, "oldest_key_time": 1759325672, "file_creation_time": 1759325768, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e1fb0bd7-c6bf-40f2-8bfb-ffd039289f42", "db_session_id": "NJZTWL88H5HSB4Q4NEC9", "orig_file_number": 46, "seqno_to_time_mapping": "N/A"}}
Oct  1 09:36:08 np0005464214 ceph-mon[74802]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 21] Flush lasted 375291 microseconds, and 4422 cpu microseconds.
Oct  1 09:36:08 np0005464214 ceph-mon[74802]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  1 09:36:08 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:36:08.618586) [db/flush_job.cc:967] [default] [JOB 21] Level-0 flush table #46: 1394562 bytes OK
Oct  1 09:36:08 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:36:08.618612) [db/memtable_list.cc:519] [default] Level-0 commit table #46 started
Oct  1 09:36:08 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:36:08.836246) [db/memtable_list.cc:722] [default] Level-0 commit table #46: memtable #1 done
Oct  1 09:36:08 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:36:08.836307) EVENT_LOG_v1 {"time_micros": 1759325768836293, "job": 21, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  1 09:36:08 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:36:08.836337) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  1 09:36:08 np0005464214 ceph-mon[74802]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 21] Try to delete WAL files size 1402541, prev total WAL file size 1402541, number of live WAL files 2.
Oct  1 09:36:08 np0005464214 ceph-mon[74802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000042.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  1 09:36:08 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:36:08.837136) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D00323531' seq:72057594037927935, type:22 .. '6C6F676D00353039' seq:0, type:0; will stop at (end)
Oct  1 09:36:08 np0005464214 ceph-mon[74802]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 22] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  1 09:36:08 np0005464214 ceph-mon[74802]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 21 Base level 0, inputs: [46(1361KB)], [44(6118KB)]
Oct  1 09:36:08 np0005464214 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759325768837211, "job": 22, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [46], "files_L6": [44], "score": -1, "input_data_size": 7660238, "oldest_snapshot_seqno": -1}
Oct  1 09:36:08 np0005464214 podman[266583]: 2025-10-01 13:36:08.849908086 +0000 UTC m=+0.084453376 container health_status c13c85560319b246b9406aed1b6b3015b2c424d3ffff37d0301b6634f5976b0d (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20250923, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=iscsid, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3)
Oct  1 09:36:08 np0005464214 podman[266584]: 2025-10-01 13:36:08.856825444 +0000 UTC m=+0.081237694 container health_status dfa509645741d50db256c392f2c7e50ef8a1653f296eb853aefbe3d2ba4746c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=36bccb96575468ec919301205d8daa2c, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250923)
Oct  1 09:36:08 np0005464214 podman[266582]: 2025-10-01 13:36:08.862265936 +0000 UTC m=+0.099123069 container health_status a1dc94033b3cdaf0c1939938a446bc6911aa0e2bf0fa0c22c3e618390e8d96e1 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20250923)
Oct  1 09:36:08 np0005464214 podman[266581]: 2025-10-01 13:36:08.884494167 +0000 UTC m=+0.126368688 container health_status 583cde33fa111e3f81ff9a7adf51b7bee43cd123d54d60383b2d474b3b5066ad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, org.label-schema.build-date=20250923, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c)
Oct  1 09:36:09 np0005464214 ceph-mon[74802]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 22] Generated table #47: 4210 keys, 7524904 bytes, temperature: kUnknown
Oct  1 09:36:09 np0005464214 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759325769548244, "cf_name": "default", "job": 22, "event": "table_file_creation", "file_number": 47, "file_size": 7524904, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7495469, "index_size": 17805, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10565, "raw_key_size": 104089, "raw_average_key_size": 24, "raw_value_size": 7417955, "raw_average_value_size": 1761, "num_data_blocks": 747, "num_entries": 4210, "num_filter_entries": 4210, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759324077, "oldest_key_time": 0, "file_creation_time": 1759325768, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e1fb0bd7-c6bf-40f2-8bfb-ffd039289f42", "db_session_id": "NJZTWL88H5HSB4Q4NEC9", "orig_file_number": 47, "seqno_to_time_mapping": "N/A"}}
Oct  1 09:36:09 np0005464214 ceph-mon[74802]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  1 09:36:09 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:36:09.548609) [db/compaction/compaction_job.cc:1663] [default] [JOB 22] Compacted 1@0 + 1@6 files to L6 => 7524904 bytes
Oct  1 09:36:09 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:36:09.881317) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 10.8 rd, 10.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.3, 6.0 +0.0 blob) out(7.2 +0.0 blob), read-write-amplify(10.9) write-amplify(5.4) OK, records in: 4746, records dropped: 536 output_compression: NoCompression
Oct  1 09:36:09 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:36:09.881377) EVENT_LOG_v1 {"time_micros": 1759325769881354, "job": 22, "event": "compaction_finished", "compaction_time_micros": 711141, "compaction_time_cpu_micros": 23147, "output_level": 6, "num_output_files": 1, "total_output_size": 7524904, "num_input_records": 4746, "num_output_records": 4210, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  1 09:36:09 np0005464214 ceph-mon[74802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000046.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  1 09:36:09 np0005464214 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759325769882079, "job": 22, "event": "table_file_deletion", "file_number": 46}
Oct  1 09:36:09 np0005464214 ceph-mon[74802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000044.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  1 09:36:09 np0005464214 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759325769884413, "job": 22, "event": "table_file_deletion", "file_number": 44}
Oct  1 09:36:09 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:36:08.837046) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 09:36:09 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:36:09.884600) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 09:36:09 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:36:09.884611) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 09:36:09 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:36:09.884614) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 09:36:09 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:36:09.884617) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 09:36:09 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:36:09.884620) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 09:36:10 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v946: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:36:12 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:36:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:36:12.302 161890 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 09:36:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:36:12.302 161890 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 09:36:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:36:12.303 161890 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 09:36:12 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v947: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:36:14 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v948: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:36:16 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v949: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:36:17 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:36:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 09:36:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 09:36:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 09:36:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 09:36:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 09:36:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 09:36:18 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v950: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:36:20 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v951: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:36:22 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:36:22 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v952: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:36:24 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v953: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:36:26 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v954: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:36:27 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:36:28 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v955: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:36:30 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v956: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:36:31 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  1 09:36:31 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  1 09:36:31 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  1 09:36:31 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  1 09:36:31 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  1 09:36:31 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:36:31 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev f47f3733-b67f-4acc-a25c-680085b564f4 does not exist
Oct  1 09:36:31 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev 69b6fb71-d0da-46e0-9b2f-f878cabac4a2 does not exist
Oct  1 09:36:31 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev 8b8be85b-740c-4af0-8001-a76f36c0c18c does not exist
Oct  1 09:36:31 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  1 09:36:31 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  1 09:36:31 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  1 09:36:31 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  1 09:36:31 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  1 09:36:31 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  1 09:36:32 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  1 09:36:32 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:36:32 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  1 09:36:32 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:36:32 np0005464214 podman[266937]: 2025-10-01 13:36:32.368725678 +0000 UTC m=+0.042679787 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 09:36:32 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v957: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:36:32 np0005464214 podman[266937]: 2025-10-01 13:36:32.550948117 +0000 UTC m=+0.224902186 container create 667e9011cfa518ada29f9f0f23046f76d05e52b8a0ec989029a278d081c0f235 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_khorana, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 09:36:32 np0005464214 systemd[1]: Started libpod-conmon-667e9011cfa518ada29f9f0f23046f76d05e52b8a0ec989029a278d081c0f235.scope.
Oct  1 09:36:32 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:36:32 np0005464214 podman[266937]: 2025-10-01 13:36:32.973974643 +0000 UTC m=+0.647928762 container init 667e9011cfa518ada29f9f0f23046f76d05e52b8a0ec989029a278d081c0f235 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_khorana, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Oct  1 09:36:32 np0005464214 podman[266937]: 2025-10-01 13:36:32.984272508 +0000 UTC m=+0.658226527 container start 667e9011cfa518ada29f9f0f23046f76d05e52b8a0ec989029a278d081c0f235 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_khorana, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 09:36:32 np0005464214 adoring_khorana[266953]: 167 167
Oct  1 09:36:32 np0005464214 systemd[1]: libpod-667e9011cfa518ada29f9f0f23046f76d05e52b8a0ec989029a278d081c0f235.scope: Deactivated successfully.
Oct  1 09:36:32 np0005464214 conmon[266953]: conmon 667e9011cfa518ada29f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-667e9011cfa518ada29f9f0f23046f76d05e52b8a0ec989029a278d081c0f235.scope/container/memory.events
Oct  1 09:36:33 np0005464214 podman[266937]: 2025-10-01 13:36:33.096286332 +0000 UTC m=+0.770240391 container attach 667e9011cfa518ada29f9f0f23046f76d05e52b8a0ec989029a278d081c0f235 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_khorana, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct  1 09:36:33 np0005464214 podman[266937]: 2025-10-01 13:36:33.096805918 +0000 UTC m=+0.770759957 container died 667e9011cfa518ada29f9f0f23046f76d05e52b8a0ec989029a278d081c0f235 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_khorana, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct  1 09:36:33 np0005464214 systemd[1]: var-lib-containers-storage-overlay-40925f290af5545a0c6d599d2271ce290ea53ddcc7b4bb0bc8e6bf585d4268fd-merged.mount: Deactivated successfully.
Oct  1 09:36:34 np0005464214 podman[266937]: 2025-10-01 13:36:34.008481922 +0000 UTC m=+1.682435941 container remove 667e9011cfa518ada29f9f0f23046f76d05e52b8a0ec989029a278d081c0f235 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_khorana, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 09:36:34 np0005464214 systemd[1]: libpod-conmon-667e9011cfa518ada29f9f0f23046f76d05e52b8a0ec989029a278d081c0f235.scope: Deactivated successfully.
Oct  1 09:36:34 np0005464214 podman[266978]: 2025-10-01 13:36:34.298422299 +0000 UTC m=+0.126784281 container create 11c7451c053035f0becce87168ee26de21f520c241e13f026499f077342abf0b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_dijkstra, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 09:36:34 np0005464214 podman[266978]: 2025-10-01 13:36:34.216142563 +0000 UTC m=+0.044504555 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 09:36:34 np0005464214 systemd[1]: Started libpod-conmon-11c7451c053035f0becce87168ee26de21f520c241e13f026499f077342abf0b.scope.
Oct  1 09:36:34 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:36:34 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f0955094454050111a6b639cde659ec4e28112bda7cc939328dd89b65e81c83e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 09:36:34 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f0955094454050111a6b639cde659ec4e28112bda7cc939328dd89b65e81c83e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 09:36:34 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f0955094454050111a6b639cde659ec4e28112bda7cc939328dd89b65e81c83e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 09:36:34 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f0955094454050111a6b639cde659ec4e28112bda7cc939328dd89b65e81c83e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 09:36:34 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f0955094454050111a6b639cde659ec4e28112bda7cc939328dd89b65e81c83e/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  1 09:36:34 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v958: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:36:34 np0005464214 podman[266978]: 2025-10-01 13:36:34.533857728 +0000 UTC m=+0.362219690 container init 11c7451c053035f0becce87168ee26de21f520c241e13f026499f077342abf0b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_dijkstra, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct  1 09:36:34 np0005464214 podman[266978]: 2025-10-01 13:36:34.54409161 +0000 UTC m=+0.372453552 container start 11c7451c053035f0becce87168ee26de21f520c241e13f026499f077342abf0b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_dijkstra, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 09:36:34 np0005464214 podman[266978]: 2025-10-01 13:36:34.58560525 +0000 UTC m=+0.413967192 container attach 11c7451c053035f0becce87168ee26de21f520c241e13f026499f077342abf0b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_dijkstra, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 09:36:35 np0005464214 xenodochial_dijkstra[266995]: --> passed data devices: 0 physical, 3 LVM
Oct  1 09:36:35 np0005464214 xenodochial_dijkstra[266995]: --> relative data size: 1.0
Oct  1 09:36:35 np0005464214 xenodochial_dijkstra[266995]: --> All data devices are unavailable
Oct  1 09:36:35 np0005464214 systemd[1]: libpod-11c7451c053035f0becce87168ee26de21f520c241e13f026499f077342abf0b.scope: Deactivated successfully.
Oct  1 09:36:35 np0005464214 systemd[1]: libpod-11c7451c053035f0becce87168ee26de21f520c241e13f026499f077342abf0b.scope: Consumed 1.033s CPU time.
Oct  1 09:36:35 np0005464214 podman[266978]: 2025-10-01 13:36:35.630885749 +0000 UTC m=+1.459247701 container died 11c7451c053035f0becce87168ee26de21f520c241e13f026499f077342abf0b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_dijkstra, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct  1 09:36:35 np0005464214 systemd[1]: var-lib-containers-storage-overlay-f0955094454050111a6b639cde659ec4e28112bda7cc939328dd89b65e81c83e-merged.mount: Deactivated successfully.
Oct  1 09:36:35 np0005464214 podman[266978]: 2025-10-01 13:36:35.728497688 +0000 UTC m=+1.556859630 container remove 11c7451c053035f0becce87168ee26de21f520c241e13f026499f077342abf0b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_dijkstra, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 09:36:35 np0005464214 systemd[1]: libpod-conmon-11c7451c053035f0becce87168ee26de21f520c241e13f026499f077342abf0b.scope: Deactivated successfully.
Oct  1 09:36:36 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v959: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:36:36 np0005464214 podman[267178]: 2025-10-01 13:36:36.498976317 +0000 UTC m=+0.114038899 container create 38c7ec17794071636bedde7f02d16164033e5d149c5b45d980fb80352a3eb0ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_engelbart, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct  1 09:36:36 np0005464214 podman[267178]: 2025-10-01 13:36:36.407746139 +0000 UTC m=+0.022808721 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 09:36:36 np0005464214 systemd[1]: Started libpod-conmon-38c7ec17794071636bedde7f02d16164033e5d149c5b45d980fb80352a3eb0ca.scope.
Oct  1 09:36:36 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:36:36 np0005464214 podman[267178]: 2025-10-01 13:36:36.686994589 +0000 UTC m=+0.302057231 container init 38c7ec17794071636bedde7f02d16164033e5d149c5b45d980fb80352a3eb0ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_engelbart, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 09:36:36 np0005464214 podman[267178]: 2025-10-01 13:36:36.695865949 +0000 UTC m=+0.310928511 container start 38c7ec17794071636bedde7f02d16164033e5d149c5b45d980fb80352a3eb0ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_engelbart, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef)
Oct  1 09:36:36 np0005464214 podman[267178]: 2025-10-01 13:36:36.700196835 +0000 UTC m=+0.315259417 container attach 38c7ec17794071636bedde7f02d16164033e5d149c5b45d980fb80352a3eb0ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_engelbart, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 09:36:36 np0005464214 practical_engelbart[267194]: 167 167
Oct  1 09:36:36 np0005464214 systemd[1]: libpod-38c7ec17794071636bedde7f02d16164033e5d149c5b45d980fb80352a3eb0ca.scope: Deactivated successfully.
Oct  1 09:36:36 np0005464214 podman[267178]: 2025-10-01 13:36:36.70412645 +0000 UTC m=+0.319189012 container died 38c7ec17794071636bedde7f02d16164033e5d149c5b45d980fb80352a3eb0ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_engelbart, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 09:36:36 np0005464214 systemd[1]: var-lib-containers-storage-overlay-edfaf09515f5a85743ce55602a9cbfea8738a929768ed36546cf16039798cef7-merged.mount: Deactivated successfully.
Oct  1 09:36:36 np0005464214 podman[267178]: 2025-10-01 13:36:36.754472248 +0000 UTC m=+0.369534800 container remove 38c7ec17794071636bedde7f02d16164033e5d149c5b45d980fb80352a3eb0ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_engelbart, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 09:36:36 np0005464214 systemd[1]: libpod-conmon-38c7ec17794071636bedde7f02d16164033e5d149c5b45d980fb80352a3eb0ca.scope: Deactivated successfully.
Oct  1 09:36:36 np0005464214 podman[267218]: 2025-10-01 13:36:36.933284049 +0000 UTC m=+0.043085730 container create b9997d8e1e7ff771318a1b89908582097d776fd26d01eed9ee65d9e66e204f6f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_ramanujan, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 09:36:36 np0005464214 systemd[1]: Started libpod-conmon-b9997d8e1e7ff771318a1b89908582097d776fd26d01eed9ee65d9e66e204f6f.scope.
Oct  1 09:36:36 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:36:37 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/705388e6c69862c5cb019ebee5f22408ee04bf4b7c1560cba685992a1638f2a9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 09:36:37 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/705388e6c69862c5cb019ebee5f22408ee04bf4b7c1560cba685992a1638f2a9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 09:36:37 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/705388e6c69862c5cb019ebee5f22408ee04bf4b7c1560cba685992a1638f2a9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 09:36:37 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/705388e6c69862c5cb019ebee5f22408ee04bf4b7c1560cba685992a1638f2a9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 09:36:37 np0005464214 podman[267218]: 2025-10-01 13:36:36.91334159 +0000 UTC m=+0.023143301 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 09:36:37 np0005464214 podman[267218]: 2025-10-01 13:36:37.013531281 +0000 UTC m=+0.123332982 container init b9997d8e1e7ff771318a1b89908582097d776fd26d01eed9ee65d9e66e204f6f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_ramanujan, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 09:36:37 np0005464214 podman[267218]: 2025-10-01 13:36:37.02110794 +0000 UTC m=+0.130909621 container start b9997d8e1e7ff771318a1b89908582097d776fd26d01eed9ee65d9e66e204f6f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_ramanujan, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct  1 09:36:37 np0005464214 podman[267218]: 2025-10-01 13:36:37.025160678 +0000 UTC m=+0.134962349 container attach b9997d8e1e7ff771318a1b89908582097d776fd26d01eed9ee65d9e66e204f6f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_ramanujan, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct  1 09:36:37 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:36:37 np0005464214 affectionate_ramanujan[267234]: {
Oct  1 09:36:37 np0005464214 affectionate_ramanujan[267234]:    "0": [
Oct  1 09:36:37 np0005464214 affectionate_ramanujan[267234]:        {
Oct  1 09:36:37 np0005464214 affectionate_ramanujan[267234]:            "devices": [
Oct  1 09:36:37 np0005464214 affectionate_ramanujan[267234]:                "/dev/loop3"
Oct  1 09:36:37 np0005464214 affectionate_ramanujan[267234]:            ],
Oct  1 09:36:37 np0005464214 affectionate_ramanujan[267234]:            "lv_name": "ceph_lv0",
Oct  1 09:36:37 np0005464214 affectionate_ramanujan[267234]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  1 09:36:37 np0005464214 affectionate_ramanujan[267234]:            "lv_size": "21470642176",
Oct  1 09:36:37 np0005464214 affectionate_ramanujan[267234]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 09:36:37 np0005464214 affectionate_ramanujan[267234]:            "lv_uuid": "wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS",
Oct  1 09:36:37 np0005464214 affectionate_ramanujan[267234]:            "name": "ceph_lv0",
Oct  1 09:36:37 np0005464214 affectionate_ramanujan[267234]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  1 09:36:37 np0005464214 affectionate_ramanujan[267234]:            "tags": {
Oct  1 09:36:37 np0005464214 affectionate_ramanujan[267234]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  1 09:36:37 np0005464214 affectionate_ramanujan[267234]:                "ceph.block_uuid": "wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS",
Oct  1 09:36:37 np0005464214 affectionate_ramanujan[267234]:                "ceph.cephx_lockbox_secret": "",
Oct  1 09:36:37 np0005464214 affectionate_ramanujan[267234]:                "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 09:36:37 np0005464214 affectionate_ramanujan[267234]:                "ceph.cluster_name": "ceph",
Oct  1 09:36:37 np0005464214 affectionate_ramanujan[267234]:                "ceph.crush_device_class": "",
Oct  1 09:36:37 np0005464214 affectionate_ramanujan[267234]:                "ceph.encrypted": "0",
Oct  1 09:36:37 np0005464214 affectionate_ramanujan[267234]:                "ceph.osd_fsid": "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982",
Oct  1 09:36:37 np0005464214 affectionate_ramanujan[267234]:                "ceph.osd_id": "0",
Oct  1 09:36:37 np0005464214 affectionate_ramanujan[267234]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 09:36:37 np0005464214 affectionate_ramanujan[267234]:                "ceph.type": "block",
Oct  1 09:36:37 np0005464214 affectionate_ramanujan[267234]:                "ceph.vdo": "0"
Oct  1 09:36:37 np0005464214 affectionate_ramanujan[267234]:            },
Oct  1 09:36:37 np0005464214 affectionate_ramanujan[267234]:            "type": "block",
Oct  1 09:36:37 np0005464214 affectionate_ramanujan[267234]:            "vg_name": "ceph_vg0"
Oct  1 09:36:37 np0005464214 affectionate_ramanujan[267234]:        }
Oct  1 09:36:37 np0005464214 affectionate_ramanujan[267234]:    ],
Oct  1 09:36:37 np0005464214 affectionate_ramanujan[267234]:    "1": [
Oct  1 09:36:37 np0005464214 affectionate_ramanujan[267234]:        {
Oct  1 09:36:37 np0005464214 affectionate_ramanujan[267234]:            "devices": [
Oct  1 09:36:37 np0005464214 affectionate_ramanujan[267234]:                "/dev/loop4"
Oct  1 09:36:37 np0005464214 affectionate_ramanujan[267234]:            ],
Oct  1 09:36:37 np0005464214 affectionate_ramanujan[267234]:            "lv_name": "ceph_lv1",
Oct  1 09:36:37 np0005464214 affectionate_ramanujan[267234]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  1 09:36:37 np0005464214 affectionate_ramanujan[267234]:            "lv_size": "21470642176",
Oct  1 09:36:37 np0005464214 affectionate_ramanujan[267234]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f5852bc7-e830-489a-b8a9-42dfbbe71426,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 09:36:37 np0005464214 affectionate_ramanujan[267234]:            "lv_uuid": "BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY",
Oct  1 09:36:37 np0005464214 affectionate_ramanujan[267234]:            "name": "ceph_lv1",
Oct  1 09:36:37 np0005464214 affectionate_ramanujan[267234]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  1 09:36:37 np0005464214 affectionate_ramanujan[267234]:            "tags": {
Oct  1 09:36:37 np0005464214 affectionate_ramanujan[267234]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  1 09:36:37 np0005464214 affectionate_ramanujan[267234]:                "ceph.block_uuid": "BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY",
Oct  1 09:36:37 np0005464214 affectionate_ramanujan[267234]:                "ceph.cephx_lockbox_secret": "",
Oct  1 09:36:37 np0005464214 affectionate_ramanujan[267234]:                "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 09:36:37 np0005464214 affectionate_ramanujan[267234]:                "ceph.cluster_name": "ceph",
Oct  1 09:36:37 np0005464214 affectionate_ramanujan[267234]:                "ceph.crush_device_class": "",
Oct  1 09:36:37 np0005464214 affectionate_ramanujan[267234]:                "ceph.encrypted": "0",
Oct  1 09:36:37 np0005464214 affectionate_ramanujan[267234]:                "ceph.osd_fsid": "f5852bc7-e830-489a-b8a9-42dfbbe71426",
Oct  1 09:36:37 np0005464214 affectionate_ramanujan[267234]:                "ceph.osd_id": "1",
Oct  1 09:36:37 np0005464214 affectionate_ramanujan[267234]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 09:36:37 np0005464214 affectionate_ramanujan[267234]:                "ceph.type": "block",
Oct  1 09:36:37 np0005464214 affectionate_ramanujan[267234]:                "ceph.vdo": "0"
Oct  1 09:36:37 np0005464214 affectionate_ramanujan[267234]:            },
Oct  1 09:36:37 np0005464214 affectionate_ramanujan[267234]:            "type": "block",
Oct  1 09:36:37 np0005464214 affectionate_ramanujan[267234]:            "vg_name": "ceph_vg1"
Oct  1 09:36:37 np0005464214 affectionate_ramanujan[267234]:        }
Oct  1 09:36:37 np0005464214 affectionate_ramanujan[267234]:    ],
Oct  1 09:36:37 np0005464214 affectionate_ramanujan[267234]:    "2": [
Oct  1 09:36:37 np0005464214 affectionate_ramanujan[267234]:        {
Oct  1 09:36:37 np0005464214 affectionate_ramanujan[267234]:            "devices": [
Oct  1 09:36:37 np0005464214 affectionate_ramanujan[267234]:                "/dev/loop5"
Oct  1 09:36:37 np0005464214 affectionate_ramanujan[267234]:            ],
Oct  1 09:36:37 np0005464214 affectionate_ramanujan[267234]:            "lv_name": "ceph_lv2",
Oct  1 09:36:37 np0005464214 affectionate_ramanujan[267234]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  1 09:36:37 np0005464214 affectionate_ramanujan[267234]:            "lv_size": "21470642176",
Oct  1 09:36:37 np0005464214 affectionate_ramanujan[267234]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c4c937e2-a8a8-47c3-af37-fdedb6fff1f9,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 09:36:37 np0005464214 affectionate_ramanujan[267234]:            "lv_uuid": "mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK",
Oct  1 09:36:37 np0005464214 affectionate_ramanujan[267234]:            "name": "ceph_lv2",
Oct  1 09:36:37 np0005464214 affectionate_ramanujan[267234]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  1 09:36:37 np0005464214 affectionate_ramanujan[267234]:            "tags": {
Oct  1 09:36:37 np0005464214 affectionate_ramanujan[267234]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  1 09:36:37 np0005464214 affectionate_ramanujan[267234]:                "ceph.block_uuid": "mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK",
Oct  1 09:36:37 np0005464214 affectionate_ramanujan[267234]:                "ceph.cephx_lockbox_secret": "",
Oct  1 09:36:37 np0005464214 affectionate_ramanujan[267234]:                "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 09:36:37 np0005464214 affectionate_ramanujan[267234]:                "ceph.cluster_name": "ceph",
Oct  1 09:36:37 np0005464214 affectionate_ramanujan[267234]:                "ceph.crush_device_class": "",
Oct  1 09:36:37 np0005464214 affectionate_ramanujan[267234]:                "ceph.encrypted": "0",
Oct  1 09:36:37 np0005464214 affectionate_ramanujan[267234]:                "ceph.osd_fsid": "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9",
Oct  1 09:36:37 np0005464214 affectionate_ramanujan[267234]:                "ceph.osd_id": "2",
Oct  1 09:36:37 np0005464214 affectionate_ramanujan[267234]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 09:36:37 np0005464214 affectionate_ramanujan[267234]:                "ceph.type": "block",
Oct  1 09:36:37 np0005464214 affectionate_ramanujan[267234]:                "ceph.vdo": "0"
Oct  1 09:36:37 np0005464214 affectionate_ramanujan[267234]:            },
Oct  1 09:36:37 np0005464214 affectionate_ramanujan[267234]:            "type": "block",
Oct  1 09:36:37 np0005464214 affectionate_ramanujan[267234]:            "vg_name": "ceph_vg2"
Oct  1 09:36:37 np0005464214 affectionate_ramanujan[267234]:        }
Oct  1 09:36:37 np0005464214 affectionate_ramanujan[267234]:    ]
Oct  1 09:36:37 np0005464214 affectionate_ramanujan[267234]: }
Oct  1 09:36:37 np0005464214 systemd[1]: libpod-b9997d8e1e7ff771318a1b89908582097d776fd26d01eed9ee65d9e66e204f6f.scope: Deactivated successfully.
Oct  1 09:36:37 np0005464214 podman[267218]: 2025-10-01 13:36:37.849579229 +0000 UTC m=+0.959380910 container died b9997d8e1e7ff771318a1b89908582097d776fd26d01eed9ee65d9e66e204f6f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_ramanujan, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 09:36:37 np0005464214 systemd[1]: var-lib-containers-storage-overlay-705388e6c69862c5cb019ebee5f22408ee04bf4b7c1560cba685992a1638f2a9-merged.mount: Deactivated successfully.
Oct  1 09:36:37 np0005464214 podman[267218]: 2025-10-01 13:36:37.906162213 +0000 UTC m=+1.015963894 container remove b9997d8e1e7ff771318a1b89908582097d776fd26d01eed9ee65d9e66e204f6f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_ramanujan, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct  1 09:36:37 np0005464214 systemd[1]: libpod-conmon-b9997d8e1e7ff771318a1b89908582097d776fd26d01eed9ee65d9e66e204f6f.scope: Deactivated successfully.
Oct  1 09:36:38 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v960: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:36:38 np0005464214 podman[267396]: 2025-10-01 13:36:38.61003437 +0000 UTC m=+0.104212429 container create 0126ab0b37e3a1b47c3e562c2a5bc53d96030b7ba82b7da5394a0584d5dea7dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_cartwright, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 09:36:38 np0005464214 podman[267396]: 2025-10-01 13:36:38.533414843 +0000 UTC m=+0.027592922 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 09:36:38 np0005464214 systemd[1]: Started libpod-conmon-0126ab0b37e3a1b47c3e562c2a5bc53d96030b7ba82b7da5394a0584d5dea7dd.scope.
Oct  1 09:36:38 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:36:38 np0005464214 podman[267396]: 2025-10-01 13:36:38.715173747 +0000 UTC m=+0.209351826 container init 0126ab0b37e3a1b47c3e562c2a5bc53d96030b7ba82b7da5394a0584d5dea7dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_cartwright, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 09:36:38 np0005464214 podman[267396]: 2025-10-01 13:36:38.72604628 +0000 UTC m=+0.220224339 container start 0126ab0b37e3a1b47c3e562c2a5bc53d96030b7ba82b7da5394a0584d5dea7dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_cartwright, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct  1 09:36:38 np0005464214 happy_cartwright[267412]: 167 167
Oct  1 09:36:38 np0005464214 systemd[1]: libpod-0126ab0b37e3a1b47c3e562c2a5bc53d96030b7ba82b7da5394a0584d5dea7dd.scope: Deactivated successfully.
Oct  1 09:36:38 np0005464214 podman[267396]: 2025-10-01 13:36:38.738338748 +0000 UTC m=+0.232516837 container attach 0126ab0b37e3a1b47c3e562c2a5bc53d96030b7ba82b7da5394a0584d5dea7dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_cartwright, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct  1 09:36:38 np0005464214 podman[267396]: 2025-10-01 13:36:38.738873105 +0000 UTC m=+0.233051164 container died 0126ab0b37e3a1b47c3e562c2a5bc53d96030b7ba82b7da5394a0584d5dea7dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_cartwright, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 09:36:38 np0005464214 systemd[1]: var-lib-containers-storage-overlay-43ed8d3caa1c87e8673e13b8d0aaf3076b1f353626709188a19221466d387bca-merged.mount: Deactivated successfully.
Oct  1 09:36:38 np0005464214 podman[267396]: 2025-10-01 13:36:38.775358556 +0000 UTC m=+0.269536625 container remove 0126ab0b37e3a1b47c3e562c2a5bc53d96030b7ba82b7da5394a0584d5dea7dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_cartwright, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 09:36:38 np0005464214 systemd[1]: libpod-conmon-0126ab0b37e3a1b47c3e562c2a5bc53d96030b7ba82b7da5394a0584d5dea7dd.scope: Deactivated successfully.
Oct  1 09:36:38 np0005464214 podman[267436]: 2025-10-01 13:36:38.938341449 +0000 UTC m=+0.050882987 container create 315f01ebf4da10962ed15bdc3df4d136b6ac5fbc16562bf7e9b6062e6ec5f417 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_hopper, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Oct  1 09:36:38 np0005464214 systemd[1]: Started libpod-conmon-315f01ebf4da10962ed15bdc3df4d136b6ac5fbc16562bf7e9b6062e6ec5f417.scope.
Oct  1 09:36:38 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/219a71a3f7e1376966d0718e48925229f88bea106fc2b56e4b154ace68ecf593/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 09:36:38 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:36:38 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/219a71a3f7e1376966d0718e48925229f88bea106fc2b56e4b154ace68ecf593/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 09:36:38 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/219a71a3f7e1376966d0718e48925229f88bea106fc2b56e4b154ace68ecf593/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 09:36:38 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/219a71a3f7e1376966d0718e48925229f88bea106fc2b56e4b154ace68ecf593/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 09:36:39 np0005464214 podman[267436]: 2025-10-01 13:36:39.005980952 +0000 UTC m=+0.118522490 container init 315f01ebf4da10962ed15bdc3df4d136b6ac5fbc16562bf7e9b6062e6ec5f417 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_hopper, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 09:36:39 np0005464214 podman[267436]: 2025-10-01 13:36:38.912699759 +0000 UTC m=+0.025241347 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 09:36:39 np0005464214 podman[267436]: 2025-10-01 13:36:39.014824571 +0000 UTC m=+0.127366079 container start 315f01ebf4da10962ed15bdc3df4d136b6ac5fbc16562bf7e9b6062e6ec5f417 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_hopper, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Oct  1 09:36:39 np0005464214 podman[267436]: 2025-10-01 13:36:39.018341592 +0000 UTC m=+0.130883100 container attach 315f01ebf4da10962ed15bdc3df4d136b6ac5fbc16562bf7e9b6062e6ec5f417 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_hopper, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct  1 09:36:39 np0005464214 podman[267451]: 2025-10-01 13:36:39.040636165 +0000 UTC m=+0.068780140 container health_status a1dc94033b3cdaf0c1939938a446bc6911aa0e2bf0fa0c22c3e618390e8d96e1 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250923, org.label-schema.license=GPLv2, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct  1 09:36:39 np0005464214 podman[267455]: 2025-10-01 13:36:39.045716546 +0000 UTC m=+0.070153105 container health_status dfa509645741d50db256c392f2c7e50ef8a1653f296eb853aefbe3d2ba4746c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20250923, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Oct  1 09:36:39 np0005464214 podman[267454]: 2025-10-01 13:36:39.103627653 +0000 UTC m=+0.131710656 container health_status c13c85560319b246b9406aed1b6b3015b2c424d3ffff37d0301b6634f5976b0d (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=36bccb96575468ec919301205d8daa2c, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=iscsid, container_name=iscsid, org.label-schema.build-date=20250923)
Oct  1 09:36:39 np0005464214 podman[267450]: 2025-10-01 13:36:39.147571709 +0000 UTC m=+0.170228601 container health_status 583cde33fa111e3f81ff9a7adf51b7bee43cd123d54d60383b2d474b3b5066ad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20250923, tcib_build_tag=36bccb96575468ec919301205d8daa2c, maintainer=OpenStack Kubernetes Operator team)
Oct  1 09:36:39 np0005464214 compassionate_hopper[267456]: {
Oct  1 09:36:39 np0005464214 compassionate_hopper[267456]:    "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982": {
Oct  1 09:36:39 np0005464214 compassionate_hopper[267456]:        "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 09:36:39 np0005464214 compassionate_hopper[267456]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  1 09:36:39 np0005464214 compassionate_hopper[267456]:        "osd_id": 0,
Oct  1 09:36:39 np0005464214 compassionate_hopper[267456]:        "osd_uuid": "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982",
Oct  1 09:36:39 np0005464214 compassionate_hopper[267456]:        "type": "bluestore"
Oct  1 09:36:39 np0005464214 compassionate_hopper[267456]:    },
Oct  1 09:36:39 np0005464214 compassionate_hopper[267456]:    "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9": {
Oct  1 09:36:39 np0005464214 compassionate_hopper[267456]:        "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 09:36:39 np0005464214 compassionate_hopper[267456]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  1 09:36:39 np0005464214 compassionate_hopper[267456]:        "osd_id": 2,
Oct  1 09:36:39 np0005464214 compassionate_hopper[267456]:        "osd_uuid": "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9",
Oct  1 09:36:39 np0005464214 compassionate_hopper[267456]:        "type": "bluestore"
Oct  1 09:36:39 np0005464214 compassionate_hopper[267456]:    },
Oct  1 09:36:39 np0005464214 compassionate_hopper[267456]:    "f5852bc7-e830-489a-b8a9-42dfbbe71426": {
Oct  1 09:36:39 np0005464214 compassionate_hopper[267456]:        "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 09:36:39 np0005464214 compassionate_hopper[267456]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  1 09:36:39 np0005464214 compassionate_hopper[267456]:        "osd_id": 1,
Oct  1 09:36:39 np0005464214 compassionate_hopper[267456]:        "osd_uuid": "f5852bc7-e830-489a-b8a9-42dfbbe71426",
Oct  1 09:36:39 np0005464214 compassionate_hopper[267456]:        "type": "bluestore"
Oct  1 09:36:39 np0005464214 compassionate_hopper[267456]:    }
Oct  1 09:36:39 np0005464214 compassionate_hopper[267456]: }
Oct  1 09:36:40 np0005464214 systemd[1]: libpod-315f01ebf4da10962ed15bdc3df4d136b6ac5fbc16562bf7e9b6062e6ec5f417.scope: Deactivated successfully.
Oct  1 09:36:40 np0005464214 podman[267436]: 2025-10-01 13:36:40.003307947 +0000 UTC m=+1.115849455 container died 315f01ebf4da10962ed15bdc3df4d136b6ac5fbc16562bf7e9b6062e6ec5f417 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_hopper, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 09:36:40 np0005464214 systemd[1]: var-lib-containers-storage-overlay-219a71a3f7e1376966d0718e48925229f88bea106fc2b56e4b154ace68ecf593-merged.mount: Deactivated successfully.
Oct  1 09:36:40 np0005464214 podman[267436]: 2025-10-01 13:36:40.060641217 +0000 UTC m=+1.173182715 container remove 315f01ebf4da10962ed15bdc3df4d136b6ac5fbc16562bf7e9b6062e6ec5f417 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_hopper, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct  1 09:36:40 np0005464214 systemd[1]: libpod-conmon-315f01ebf4da10962ed15bdc3df4d136b6ac5fbc16562bf7e9b6062e6ec5f417.scope: Deactivated successfully.
Oct  1 09:36:40 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  1 09:36:40 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:36:40 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  1 09:36:40 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:36:40 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev 6a4bd843-b873-4285-b750-f5c5cf5d9a15 does not exist
Oct  1 09:36:40 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev 7628b70a-8f0e-4ab7-9d10-3efdbdb99eb6 does not exist
Oct  1 09:36:40 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v961: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:36:41 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:36:41 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:36:42 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:36:42 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v962: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:36:44 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v963: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:36:46 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v964: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:36:47 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:36:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] Optimize plan auto_2025-10-01_13:36:47
Oct  1 09:36:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  1 09:36:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] do_upmap
Oct  1 09:36:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] pools ['default.rgw.log', 'volumes', 'default.rgw.meta', 'vms', 'cephfs.cephfs.meta', 'backups', '.mgr', 'cephfs.cephfs.data', 'default.rgw.control', '.rgw.root', 'images']
Oct  1 09:36:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 09:36:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 09:36:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] prepared 0/10 changes
Oct  1 09:36:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 09:36:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 09:36:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 09:36:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 09:36:47 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  1 09:36:47 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  1 09:36:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  1 09:36:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  1 09:36:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  1 09:36:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  1 09:36:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  1 09:36:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  1 09:36:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  1 09:36:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  1 09:36:48 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v965: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:36:50 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v966: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:36:52 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:36:52 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v967: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:36:54 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v968: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:36:55 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  1 09:36:55 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4092646705' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  1 09:36:55 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  1 09:36:55 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4092646705' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  1 09:36:56 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v969: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:36:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] _maybe_adjust
Oct  1 09:36:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:36:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  1 09:36:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:36:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 09:36:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:36:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 09:36:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:36:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 09:36:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:36:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 09:36:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:36:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct  1 09:36:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:36:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 09:36:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:36:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  1 09:36:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:36:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  1 09:36:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:36:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 09:36:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:36:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  1 09:36:57 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:36:57 np0005464214 nova_compute[260022]: 2025-10-01 13:36:57.344 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 09:36:57 np0005464214 nova_compute[260022]: 2025-10-01 13:36:57.345 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 09:36:58 np0005464214 nova_compute[260022]: 2025-10-01 13:36:58.342 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 09:36:58 np0005464214 nova_compute[260022]: 2025-10-01 13:36:58.344 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 09:36:58 np0005464214 nova_compute[260022]: 2025-10-01 13:36:58.344 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  1 09:36:58 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v970: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:36:59 np0005464214 nova_compute[260022]: 2025-10-01 13:36:59.345 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 09:36:59 np0005464214 nova_compute[260022]: 2025-10-01 13:36:59.345 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  1 09:36:59 np0005464214 nova_compute[260022]: 2025-10-01 13:36:59.345 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  1 09:36:59 np0005464214 nova_compute[260022]: 2025-10-01 13:36:59.361 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  1 09:36:59 np0005464214 nova_compute[260022]: 2025-10-01 13:36:59.361 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 09:36:59 np0005464214 nova_compute[260022]: 2025-10-01 13:36:59.362 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 09:36:59 np0005464214 nova_compute[260022]: 2025-10-01 13:36:59.386 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 09:36:59 np0005464214 nova_compute[260022]: 2025-10-01 13:36:59.386 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 09:36:59 np0005464214 nova_compute[260022]: 2025-10-01 13:36:59.386 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 09:36:59 np0005464214 nova_compute[260022]: 2025-10-01 13:36:59.387 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  1 09:36:59 np0005464214 nova_compute[260022]: 2025-10-01 13:36:59.387 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 09:36:59 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  1 09:36:59 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4174718930' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  1 09:36:59 np0005464214 nova_compute[260022]: 2025-10-01 13:36:59.828 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 09:36:59 np0005464214 nova_compute[260022]: 2025-10-01 13:36:59.995 2 WARNING nova.virt.libvirt.driver [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  1 09:36:59 np0005464214 nova_compute[260022]: 2025-10-01 13:36:59.997 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5171MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  1 09:36:59 np0005464214 nova_compute[260022]: 2025-10-01 13:36:59.997 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 09:36:59 np0005464214 nova_compute[260022]: 2025-10-01 13:36:59.997 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 09:37:00 np0005464214 nova_compute[260022]: 2025-10-01 13:37:00.053 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  1 09:37:00 np0005464214 nova_compute[260022]: 2025-10-01 13:37:00.053 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  1 09:37:00 np0005464214 nova_compute[260022]: 2025-10-01 13:37:00.066 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 09:37:00 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v971: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:37:00 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  1 09:37:00 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1735637092' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  1 09:37:00 np0005464214 nova_compute[260022]: 2025-10-01 13:37:00.533 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 09:37:00 np0005464214 nova_compute[260022]: 2025-10-01 13:37:00.540 2 DEBUG nova.compute.provider_tree [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Inventory has not changed in ProviderTree for provider: c1b9017d-7e6f-44ea-9ee2-bc19313d736f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  1 09:37:00 np0005464214 nova_compute[260022]: 2025-10-01 13:37:00.554 2 DEBUG nova.scheduler.client.report [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Inventory has not changed for provider c1b9017d-7e6f-44ea-9ee2-bc19313d736f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  1 09:37:00 np0005464214 nova_compute[260022]: 2025-10-01 13:37:00.555 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  1 09:37:00 np0005464214 nova_compute[260022]: 2025-10-01 13:37:00.556 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.558s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 09:37:02 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:37:02 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v972: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:37:02 np0005464214 nova_compute[260022]: 2025-10-01 13:37:02.539 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 09:37:02 np0005464214 nova_compute[260022]: 2025-10-01 13:37:02.540 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 09:37:04 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v973: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:37:06 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v974: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:37:07 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:37:08 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v975: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:37:09 np0005464214 podman[267677]: 2025-10-01 13:37:09.519797322 +0000 UTC m=+0.063516295 container health_status dfa509645741d50db256c392f2c7e50ef8a1653f296eb853aefbe3d2ba4746c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.build-date=20250923, org.label-schema.license=GPLv2, tcib_build_tag=36bccb96575468ec919301205d8daa2c, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 09:37:09 np0005464214 podman[267676]: 2025-10-01 13:37:09.521826216 +0000 UTC m=+0.069876936 container health_status c13c85560319b246b9406aed1b6b3015b2c424d3ffff37d0301b6634f5976b0d (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20250923, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, managed_by=edpm_ansible, tcib_build_tag=36bccb96575468ec919301205d8daa2c, container_name=iscsid)
Oct  1 09:37:09 np0005464214 podman[267674]: 2025-10-01 13:37:09.543516341 +0000 UTC m=+0.096496046 container health_status 583cde33fa111e3f81ff9a7adf51b7bee43cd123d54d60383b2d474b3b5066ad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20250923, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=36bccb96575468ec919301205d8daa2c)
Oct  1 09:37:09 np0005464214 podman[267675]: 2025-10-01 13:37:09.549357124 +0000 UTC m=+0.098254620 container health_status a1dc94033b3cdaf0c1939938a446bc6911aa0e2bf0fa0c22c3e618390e8d96e1 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20250923, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c, io.buildah.version=1.41.3)
Oct  1 09:37:10 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v976: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:37:12 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:37:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:37:12.304 161890 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 09:37:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:37:12.304 161890 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 09:37:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:37:12.304 161890 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 09:37:12 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v977: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:37:14 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v978: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:37:16 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v979: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:37:17 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:37:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 09:37:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 09:37:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 09:37:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 09:37:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 09:37:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 09:37:18 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v980: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:37:20 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v981: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:37:22 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:37:22 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v982: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:37:24 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v983: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:37:26 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v984: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:37:27 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:37:28 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v985: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:37:30 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v986: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:37:32 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:37:32 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v987: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:37:34 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v988: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:37:36 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v989: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:37:37 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:37:38 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v990: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:37:40 np0005464214 podman[267784]: 2025-10-01 13:37:40.460905037 +0000 UTC m=+0.077692915 container health_status a1dc94033b3cdaf0c1939938a446bc6911aa0e2bf0fa0c22c3e618390e8d96e1 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20250923, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_build_tag=36bccb96575468ec919301205d8daa2c, maintainer=OpenStack Kubernetes Operator team)
Oct  1 09:37:40 np0005464214 podman[267790]: 2025-10-01 13:37:40.474482199 +0000 UTC m=+0.076208118 container health_status c13c85560319b246b9406aed1b6b3015b2c424d3ffff37d0301b6634f5976b0d (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20250923, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_id=iscsid, container_name=iscsid, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct  1 09:37:40 np0005464214 podman[267783]: 2025-10-01 13:37:40.497042198 +0000 UTC m=+0.124676191 container health_status 583cde33fa111e3f81ff9a7adf51b7bee43cd123d54d60383b2d474b3b5066ad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20250923, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=36bccb96575468ec919301205d8daa2c)
Oct  1 09:37:40 np0005464214 podman[267791]: 2025-10-01 13:37:40.497203493 +0000 UTC m=+0.098827339 container health_status dfa509645741d50db256c392f2c7e50ef8a1653f296eb853aefbe3d2ba4746c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20250923, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Oct  1 09:37:40 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v991: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:37:41 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  1 09:37:41 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  1 09:37:41 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  1 09:37:41 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  1 09:37:41 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  1 09:37:41 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:37:41 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev dde2e831-05ce-44cd-90db-909461227f66 does not exist
Oct  1 09:37:41 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev bc7d3c7b-3319-4fdf-9180-fc33914a14b0 does not exist
Oct  1 09:37:41 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev 18e3d583-4794-4246-9405-984803ffc0a3 does not exist
Oct  1 09:37:41 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  1 09:37:41 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  1 09:37:41 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  1 09:37:41 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  1 09:37:41 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  1 09:37:41 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  1 09:37:41 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  1 09:37:41 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:37:41 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  1 09:37:42 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:37:42 np0005464214 podman[268110]: 2025-10-01 13:37:42.360854628 +0000 UTC m=+0.105133119 container create f6d2064b6d2079a9bf95b1dabad9b8d428857cd8e23f05f278c99b6af3b0a279 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_volhard, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Oct  1 09:37:42 np0005464214 podman[268110]: 2025-10-01 13:37:42.283692341 +0000 UTC m=+0.027970882 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 09:37:42 np0005464214 systemd[1]: Started libpod-conmon-f6d2064b6d2079a9bf95b1dabad9b8d428857cd8e23f05f278c99b6af3b0a279.scope.
Oct  1 09:37:42 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:37:42 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v992: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:37:42 np0005464214 podman[268110]: 2025-10-01 13:37:42.550317663 +0000 UTC m=+0.294596174 container init f6d2064b6d2079a9bf95b1dabad9b8d428857cd8e23f05f278c99b6af3b0a279 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_volhard, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507)
Oct  1 09:37:42 np0005464214 podman[268110]: 2025-10-01 13:37:42.563969158 +0000 UTC m=+0.308247619 container start f6d2064b6d2079a9bf95b1dabad9b8d428857cd8e23f05f278c99b6af3b0a279 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_volhard, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True)
Oct  1 09:37:42 np0005464214 intelligent_volhard[268127]: 167 167
Oct  1 09:37:42 np0005464214 systemd[1]: libpod-f6d2064b6d2079a9bf95b1dabad9b8d428857cd8e23f05f278c99b6af3b0a279.scope: Deactivated successfully.
Oct  1 09:37:42 np0005464214 podman[268110]: 2025-10-01 13:37:42.587005021 +0000 UTC m=+0.331283472 container attach f6d2064b6d2079a9bf95b1dabad9b8d428857cd8e23f05f278c99b6af3b0a279 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_volhard, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct  1 09:37:42 np0005464214 podman[268110]: 2025-10-01 13:37:42.587660002 +0000 UTC m=+0.331938453 container died f6d2064b6d2079a9bf95b1dabad9b8d428857cd8e23f05f278c99b6af3b0a279 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_volhard, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct  1 09:37:42 np0005464214 systemd[1]: var-lib-containers-storage-overlay-520935e59eb2494666d50208b1b0411386a3a97c27ed95b14712d019e22726c3-merged.mount: Deactivated successfully.
Oct  1 09:37:42 np0005464214 podman[268110]: 2025-10-01 13:37:42.743269388 +0000 UTC m=+0.487547869 container remove f6d2064b6d2079a9bf95b1dabad9b8d428857cd8e23f05f278c99b6af3b0a279 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_volhard, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3)
Oct  1 09:37:42 np0005464214 systemd[1]: libpod-conmon-f6d2064b6d2079a9bf95b1dabad9b8d428857cd8e23f05f278c99b6af3b0a279.scope: Deactivated successfully.
Oct  1 09:37:42 np0005464214 podman[268153]: 2025-10-01 13:37:42.997999812 +0000 UTC m=+0.070646372 container create ce8617e5f93ca1d3ebcd61e5310085917f8b9c675c7db6165f721e4bb7b06b48 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_brahmagupta, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Oct  1 09:37:43 np0005464214 podman[268153]: 2025-10-01 13:37:42.954513856 +0000 UTC m=+0.027160476 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 09:37:43 np0005464214 systemd[1]: Started libpod-conmon-ce8617e5f93ca1d3ebcd61e5310085917f8b9c675c7db6165f721e4bb7b06b48.scope.
Oct  1 09:37:43 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:37:43 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f71da573ef89bc105f0954d73a72e2303e1de8acfc799237e287739d13209e1d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 09:37:43 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f71da573ef89bc105f0954d73a72e2303e1de8acfc799237e287739d13209e1d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 09:37:43 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f71da573ef89bc105f0954d73a72e2303e1de8acfc799237e287739d13209e1d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 09:37:43 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f71da573ef89bc105f0954d73a72e2303e1de8acfc799237e287739d13209e1d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 09:37:43 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f71da573ef89bc105f0954d73a72e2303e1de8acfc799237e287739d13209e1d/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  1 09:37:43 np0005464214 podman[268153]: 2025-10-01 13:37:43.104115151 +0000 UTC m=+0.176761721 container init ce8617e5f93ca1d3ebcd61e5310085917f8b9c675c7db6165f721e4bb7b06b48 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_brahmagupta, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Oct  1 09:37:43 np0005464214 podman[268153]: 2025-10-01 13:37:43.116405672 +0000 UTC m=+0.189052222 container start ce8617e5f93ca1d3ebcd61e5310085917f8b9c675c7db6165f721e4bb7b06b48 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_brahmagupta, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 09:37:43 np0005464214 podman[268153]: 2025-10-01 13:37:43.138177906 +0000 UTC m=+0.210824456 container attach ce8617e5f93ca1d3ebcd61e5310085917f8b9c675c7db6165f721e4bb7b06b48 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_brahmagupta, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 09:37:44 np0005464214 charming_brahmagupta[268169]: --> passed data devices: 0 physical, 3 LVM
Oct  1 09:37:44 np0005464214 charming_brahmagupta[268169]: --> relative data size: 1.0
Oct  1 09:37:44 np0005464214 charming_brahmagupta[268169]: --> All data devices are unavailable
Oct  1 09:37:44 np0005464214 systemd[1]: libpod-ce8617e5f93ca1d3ebcd61e5310085917f8b9c675c7db6165f721e4bb7b06b48.scope: Deactivated successfully.
Oct  1 09:37:44 np0005464214 podman[268153]: 2025-10-01 13:37:44.229516504 +0000 UTC m=+1.302163094 container died ce8617e5f93ca1d3ebcd61e5310085917f8b9c675c7db6165f721e4bb7b06b48 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_brahmagupta, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 09:37:44 np0005464214 systemd[1]: libpod-ce8617e5f93ca1d3ebcd61e5310085917f8b9c675c7db6165f721e4bb7b06b48.scope: Consumed 1.058s CPU time.
Oct  1 09:37:44 np0005464214 systemd[1]: var-lib-containers-storage-overlay-f71da573ef89bc105f0954d73a72e2303e1de8acfc799237e287739d13209e1d-merged.mount: Deactivated successfully.
Oct  1 09:37:44 np0005464214 podman[268153]: 2025-10-01 13:37:44.298960356 +0000 UTC m=+1.371606906 container remove ce8617e5f93ca1d3ebcd61e5310085917f8b9c675c7db6165f721e4bb7b06b48 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_brahmagupta, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507)
Oct  1 09:37:44 np0005464214 systemd[1]: libpod-conmon-ce8617e5f93ca1d3ebcd61e5310085917f8b9c675c7db6165f721e4bb7b06b48.scope: Deactivated successfully.
Oct  1 09:37:44 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v993: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:37:45 np0005464214 podman[268353]: 2025-10-01 13:37:45.020493645 +0000 UTC m=+0.044690654 container create 910bfb0bd2182f223269c233580ffa218eb85f58ae21fbc7ba06c92e7bbf0af2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_heisenberg, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 09:37:45 np0005464214 systemd[1]: Started libpod-conmon-910bfb0bd2182f223269c233580ffa218eb85f58ae21fbc7ba06c92e7bbf0af2.scope.
Oct  1 09:37:45 np0005464214 podman[268353]: 2025-10-01 13:37:44.999852879 +0000 UTC m=+0.024049918 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 09:37:45 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:37:45 np0005464214 podman[268353]: 2025-10-01 13:37:45.128147574 +0000 UTC m=+0.152344613 container init 910bfb0bd2182f223269c233580ffa218eb85f58ae21fbc7ba06c92e7bbf0af2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_heisenberg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2)
Oct  1 09:37:45 np0005464214 podman[268353]: 2025-10-01 13:37:45.136230962 +0000 UTC m=+0.160427971 container start 910bfb0bd2182f223269c233580ffa218eb85f58ae21fbc7ba06c92e7bbf0af2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_heisenberg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct  1 09:37:45 np0005464214 podman[268353]: 2025-10-01 13:37:45.140531028 +0000 UTC m=+0.164728087 container attach 910bfb0bd2182f223269c233580ffa218eb85f58ae21fbc7ba06c92e7bbf0af2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_heisenberg, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 09:37:45 np0005464214 gifted_heisenberg[268369]: 167 167
Oct  1 09:37:45 np0005464214 systemd[1]: libpod-910bfb0bd2182f223269c233580ffa218eb85f58ae21fbc7ba06c92e7bbf0af2.scope: Deactivated successfully.
Oct  1 09:37:45 np0005464214 podman[268353]: 2025-10-01 13:37:45.147274754 +0000 UTC m=+0.171471763 container died 910bfb0bd2182f223269c233580ffa218eb85f58ae21fbc7ba06c92e7bbf0af2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_heisenberg, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct  1 09:37:45 np0005464214 systemd[1]: var-lib-containers-storage-overlay-82871bd6244f213aec242db39b19e2099c9b72c78a965538d70e14ec2bad8c7d-merged.mount: Deactivated successfully.
Oct  1 09:37:45 np0005464214 podman[268353]: 2025-10-01 13:37:45.196094559 +0000 UTC m=+0.220291568 container remove 910bfb0bd2182f223269c233580ffa218eb85f58ae21fbc7ba06c92e7bbf0af2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_heisenberg, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True)
Oct  1 09:37:45 np0005464214 systemd[1]: libpod-conmon-910bfb0bd2182f223269c233580ffa218eb85f58ae21fbc7ba06c92e7bbf0af2.scope: Deactivated successfully.
Oct  1 09:37:45 np0005464214 podman[268395]: 2025-10-01 13:37:45.398121512 +0000 UTC m=+0.053439503 container create cdbe4740dc525c8762da52f2836b07999fb88f5e98b4203de2397ed637b5ebe3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_albattani, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct  1 09:37:45 np0005464214 systemd[1]: Started libpod-conmon-cdbe4740dc525c8762da52f2836b07999fb88f5e98b4203de2397ed637b5ebe3.scope.
Oct  1 09:37:45 np0005464214 podman[268395]: 2025-10-01 13:37:45.371301129 +0000 UTC m=+0.026619160 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 09:37:45 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:37:45 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/77935a820aa580a0cfd2be8e2f8ebe4c3383bcc13fdd65c012ecd41b40aecba6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 09:37:45 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/77935a820aa580a0cfd2be8e2f8ebe4c3383bcc13fdd65c012ecd41b40aecba6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 09:37:45 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/77935a820aa580a0cfd2be8e2f8ebe4c3383bcc13fdd65c012ecd41b40aecba6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 09:37:45 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/77935a820aa580a0cfd2be8e2f8ebe4c3383bcc13fdd65c012ecd41b40aecba6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 09:37:45 np0005464214 podman[268395]: 2025-10-01 13:37:45.494351098 +0000 UTC m=+0.149669099 container init cdbe4740dc525c8762da52f2836b07999fb88f5e98b4203de2397ed637b5ebe3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_albattani, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Oct  1 09:37:45 np0005464214 podman[268395]: 2025-10-01 13:37:45.506820985 +0000 UTC m=+0.162139006 container start cdbe4740dc525c8762da52f2836b07999fb88f5e98b4203de2397ed637b5ebe3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_albattani, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Oct  1 09:37:45 np0005464214 podman[268395]: 2025-10-01 13:37:45.51106663 +0000 UTC m=+0.166384601 container attach cdbe4740dc525c8762da52f2836b07999fb88f5e98b4203de2397ed637b5ebe3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_albattani, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default)
Oct  1 09:37:46 np0005464214 musing_albattani[268411]: {
Oct  1 09:37:46 np0005464214 musing_albattani[268411]:    "0": [
Oct  1 09:37:46 np0005464214 musing_albattani[268411]:        {
Oct  1 09:37:46 np0005464214 musing_albattani[268411]:            "devices": [
Oct  1 09:37:46 np0005464214 musing_albattani[268411]:                "/dev/loop3"
Oct  1 09:37:46 np0005464214 musing_albattani[268411]:            ],
Oct  1 09:37:46 np0005464214 musing_albattani[268411]:            "lv_name": "ceph_lv0",
Oct  1 09:37:46 np0005464214 musing_albattani[268411]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  1 09:37:46 np0005464214 musing_albattani[268411]:            "lv_size": "21470642176",
Oct  1 09:37:46 np0005464214 musing_albattani[268411]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 09:37:46 np0005464214 musing_albattani[268411]:            "lv_uuid": "wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS",
Oct  1 09:37:46 np0005464214 musing_albattani[268411]:            "name": "ceph_lv0",
Oct  1 09:37:46 np0005464214 musing_albattani[268411]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  1 09:37:46 np0005464214 musing_albattani[268411]:            "tags": {
Oct  1 09:37:46 np0005464214 musing_albattani[268411]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  1 09:37:46 np0005464214 musing_albattani[268411]:                "ceph.block_uuid": "wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS",
Oct  1 09:37:46 np0005464214 musing_albattani[268411]:                "ceph.cephx_lockbox_secret": "",
Oct  1 09:37:46 np0005464214 musing_albattani[268411]:                "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 09:37:46 np0005464214 musing_albattani[268411]:                "ceph.cluster_name": "ceph",
Oct  1 09:37:46 np0005464214 musing_albattani[268411]:                "ceph.crush_device_class": "",
Oct  1 09:37:46 np0005464214 musing_albattani[268411]:                "ceph.encrypted": "0",
Oct  1 09:37:46 np0005464214 musing_albattani[268411]:                "ceph.osd_fsid": "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982",
Oct  1 09:37:46 np0005464214 musing_albattani[268411]:                "ceph.osd_id": "0",
Oct  1 09:37:46 np0005464214 musing_albattani[268411]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 09:37:46 np0005464214 musing_albattani[268411]:                "ceph.type": "block",
Oct  1 09:37:46 np0005464214 musing_albattani[268411]:                "ceph.vdo": "0"
Oct  1 09:37:46 np0005464214 musing_albattani[268411]:            },
Oct  1 09:37:46 np0005464214 musing_albattani[268411]:            "type": "block",
Oct  1 09:37:46 np0005464214 musing_albattani[268411]:            "vg_name": "ceph_vg0"
Oct  1 09:37:46 np0005464214 musing_albattani[268411]:        }
Oct  1 09:37:46 np0005464214 musing_albattani[268411]:    ],
Oct  1 09:37:46 np0005464214 musing_albattani[268411]:    "1": [
Oct  1 09:37:46 np0005464214 musing_albattani[268411]:        {
Oct  1 09:37:46 np0005464214 musing_albattani[268411]:            "devices": [
Oct  1 09:37:46 np0005464214 musing_albattani[268411]:                "/dev/loop4"
Oct  1 09:37:46 np0005464214 musing_albattani[268411]:            ],
Oct  1 09:37:46 np0005464214 musing_albattani[268411]:            "lv_name": "ceph_lv1",
Oct  1 09:37:46 np0005464214 musing_albattani[268411]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  1 09:37:46 np0005464214 musing_albattani[268411]:            "lv_size": "21470642176",
Oct  1 09:37:46 np0005464214 musing_albattani[268411]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f5852bc7-e830-489a-b8a9-42dfbbe71426,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 09:37:46 np0005464214 musing_albattani[268411]:            "lv_uuid": "BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY",
Oct  1 09:37:46 np0005464214 musing_albattani[268411]:            "name": "ceph_lv1",
Oct  1 09:37:46 np0005464214 musing_albattani[268411]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  1 09:37:46 np0005464214 musing_albattani[268411]:            "tags": {
Oct  1 09:37:46 np0005464214 musing_albattani[268411]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  1 09:37:46 np0005464214 musing_albattani[268411]:                "ceph.block_uuid": "BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY",
Oct  1 09:37:46 np0005464214 musing_albattani[268411]:                "ceph.cephx_lockbox_secret": "",
Oct  1 09:37:46 np0005464214 musing_albattani[268411]:                "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 09:37:46 np0005464214 musing_albattani[268411]:                "ceph.cluster_name": "ceph",
Oct  1 09:37:46 np0005464214 musing_albattani[268411]:                "ceph.crush_device_class": "",
Oct  1 09:37:46 np0005464214 musing_albattani[268411]:                "ceph.encrypted": "0",
Oct  1 09:37:46 np0005464214 musing_albattani[268411]:                "ceph.osd_fsid": "f5852bc7-e830-489a-b8a9-42dfbbe71426",
Oct  1 09:37:46 np0005464214 musing_albattani[268411]:                "ceph.osd_id": "1",
Oct  1 09:37:46 np0005464214 musing_albattani[268411]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 09:37:46 np0005464214 musing_albattani[268411]:                "ceph.type": "block",
Oct  1 09:37:46 np0005464214 musing_albattani[268411]:                "ceph.vdo": "0"
Oct  1 09:37:46 np0005464214 musing_albattani[268411]:            },
Oct  1 09:37:46 np0005464214 musing_albattani[268411]:            "type": "block",
Oct  1 09:37:46 np0005464214 musing_albattani[268411]:            "vg_name": "ceph_vg1"
Oct  1 09:37:46 np0005464214 musing_albattani[268411]:        }
Oct  1 09:37:46 np0005464214 musing_albattani[268411]:    ],
Oct  1 09:37:46 np0005464214 musing_albattani[268411]:    "2": [
Oct  1 09:37:46 np0005464214 musing_albattani[268411]:        {
Oct  1 09:37:46 np0005464214 musing_albattani[268411]:            "devices": [
Oct  1 09:37:46 np0005464214 musing_albattani[268411]:                "/dev/loop5"
Oct  1 09:37:46 np0005464214 musing_albattani[268411]:            ],
Oct  1 09:37:46 np0005464214 musing_albattani[268411]:            "lv_name": "ceph_lv2",
Oct  1 09:37:46 np0005464214 musing_albattani[268411]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  1 09:37:46 np0005464214 musing_albattani[268411]:            "lv_size": "21470642176",
Oct  1 09:37:46 np0005464214 musing_albattani[268411]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c4c937e2-a8a8-47c3-af37-fdedb6fff1f9,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 09:37:46 np0005464214 musing_albattani[268411]:            "lv_uuid": "mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK",
Oct  1 09:37:46 np0005464214 musing_albattani[268411]:            "name": "ceph_lv2",
Oct  1 09:37:46 np0005464214 musing_albattani[268411]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  1 09:37:46 np0005464214 musing_albattani[268411]:            "tags": {
Oct  1 09:37:46 np0005464214 musing_albattani[268411]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  1 09:37:46 np0005464214 musing_albattani[268411]:                "ceph.block_uuid": "mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK",
Oct  1 09:37:46 np0005464214 musing_albattani[268411]:                "ceph.cephx_lockbox_secret": "",
Oct  1 09:37:46 np0005464214 musing_albattani[268411]:                "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 09:37:46 np0005464214 musing_albattani[268411]:                "ceph.cluster_name": "ceph",
Oct  1 09:37:46 np0005464214 musing_albattani[268411]:                "ceph.crush_device_class": "",
Oct  1 09:37:46 np0005464214 musing_albattani[268411]:                "ceph.encrypted": "0",
Oct  1 09:37:46 np0005464214 musing_albattani[268411]:                "ceph.osd_fsid": "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9",
Oct  1 09:37:46 np0005464214 musing_albattani[268411]:                "ceph.osd_id": "2",
Oct  1 09:37:46 np0005464214 musing_albattani[268411]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 09:37:46 np0005464214 musing_albattani[268411]:                "ceph.type": "block",
Oct  1 09:37:46 np0005464214 musing_albattani[268411]:                "ceph.vdo": "0"
Oct  1 09:37:46 np0005464214 musing_albattani[268411]:            },
Oct  1 09:37:46 np0005464214 musing_albattani[268411]:            "type": "block",
Oct  1 09:37:46 np0005464214 musing_albattani[268411]:            "vg_name": "ceph_vg2"
Oct  1 09:37:46 np0005464214 musing_albattani[268411]:        }
Oct  1 09:37:46 np0005464214 musing_albattani[268411]:    ]
Oct  1 09:37:46 np0005464214 musing_albattani[268411]: }
Oct  1 09:37:46 np0005464214 systemd[1]: libpod-cdbe4740dc525c8762da52f2836b07999fb88f5e98b4203de2397ed637b5ebe3.scope: Deactivated successfully.
Oct  1 09:37:46 np0005464214 podman[268395]: 2025-10-01 13:37:46.328779563 +0000 UTC m=+0.984097564 container died cdbe4740dc525c8762da52f2836b07999fb88f5e98b4203de2397ed637b5ebe3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_albattani, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 09:37:46 np0005464214 systemd[1]: var-lib-containers-storage-overlay-77935a820aa580a0cfd2be8e2f8ebe4c3383bcc13fdd65c012ecd41b40aecba6-merged.mount: Deactivated successfully.
Oct  1 09:37:46 np0005464214 podman[268395]: 2025-10-01 13:37:46.386659676 +0000 UTC m=+1.041977647 container remove cdbe4740dc525c8762da52f2836b07999fb88f5e98b4203de2397ed637b5ebe3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_albattani, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct  1 09:37:46 np0005464214 systemd[1]: libpod-conmon-cdbe4740dc525c8762da52f2836b07999fb88f5e98b4203de2397ed637b5ebe3.scope: Deactivated successfully.
Oct  1 09:37:46 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v994: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:37:47 np0005464214 podman[268574]: 2025-10-01 13:37:47.150956298 +0000 UTC m=+0.065964051 container create 147d5631b59654dc4ec8b37fd1f704c7c5fac5abc1e5e04a5ac0800f3018be1a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_johnson, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 09:37:47 np0005464214 systemd[1]: Started libpod-conmon-147d5631b59654dc4ec8b37fd1f704c7c5fac5abc1e5e04a5ac0800f3018be1a.scope.
Oct  1 09:37:47 np0005464214 podman[268574]: 2025-10-01 13:37:47.124588739 +0000 UTC m=+0.039596512 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 09:37:47 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:37:47 np0005464214 podman[268574]: 2025-10-01 13:37:47.256213891 +0000 UTC m=+0.171221704 container init 147d5631b59654dc4ec8b37fd1f704c7c5fac5abc1e5e04a5ac0800f3018be1a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_johnson, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 09:37:47 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:37:47 np0005464214 podman[268574]: 2025-10-01 13:37:47.269905817 +0000 UTC m=+0.184913530 container start 147d5631b59654dc4ec8b37fd1f704c7c5fac5abc1e5e04a5ac0800f3018be1a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_johnson, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 09:37:47 np0005464214 podman[268574]: 2025-10-01 13:37:47.274257355 +0000 UTC m=+0.189265318 container attach 147d5631b59654dc4ec8b37fd1f704c7c5fac5abc1e5e04a5ac0800f3018be1a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_johnson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct  1 09:37:47 np0005464214 reverent_johnson[268590]: 167 167
Oct  1 09:37:47 np0005464214 systemd[1]: libpod-147d5631b59654dc4ec8b37fd1f704c7c5fac5abc1e5e04a5ac0800f3018be1a.scope: Deactivated successfully.
Oct  1 09:37:47 np0005464214 podman[268574]: 2025-10-01 13:37:47.279209193 +0000 UTC m=+0.194216906 container died 147d5631b59654dc4ec8b37fd1f704c7c5fac5abc1e5e04a5ac0800f3018be1a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_johnson, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 09:37:47 np0005464214 systemd[1]: var-lib-containers-storage-overlay-e6eb0707b7d50c4e2aa513e57b3a1377d282533727cc5b967c48e4ea7f9aa6b3-merged.mount: Deactivated successfully.
Oct  1 09:37:47 np0005464214 podman[268574]: 2025-10-01 13:37:47.325652443 +0000 UTC m=+0.240660146 container remove 147d5631b59654dc4ec8b37fd1f704c7c5fac5abc1e5e04a5ac0800f3018be1a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_johnson, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct  1 09:37:47 np0005464214 systemd[1]: libpod-conmon-147d5631b59654dc4ec8b37fd1f704c7c5fac5abc1e5e04a5ac0800f3018be1a.scope: Deactivated successfully.
Oct  1 09:37:47 np0005464214 podman[268614]: 2025-10-01 13:37:47.505565553 +0000 UTC m=+0.056622865 container create 6769ee939c74fa7c3b0b48423353d4803382233abec8b407806b2a85006a4143 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_hugle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Oct  1 09:37:47 np0005464214 systemd[1]: Started libpod-conmon-6769ee939c74fa7c3b0b48423353d4803382233abec8b407806b2a85006a4143.scope.
Oct  1 09:37:47 np0005464214 podman[268614]: 2025-10-01 13:37:47.478319175 +0000 UTC m=+0.029376517 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 09:37:47 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:37:47 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/67e6d65f71ea242fdb3ec035e35e6fecfbb38425dbce169ac6bbc88065d257b9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 09:37:47 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/67e6d65f71ea242fdb3ec035e35e6fecfbb38425dbce169ac6bbc88065d257b9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 09:37:47 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/67e6d65f71ea242fdb3ec035e35e6fecfbb38425dbce169ac6bbc88065d257b9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 09:37:47 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/67e6d65f71ea242fdb3ec035e35e6fecfbb38425dbce169ac6bbc88065d257b9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 09:37:47 np0005464214 podman[268614]: 2025-10-01 13:37:47.618557061 +0000 UTC m=+0.169614413 container init 6769ee939c74fa7c3b0b48423353d4803382233abec8b407806b2a85006a4143 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_hugle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 09:37:47 np0005464214 podman[268614]: 2025-10-01 13:37:47.627542547 +0000 UTC m=+0.178599889 container start 6769ee939c74fa7c3b0b48423353d4803382233abec8b407806b2a85006a4143 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_hugle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 09:37:47 np0005464214 podman[268614]: 2025-10-01 13:37:47.639422226 +0000 UTC m=+0.190479538 container attach 6769ee939c74fa7c3b0b48423353d4803382233abec8b407806b2a85006a4143 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_hugle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 09:37:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 09:37:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 09:37:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] Optimize plan auto_2025-10-01_13:37:47
Oct  1 09:37:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  1 09:37:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] do_upmap
Oct  1 09:37:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] pools ['vms', '.mgr', 'cephfs.cephfs.meta', 'default.rgw.control', 'volumes', 'backups', 'cephfs.cephfs.data', 'default.rgw.log', 'default.rgw.meta', 'images', '.rgw.root']
Oct  1 09:37:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 09:37:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 09:37:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 09:37:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 09:37:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] prepared 0/10 changes
Oct  1 09:37:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  1 09:37:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  1 09:37:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  1 09:37:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  1 09:37:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  1 09:37:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  1 09:37:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  1 09:37:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  1 09:37:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  1 09:37:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  1 09:37:48 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v995: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:37:48 np0005464214 vibrant_hugle[268630]: {
Oct  1 09:37:48 np0005464214 vibrant_hugle[268630]:    "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982": {
Oct  1 09:37:48 np0005464214 vibrant_hugle[268630]:        "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 09:37:48 np0005464214 vibrant_hugle[268630]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  1 09:37:48 np0005464214 vibrant_hugle[268630]:        "osd_id": 0,
Oct  1 09:37:48 np0005464214 vibrant_hugle[268630]:        "osd_uuid": "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982",
Oct  1 09:37:48 np0005464214 vibrant_hugle[268630]:        "type": "bluestore"
Oct  1 09:37:48 np0005464214 vibrant_hugle[268630]:    },
Oct  1 09:37:48 np0005464214 vibrant_hugle[268630]:    "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9": {
Oct  1 09:37:48 np0005464214 vibrant_hugle[268630]:        "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 09:37:48 np0005464214 vibrant_hugle[268630]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  1 09:37:48 np0005464214 vibrant_hugle[268630]:        "osd_id": 2,
Oct  1 09:37:48 np0005464214 vibrant_hugle[268630]:        "osd_uuid": "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9",
Oct  1 09:37:48 np0005464214 vibrant_hugle[268630]:        "type": "bluestore"
Oct  1 09:37:48 np0005464214 vibrant_hugle[268630]:    },
Oct  1 09:37:48 np0005464214 vibrant_hugle[268630]:    "f5852bc7-e830-489a-b8a9-42dfbbe71426": {
Oct  1 09:37:48 np0005464214 vibrant_hugle[268630]:        "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 09:37:48 np0005464214 vibrant_hugle[268630]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  1 09:37:48 np0005464214 vibrant_hugle[268630]:        "osd_id": 1,
Oct  1 09:37:48 np0005464214 vibrant_hugle[268630]:        "osd_uuid": "f5852bc7-e830-489a-b8a9-42dfbbe71426",
Oct  1 09:37:48 np0005464214 vibrant_hugle[268630]:        "type": "bluestore"
Oct  1 09:37:48 np0005464214 vibrant_hugle[268630]:    }
Oct  1 09:37:48 np0005464214 vibrant_hugle[268630]: }
Oct  1 09:37:48 np0005464214 systemd[1]: libpod-6769ee939c74fa7c3b0b48423353d4803382233abec8b407806b2a85006a4143.scope: Deactivated successfully.
Oct  1 09:37:48 np0005464214 systemd[1]: libpod-6769ee939c74fa7c3b0b48423353d4803382233abec8b407806b2a85006a4143.scope: Consumed 1.083s CPU time.
Oct  1 09:37:48 np0005464214 podman[268614]: 2025-10-01 13:37:48.701928305 +0000 UTC m=+1.252985617 container died 6769ee939c74fa7c3b0b48423353d4803382233abec8b407806b2a85006a4143 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_hugle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Oct  1 09:37:48 np0005464214 systemd[1]: var-lib-containers-storage-overlay-67e6d65f71ea242fdb3ec035e35e6fecfbb38425dbce169ac6bbc88065d257b9-merged.mount: Deactivated successfully.
Oct  1 09:37:48 np0005464214 podman[268614]: 2025-10-01 13:37:48.768870277 +0000 UTC m=+1.319927579 container remove 6769ee939c74fa7c3b0b48423353d4803382233abec8b407806b2a85006a4143 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_hugle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct  1 09:37:48 np0005464214 systemd[1]: libpod-conmon-6769ee939c74fa7c3b0b48423353d4803382233abec8b407806b2a85006a4143.scope: Deactivated successfully.
Oct  1 09:37:48 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  1 09:37:48 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:37:48 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  1 09:37:48 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:37:48 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev 8a732560-0f33-4d9f-8372-514eb3f6275f does not exist
Oct  1 09:37:48 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev 430b7261-8409-42b6-ba91-dd87474caa77 does not exist
Oct  1 09:37:49 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:37:49 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:37:50 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v996: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:37:52 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:37:52 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v997: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:37:54 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v998: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:37:55 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  1 09:37:55 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1148074225' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  1 09:37:55 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  1 09:37:55 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1148074225' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  1 09:37:56 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v999: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:37:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] _maybe_adjust
Oct  1 09:37:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:37:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  1 09:37:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:37:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 09:37:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:37:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 09:37:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:37:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 09:37:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:37:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 09:37:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:37:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct  1 09:37:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:37:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 09:37:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:37:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  1 09:37:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:37:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  1 09:37:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:37:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 09:37:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:37:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  1 09:37:57 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:37:58 np0005464214 nova_compute[260022]: 2025-10-01 13:37:58.346 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 09:37:58 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1000: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:37:59 np0005464214 nova_compute[260022]: 2025-10-01 13:37:59.342 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 09:37:59 np0005464214 nova_compute[260022]: 2025-10-01 13:37:59.343 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 09:37:59 np0005464214 nova_compute[260022]: 2025-10-01 13:37:59.344 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  1 09:37:59 np0005464214 nova_compute[260022]: 2025-10-01 13:37:59.344 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  1 09:37:59 np0005464214 nova_compute[260022]: 2025-10-01 13:37:59.358 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  1 09:37:59 np0005464214 nova_compute[260022]: 2025-10-01 13:37:59.358 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 09:37:59 np0005464214 nova_compute[260022]: 2025-10-01 13:37:59.358 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 09:37:59 np0005464214 nova_compute[260022]: 2025-10-01 13:37:59.359 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 09:37:59 np0005464214 nova_compute[260022]: 2025-10-01 13:37:59.393 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 09:37:59 np0005464214 nova_compute[260022]: 2025-10-01 13:37:59.394 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 09:37:59 np0005464214 nova_compute[260022]: 2025-10-01 13:37:59.394 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 09:37:59 np0005464214 nova_compute[260022]: 2025-10-01 13:37:59.394 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  1 09:37:59 np0005464214 nova_compute[260022]: 2025-10-01 13:37:59.394 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 09:37:59 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  1 09:37:59 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3977658984' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  1 09:37:59 np0005464214 nova_compute[260022]: 2025-10-01 13:37:59.869 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.475s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 09:38:00 np0005464214 nova_compute[260022]: 2025-10-01 13:38:00.049 2 WARNING nova.virt.libvirt.driver [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  1 09:38:00 np0005464214 nova_compute[260022]: 2025-10-01 13:38:00.051 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5164MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  1 09:38:00 np0005464214 nova_compute[260022]: 2025-10-01 13:38:00.051 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 09:38:00 np0005464214 nova_compute[260022]: 2025-10-01 13:38:00.052 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 09:38:00 np0005464214 nova_compute[260022]: 2025-10-01 13:38:00.125 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  1 09:38:00 np0005464214 nova_compute[260022]: 2025-10-01 13:38:00.125 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  1 09:38:00 np0005464214 nova_compute[260022]: 2025-10-01 13:38:00.140 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 09:38:00 np0005464214 ceph-mon[74802]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  1 09:38:00 np0005464214 ceph-mon[74802]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1800.0 total, 600.0 interval#012Cumulative writes: 4579 writes, 20K keys, 4579 commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.02 MB/s#012Cumulative WAL: 4579 writes, 4579 syncs, 1.00 writes per sync, written: 0.03 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1270 writes, 5576 keys, 1270 commit groups, 1.0 writes per commit group, ingest: 8.38 MB, 0.01 MB/s#012Interval WAL: 1271 writes, 1271 syncs, 1.00 writes per sync, written: 0.01 GB, 0.01 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     11.4      1.88              0.08        11    0.171       0      0       0.0       0.0#012  L6      1/0    7.18 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   3.3     28.2     23.5      3.00              0.26        10    0.300     43K   5162       0.0       0.0#012 Sum      1/0    7.18 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   4.3     17.3     18.8      4.88              0.34        21    0.232     43K   5162       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   6.3     10.7     10.8      3.13              0.14         8    0.392     18K   1960       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) 
Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   0.0     28.2     23.5      3.00              0.26        10    0.300     43K   5162       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     11.4      1.86              0.08        10    0.186       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      4.6      0.01              0.00         1    0.011       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1800.0 total, 600.0 interval#012Flush(GB): cumulative 0.021, interval 0.005#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.09 GB write, 0.05 MB/s write, 0.08 GB read, 0.05 MB/s read, 4.9 seconds#012Interval compaction: 0.03 GB write, 0.06 MB/s write, 0.03 GB read, 0.06 MB/s read, 3.1 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55daa55431f0#2 capacity: 308.00 MB usage: 6.60 MB table_size: 0 occupancy: 18446744073709551615 collections: 4 last_copies: 0 last_secs: 0.00013 secs_since: 0#012Block cache entry stats(count,size,portion): 
DataBlock(416,6.23 MB,2.02434%) FilterBlock(22,128.55 KB,0.0407578%) IndexBlock(22,240.75 KB,0.0763336%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Oct  1 09:38:00 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1001: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:38:00 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  1 09:38:00 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3573693310' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  1 09:38:00 np0005464214 nova_compute[260022]: 2025-10-01 13:38:00.566 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.426s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 09:38:00 np0005464214 nova_compute[260022]: 2025-10-01 13:38:00.574 2 DEBUG nova.compute.provider_tree [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Inventory has not changed in ProviderTree for provider: c1b9017d-7e6f-44ea-9ee2-bc19313d736f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  1 09:38:00 np0005464214 nova_compute[260022]: 2025-10-01 13:38:00.636 2 DEBUG nova.scheduler.client.report [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Inventory has not changed for provider c1b9017d-7e6f-44ea-9ee2-bc19313d736f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  1 09:38:00 np0005464214 nova_compute[260022]: 2025-10-01 13:38:00.638 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  1 09:38:00 np0005464214 nova_compute[260022]: 2025-10-01 13:38:00.638 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.586s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 09:38:01 np0005464214 nova_compute[260022]: 2025-10-01 13:38:01.625 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 09:38:01 np0005464214 nova_compute[260022]: 2025-10-01 13:38:01.626 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 09:38:01 np0005464214 nova_compute[260022]: 2025-10-01 13:38:01.627 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  1 09:38:02 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:38:02 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1002: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:38:04 np0005464214 nova_compute[260022]: 2025-10-01 13:38:04.342 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 09:38:04 np0005464214 nova_compute[260022]: 2025-10-01 13:38:04.358 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 09:38:04 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1003: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:38:06 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1004: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:38:07 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:38:08 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1005: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:38:10 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1006: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:38:11 np0005464214 podman[268771]: 2025-10-01 13:38:11.545708806 +0000 UTC m=+0.080408552 container health_status c13c85560319b246b9406aed1b6b3015b2c424d3ffff37d0301b6634f5976b0d (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=iscsid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_id=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250923, managed_by=edpm_ansible, tcib_managed=true)
Oct  1 09:38:11 np0005464214 podman[268770]: 2025-10-01 13:38:11.556380767 +0000 UTC m=+0.098543870 container health_status a1dc94033b3cdaf0c1939938a446bc6911aa0e2bf0fa0c22c3e618390e8d96e1 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20250923, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct  1 09:38:11 np0005464214 podman[268769]: 2025-10-01 13:38:11.58256273 +0000 UTC m=+0.125498108 container health_status 583cde33fa111e3f81ff9a7adf51b7bee43cd123d54d60383b2d474b3b5066ad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20250923, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3)
Oct  1 09:38:11 np0005464214 podman[268772]: 2025-10-01 13:38:11.582630222 +0000 UTC m=+0.106400220 container health_status dfa509645741d50db256c392f2c7e50ef8a1653f296eb853aefbe3d2ba4746c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, 
container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20250923, tcib_build_tag=36bccb96575468ec919301205d8daa2c, managed_by=edpm_ansible)
Oct  1 09:38:12 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:38:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:38:12.306 161890 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 09:38:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:38:12.306 161890 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 09:38:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:38:12.306 161890 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 09:38:12 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1007: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:38:14 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1008: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:38:16 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1009: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:38:17 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:38:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 09:38:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 09:38:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 09:38:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 09:38:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 09:38:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 09:38:18 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1010: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:38:20 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1011: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:38:22 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:38:22 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1012: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:38:24 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1013: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:38:26 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1014: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:38:26 np0005464214 ceph-mon[74802]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #48. Immutable memtables: 0.
Oct  1 09:38:26 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:38:26.722658) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  1 09:38:26 np0005464214 ceph-mon[74802]: rocksdb: [db/flush_job.cc:856] [default] [JOB 23] Flushing memtable with next log file: 48
Oct  1 09:38:26 np0005464214 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759325906722817, "job": 23, "event": "flush_started", "num_memtables": 1, "num_entries": 1317, "num_deletes": 251, "total_data_size": 2071375, "memory_usage": 2105840, "flush_reason": "Manual Compaction"}
Oct  1 09:38:26 np0005464214 ceph-mon[74802]: rocksdb: [db/flush_job.cc:885] [default] [JOB 23] Level-0 flush table #49: started
Oct  1 09:38:26 np0005464214 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759325906768194, "cf_name": "default", "job": 23, "event": "table_file_creation", "file_number": 49, "file_size": 2041265, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 19726, "largest_seqno": 21042, "table_properties": {"data_size": 2035024, "index_size": 3508, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1669, "raw_key_size": 12922, "raw_average_key_size": 19, "raw_value_size": 2022541, "raw_average_value_size": 3087, "num_data_blocks": 161, "num_entries": 655, "num_filter_entries": 655, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759325769, "oldest_key_time": 1759325769, "file_creation_time": 1759325906, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e1fb0bd7-c6bf-40f2-8bfb-ffd039289f42", "db_session_id": "NJZTWL88H5HSB4Q4NEC9", "orig_file_number": 49, "seqno_to_time_mapping": "N/A"}}
Oct  1 09:38:26 np0005464214 ceph-mon[74802]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 23] Flush lasted 45582 microseconds, and 7853 cpu microseconds.
Oct  1 09:38:26 np0005464214 ceph-mon[74802]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  1 09:38:26 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:38:26.768256) [db/flush_job.cc:967] [default] [JOB 23] Level-0 flush table #49: 2041265 bytes OK
Oct  1 09:38:26 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:38:26.768284) [db/memtable_list.cc:519] [default] Level-0 commit table #49 started
Oct  1 09:38:26 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:38:26.779923) [db/memtable_list.cc:722] [default] Level-0 commit table #49: memtable #1 done
Oct  1 09:38:26 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:38:26.779938) EVENT_LOG_v1 {"time_micros": 1759325906779933, "job": 23, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  1 09:38:26 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:38:26.779960) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  1 09:38:26 np0005464214 ceph-mon[74802]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 23] Try to delete WAL files size 2065481, prev total WAL file size 2065481, number of live WAL files 2.
Oct  1 09:38:26 np0005464214 ceph-mon[74802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000045.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  1 09:38:26 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:38:26.780832) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031353036' seq:72057594037927935, type:22 .. '7061786F730031373538' seq:0, type:0; will stop at (end)
Oct  1 09:38:26 np0005464214 ceph-mon[74802]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 24] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  1 09:38:26 np0005464214 ceph-mon[74802]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 23 Base level 0, inputs: [49(1993KB)], [47(7348KB)]
Oct  1 09:38:26 np0005464214 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759325906780928, "job": 24, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [49], "files_L6": [47], "score": -1, "input_data_size": 9566169, "oldest_snapshot_seqno": -1}
Oct  1 09:38:26 np0005464214 ceph-mon[74802]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 24] Generated table #50: 4351 keys, 7793979 bytes, temperature: kUnknown
Oct  1 09:38:26 np0005464214 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759325906902509, "cf_name": "default", "job": 24, "event": "table_file_creation", "file_number": 50, "file_size": 7793979, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7763402, "index_size": 18627, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10885, "raw_key_size": 107584, "raw_average_key_size": 24, "raw_value_size": 7683108, "raw_average_value_size": 1765, "num_data_blocks": 780, "num_entries": 4351, "num_filter_entries": 4351, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759324077, "oldest_key_time": 0, "file_creation_time": 1759325906, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e1fb0bd7-c6bf-40f2-8bfb-ffd039289f42", "db_session_id": "NJZTWL88H5HSB4Q4NEC9", "orig_file_number": 50, "seqno_to_time_mapping": "N/A"}}
Oct  1 09:38:26 np0005464214 ceph-mon[74802]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  1 09:38:26 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:38:26.902895) [db/compaction/compaction_job.cc:1663] [default] [JOB 24] Compacted 1@0 + 1@6 files to L6 => 7793979 bytes
Oct  1 09:38:26 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:38:26.910579) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 78.7 rd, 64.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.9, 7.2 +0.0 blob) out(7.4 +0.0 blob), read-write-amplify(8.5) write-amplify(3.8) OK, records in: 4865, records dropped: 514 output_compression: NoCompression
Oct  1 09:38:26 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:38:26.910623) EVENT_LOG_v1 {"time_micros": 1759325906910604, "job": 24, "event": "compaction_finished", "compaction_time_micros": 121530, "compaction_time_cpu_micros": 21482, "output_level": 6, "num_output_files": 1, "total_output_size": 7793979, "num_input_records": 4865, "num_output_records": 4351, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  1 09:38:26 np0005464214 ceph-mon[74802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000049.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  1 09:38:26 np0005464214 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759325906911454, "job": 24, "event": "table_file_deletion", "file_number": 49}
Oct  1 09:38:26 np0005464214 ceph-mon[74802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000047.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  1 09:38:26 np0005464214 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759325906914370, "job": 24, "event": "table_file_deletion", "file_number": 47}
Oct  1 09:38:26 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:38:26.780582) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 09:38:26 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:38:26.914430) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 09:38:26 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:38:26.914436) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 09:38:26 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:38:26.914438) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 09:38:26 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:38:26.914439) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 09:38:26 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:38:26.914441) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 09:38:27 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:38:28 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1015: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:38:30 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1016: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:38:32 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:38:32 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1017: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:38:34 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1018: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:38:36 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1019: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:38:37 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:38:38 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1020: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:38:40 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1021: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:38:42 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:38:42 np0005464214 podman[268853]: 2025-10-01 13:38:42.544615668 +0000 UTC m=+0.078985527 container health_status dfa509645741d50db256c392f2c7e50ef8a1653f296eb853aefbe3d2ba4746c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20250923, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Oct  1 09:38:42 np0005464214 podman[268851]: 2025-10-01 13:38:42.549814033 +0000 UTC m=+0.099356396 container health_status a1dc94033b3cdaf0c1939938a446bc6911aa0e2bf0fa0c22c3e618390e8d96e1 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=36bccb96575468ec919301205d8daa2c, managed_by=edpm_ansible, org.label-schema.build-date=20250923, container_name=multipathd)
Oct  1 09:38:42 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1022: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:38:42 np0005464214 podman[268852]: 2025-10-01 13:38:42.565343017 +0000 UTC m=+0.103485206 container health_status c13c85560319b246b9406aed1b6b3015b2c424d3ffff37d0301b6634f5976b0d (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=36bccb96575468ec919301205d8daa2c, org.label-schema.build-date=20250923, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=iscsid)
Oct  1 09:38:42 np0005464214 podman[268850]: 2025-10-01 13:38:42.567350432 +0000 UTC m=+0.110709217 container health_status 583cde33fa111e3f81ff9a7adf51b7bee43cd123d54d60383b2d474b3b5066ad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20250923, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_id=ovn_controller)
Oct  1 09:38:44 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1023: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:38:46 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1024: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:38:47 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:38:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 09:38:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 09:38:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 09:38:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 09:38:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 09:38:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 09:38:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] Optimize plan auto_2025-10-01_13:38:47
Oct  1 09:38:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  1 09:38:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] do_upmap
Oct  1 09:38:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] pools ['.mgr', 'backups', 'default.rgw.meta', 'volumes', 'default.rgw.log', 'vms', 'images', 'default.rgw.control', 'cephfs.cephfs.data', '.rgw.root', 'cephfs.cephfs.meta']
Oct  1 09:38:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] prepared 0/10 changes
Oct  1 09:38:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  1 09:38:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  1 09:38:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  1 09:38:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  1 09:38:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  1 09:38:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  1 09:38:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  1 09:38:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  1 09:38:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  1 09:38:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  1 09:38:48 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1025: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:38:48 np0005464214 ceph-mgr[75103]: client.0 ms_handle_reset on v2:192.168.122.100:6800/2102413293
Oct  1 09:38:49 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  1 09:38:49 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  1 09:38:49 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  1 09:38:49 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  1 09:38:49 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  1 09:38:49 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:38:49 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev f0a1ee33-861d-4a9d-90e8-6ffea26099de does not exist
Oct  1 09:38:49 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev 61949868-af6d-42ed-9d9e-5d713c80bfba does not exist
Oct  1 09:38:49 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev 68073741-b2c9-49da-8159-19d92e77b2bd does not exist
Oct  1 09:38:49 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  1 09:38:49 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  1 09:38:49 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  1 09:38:49 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  1 09:38:49 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  1 09:38:49 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  1 09:38:50 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  1 09:38:50 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:38:50 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  1 09:38:50 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1026: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:38:50 np0005464214 podman[269203]: 2025-10-01 13:38:50.619792913 +0000 UTC m=+0.028115357 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 09:38:50 np0005464214 podman[269203]: 2025-10-01 13:38:50.828874913 +0000 UTC m=+0.237197347 container create bc355413002bd499324a887fdf26dff49debf14ed1b9ff9f52c8f87ddbf16070 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_fermi, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True)
Oct  1 09:38:50 np0005464214 systemd[1]: Started libpod-conmon-bc355413002bd499324a887fdf26dff49debf14ed1b9ff9f52c8f87ddbf16070.scope.
Oct  1 09:38:50 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:38:51 np0005464214 podman[269203]: 2025-10-01 13:38:51.0900493 +0000 UTC m=+0.498371824 container init bc355413002bd499324a887fdf26dff49debf14ed1b9ff9f52c8f87ddbf16070 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_fermi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 09:38:51 np0005464214 podman[269203]: 2025-10-01 13:38:51.100492663 +0000 UTC m=+0.508815127 container start bc355413002bd499324a887fdf26dff49debf14ed1b9ff9f52c8f87ddbf16070 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_fermi, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 09:38:51 np0005464214 xenodochial_fermi[269219]: 167 167
Oct  1 09:38:51 np0005464214 systemd[1]: libpod-bc355413002bd499324a887fdf26dff49debf14ed1b9ff9f52c8f87ddbf16070.scope: Deactivated successfully.
Oct  1 09:38:51 np0005464214 podman[269203]: 2025-10-01 13:38:51.119786998 +0000 UTC m=+0.528109532 container attach bc355413002bd499324a887fdf26dff49debf14ed1b9ff9f52c8f87ddbf16070 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_fermi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3)
Oct  1 09:38:51 np0005464214 podman[269203]: 2025-10-01 13:38:51.120394357 +0000 UTC m=+0.528716831 container died bc355413002bd499324a887fdf26dff49debf14ed1b9ff9f52c8f87ddbf16070 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_fermi, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 09:38:51 np0005464214 systemd[1]: var-lib-containers-storage-overlay-05507c08c8b486be2aa641a19272566fad8149c78fc2eb02dfb3b00eadee43e0-merged.mount: Deactivated successfully.
Oct  1 09:38:51 np0005464214 podman[269203]: 2025-10-01 13:38:51.610779255 +0000 UTC m=+1.019101729 container remove bc355413002bd499324a887fdf26dff49debf14ed1b9ff9f52c8f87ddbf16070 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_fermi, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 09:38:51 np0005464214 systemd[1]: libpod-conmon-bc355413002bd499324a887fdf26dff49debf14ed1b9ff9f52c8f87ddbf16070.scope: Deactivated successfully.
Oct  1 09:38:51 np0005464214 podman[269246]: 2025-10-01 13:38:51.844888901 +0000 UTC m=+0.062082218 container create 1e0db75a21f8f56fc8b2ea44a38eb0f1b76ac33688a8fe9f1e9b4a5978996cc1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_germain, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct  1 09:38:51 np0005464214 systemd[1]: Started libpod-conmon-1e0db75a21f8f56fc8b2ea44a38eb0f1b76ac33688a8fe9f1e9b4a5978996cc1.scope.
Oct  1 09:38:51 np0005464214 podman[269246]: 2025-10-01 13:38:51.816231819 +0000 UTC m=+0.033425226 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 09:38:51 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:38:51 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a1146e281dc32e517ec5b4dc6db094bb5c86c551329e2b6b0cf2d0f1d57785cb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 09:38:51 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a1146e281dc32e517ec5b4dc6db094bb5c86c551329e2b6b0cf2d0f1d57785cb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 09:38:51 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a1146e281dc32e517ec5b4dc6db094bb5c86c551329e2b6b0cf2d0f1d57785cb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 09:38:51 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a1146e281dc32e517ec5b4dc6db094bb5c86c551329e2b6b0cf2d0f1d57785cb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 09:38:51 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a1146e281dc32e517ec5b4dc6db094bb5c86c551329e2b6b0cf2d0f1d57785cb/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  1 09:38:51 np0005464214 podman[269246]: 2025-10-01 13:38:51.949489902 +0000 UTC m=+0.166683309 container init 1e0db75a21f8f56fc8b2ea44a38eb0f1b76ac33688a8fe9f1e9b4a5978996cc1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_germain, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 09:38:51 np0005464214 podman[269246]: 2025-10-01 13:38:51.961259097 +0000 UTC m=+0.178452444 container start 1e0db75a21f8f56fc8b2ea44a38eb0f1b76ac33688a8fe9f1e9b4a5978996cc1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_germain, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Oct  1 09:38:51 np0005464214 podman[269246]: 2025-10-01 13:38:51.968293492 +0000 UTC m=+0.185486839 container attach 1e0db75a21f8f56fc8b2ea44a38eb0f1b76ac33688a8fe9f1e9b4a5978996cc1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_germain, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Oct  1 09:38:52 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:38:52 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1027: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:38:53 np0005464214 trusting_germain[269263]: --> passed data devices: 0 physical, 3 LVM
Oct  1 09:38:53 np0005464214 trusting_germain[269263]: --> relative data size: 1.0
Oct  1 09:38:53 np0005464214 trusting_germain[269263]: --> All data devices are unavailable
Oct  1 09:38:53 np0005464214 systemd[1]: libpod-1e0db75a21f8f56fc8b2ea44a38eb0f1b76ac33688a8fe9f1e9b4a5978996cc1.scope: Deactivated successfully.
Oct  1 09:38:53 np0005464214 systemd[1]: libpod-1e0db75a21f8f56fc8b2ea44a38eb0f1b76ac33688a8fe9f1e9b4a5978996cc1.scope: Consumed 1.239s CPU time.
Oct  1 09:38:53 np0005464214 podman[269246]: 2025-10-01 13:38:53.25526918 +0000 UTC m=+1.472462537 container died 1e0db75a21f8f56fc8b2ea44a38eb0f1b76ac33688a8fe9f1e9b4a5978996cc1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_germain, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 09:38:53 np0005464214 systemd[1]: var-lib-containers-storage-overlay-a1146e281dc32e517ec5b4dc6db094bb5c86c551329e2b6b0cf2d0f1d57785cb-merged.mount: Deactivated successfully.
Oct  1 09:38:53 np0005464214 podman[269246]: 2025-10-01 13:38:53.343080687 +0000 UTC m=+1.560274044 container remove 1e0db75a21f8f56fc8b2ea44a38eb0f1b76ac33688a8fe9f1e9b4a5978996cc1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_germain, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef)
Oct  1 09:38:53 np0005464214 systemd[1]: libpod-conmon-1e0db75a21f8f56fc8b2ea44a38eb0f1b76ac33688a8fe9f1e9b4a5978996cc1.scope: Deactivated successfully.
Oct  1 09:38:54 np0005464214 podman[269447]: 2025-10-01 13:38:54.111572913 +0000 UTC m=+0.062248034 container create 6e69490bed35b07d3ccd29e72315aa9283b2d7b09a2ea760185febca278527be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_fermi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 09:38:54 np0005464214 systemd[1]: Started libpod-conmon-6e69490bed35b07d3ccd29e72315aa9283b2d7b09a2ea760185febca278527be.scope.
Oct  1 09:38:54 np0005464214 podman[269447]: 2025-10-01 13:38:54.088860019 +0000 UTC m=+0.039535240 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 09:38:54 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:38:54 np0005464214 podman[269447]: 2025-10-01 13:38:54.202198779 +0000 UTC m=+0.152873950 container init 6e69490bed35b07d3ccd29e72315aa9283b2d7b09a2ea760185febca278527be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_fermi, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 09:38:54 np0005464214 podman[269447]: 2025-10-01 13:38:54.214017525 +0000 UTC m=+0.164692646 container start 6e69490bed35b07d3ccd29e72315aa9283b2d7b09a2ea760185febca278527be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_fermi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Oct  1 09:38:54 np0005464214 eloquent_fermi[269464]: 167 167
Oct  1 09:38:54 np0005464214 podman[269447]: 2025-10-01 13:38:54.219124488 +0000 UTC m=+0.169799679 container attach 6e69490bed35b07d3ccd29e72315aa9283b2d7b09a2ea760185febca278527be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_fermi, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct  1 09:38:54 np0005464214 systemd[1]: libpod-6e69490bed35b07d3ccd29e72315aa9283b2d7b09a2ea760185febca278527be.scope: Deactivated successfully.
Oct  1 09:38:54 np0005464214 podman[269447]: 2025-10-01 13:38:54.220950886 +0000 UTC m=+0.171626007 container died 6e69490bed35b07d3ccd29e72315aa9283b2d7b09a2ea760185febca278527be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_fermi, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Oct  1 09:38:54 np0005464214 systemd[1]: var-lib-containers-storage-overlay-86f7aba451db1d896ab3f75e164b65473f228118dbe10f323e89e39a07814e86-merged.mount: Deactivated successfully.
Oct  1 09:38:54 np0005464214 podman[269447]: 2025-10-01 13:38:54.268633655 +0000 UTC m=+0.219308786 container remove 6e69490bed35b07d3ccd29e72315aa9283b2d7b09a2ea760185febca278527be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_fermi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 09:38:54 np0005464214 systemd[1]: libpod-conmon-6e69490bed35b07d3ccd29e72315aa9283b2d7b09a2ea760185febca278527be.scope: Deactivated successfully.
Oct  1 09:38:54 np0005464214 podman[269487]: 2025-10-01 13:38:54.482808086 +0000 UTC m=+0.063761172 container create 9e7452882eba2e42deedc188dd2d267a380bc68db9684e897c6fe67b387c4b4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_nobel, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3)
Oct  1 09:38:54 np0005464214 systemd[1]: Started libpod-conmon-9e7452882eba2e42deedc188dd2d267a380bc68db9684e897c6fe67b387c4b4a.scope.
Oct  1 09:38:54 np0005464214 podman[269487]: 2025-10-01 13:38:54.45561673 +0000 UTC m=+0.036569876 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 09:38:54 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1028: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:38:54 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:38:54 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5faf69f261c71f40c38438e5d8c21bc96a201e7a17dc110c4368efe57d939b7f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 09:38:54 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5faf69f261c71f40c38438e5d8c21bc96a201e7a17dc110c4368efe57d939b7f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 09:38:54 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5faf69f261c71f40c38438e5d8c21bc96a201e7a17dc110c4368efe57d939b7f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 09:38:54 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5faf69f261c71f40c38438e5d8c21bc96a201e7a17dc110c4368efe57d939b7f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 09:38:54 np0005464214 podman[269487]: 2025-10-01 13:38:54.586416475 +0000 UTC m=+0.167369571 container init 9e7452882eba2e42deedc188dd2d267a380bc68db9684e897c6fe67b387c4b4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_nobel, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef)
Oct  1 09:38:54 np0005464214 podman[269487]: 2025-10-01 13:38:54.602886091 +0000 UTC m=+0.183839147 container start 9e7452882eba2e42deedc188dd2d267a380bc68db9684e897c6fe67b387c4b4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_nobel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 09:38:54 np0005464214 podman[269487]: 2025-10-01 13:38:54.60788687 +0000 UTC m=+0.188840046 container attach 9e7452882eba2e42deedc188dd2d267a380bc68db9684e897c6fe67b387c4b4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_nobel, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 09:38:55 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  1 09:38:55 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3256487368' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  1 09:38:55 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  1 09:38:55 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3256487368' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  1 09:38:55 np0005464214 determined_nobel[269503]: {
Oct  1 09:38:55 np0005464214 determined_nobel[269503]:    "0": [
Oct  1 09:38:55 np0005464214 determined_nobel[269503]:        {
Oct  1 09:38:55 np0005464214 determined_nobel[269503]:            "devices": [
Oct  1 09:38:55 np0005464214 determined_nobel[269503]:                "/dev/loop3"
Oct  1 09:38:55 np0005464214 determined_nobel[269503]:            ],
Oct  1 09:38:55 np0005464214 determined_nobel[269503]:            "lv_name": "ceph_lv0",
Oct  1 09:38:55 np0005464214 determined_nobel[269503]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  1 09:38:55 np0005464214 determined_nobel[269503]:            "lv_size": "21470642176",
Oct  1 09:38:55 np0005464214 determined_nobel[269503]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 09:38:55 np0005464214 determined_nobel[269503]:            "lv_uuid": "wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS",
Oct  1 09:38:55 np0005464214 determined_nobel[269503]:            "name": "ceph_lv0",
Oct  1 09:38:55 np0005464214 determined_nobel[269503]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  1 09:38:55 np0005464214 determined_nobel[269503]:            "tags": {
Oct  1 09:38:55 np0005464214 determined_nobel[269503]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  1 09:38:55 np0005464214 determined_nobel[269503]:                "ceph.block_uuid": "wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS",
Oct  1 09:38:55 np0005464214 determined_nobel[269503]:                "ceph.cephx_lockbox_secret": "",
Oct  1 09:38:55 np0005464214 determined_nobel[269503]:                "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 09:38:55 np0005464214 determined_nobel[269503]:                "ceph.cluster_name": "ceph",
Oct  1 09:38:55 np0005464214 determined_nobel[269503]:                "ceph.crush_device_class": "",
Oct  1 09:38:55 np0005464214 determined_nobel[269503]:                "ceph.encrypted": "0",
Oct  1 09:38:55 np0005464214 determined_nobel[269503]:                "ceph.osd_fsid": "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982",
Oct  1 09:38:55 np0005464214 determined_nobel[269503]:                "ceph.osd_id": "0",
Oct  1 09:38:55 np0005464214 determined_nobel[269503]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 09:38:55 np0005464214 determined_nobel[269503]:                "ceph.type": "block",
Oct  1 09:38:55 np0005464214 determined_nobel[269503]:                "ceph.vdo": "0"
Oct  1 09:38:55 np0005464214 determined_nobel[269503]:            },
Oct  1 09:38:55 np0005464214 determined_nobel[269503]:            "type": "block",
Oct  1 09:38:55 np0005464214 determined_nobel[269503]:            "vg_name": "ceph_vg0"
Oct  1 09:38:55 np0005464214 determined_nobel[269503]:        }
Oct  1 09:38:55 np0005464214 determined_nobel[269503]:    ],
Oct  1 09:38:55 np0005464214 determined_nobel[269503]:    "1": [
Oct  1 09:38:55 np0005464214 determined_nobel[269503]:        {
Oct  1 09:38:55 np0005464214 determined_nobel[269503]:            "devices": [
Oct  1 09:38:55 np0005464214 determined_nobel[269503]:                "/dev/loop4"
Oct  1 09:38:55 np0005464214 determined_nobel[269503]:            ],
Oct  1 09:38:55 np0005464214 determined_nobel[269503]:            "lv_name": "ceph_lv1",
Oct  1 09:38:55 np0005464214 determined_nobel[269503]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  1 09:38:55 np0005464214 determined_nobel[269503]:            "lv_size": "21470642176",
Oct  1 09:38:55 np0005464214 determined_nobel[269503]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f5852bc7-e830-489a-b8a9-42dfbbe71426,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 09:38:55 np0005464214 determined_nobel[269503]:            "lv_uuid": "BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY",
Oct  1 09:38:55 np0005464214 determined_nobel[269503]:            "name": "ceph_lv1",
Oct  1 09:38:55 np0005464214 determined_nobel[269503]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  1 09:38:55 np0005464214 determined_nobel[269503]:            "tags": {
Oct  1 09:38:55 np0005464214 determined_nobel[269503]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  1 09:38:55 np0005464214 determined_nobel[269503]:                "ceph.block_uuid": "BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY",
Oct  1 09:38:55 np0005464214 determined_nobel[269503]:                "ceph.cephx_lockbox_secret": "",
Oct  1 09:38:55 np0005464214 determined_nobel[269503]:                "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 09:38:55 np0005464214 determined_nobel[269503]:                "ceph.cluster_name": "ceph",
Oct  1 09:38:55 np0005464214 determined_nobel[269503]:                "ceph.crush_device_class": "",
Oct  1 09:38:55 np0005464214 determined_nobel[269503]:                "ceph.encrypted": "0",
Oct  1 09:38:55 np0005464214 determined_nobel[269503]:                "ceph.osd_fsid": "f5852bc7-e830-489a-b8a9-42dfbbe71426",
Oct  1 09:38:55 np0005464214 determined_nobel[269503]:                "ceph.osd_id": "1",
Oct  1 09:38:55 np0005464214 determined_nobel[269503]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 09:38:55 np0005464214 determined_nobel[269503]:                "ceph.type": "block",
Oct  1 09:38:55 np0005464214 determined_nobel[269503]:                "ceph.vdo": "0"
Oct  1 09:38:55 np0005464214 determined_nobel[269503]:            },
Oct  1 09:38:55 np0005464214 determined_nobel[269503]:            "type": "block",
Oct  1 09:38:55 np0005464214 determined_nobel[269503]:            "vg_name": "ceph_vg1"
Oct  1 09:38:55 np0005464214 determined_nobel[269503]:        }
Oct  1 09:38:55 np0005464214 determined_nobel[269503]:    ],
Oct  1 09:38:55 np0005464214 determined_nobel[269503]:    "2": [
Oct  1 09:38:55 np0005464214 determined_nobel[269503]:        {
Oct  1 09:38:55 np0005464214 determined_nobel[269503]:            "devices": [
Oct  1 09:38:55 np0005464214 determined_nobel[269503]:                "/dev/loop5"
Oct  1 09:38:55 np0005464214 determined_nobel[269503]:            ],
Oct  1 09:38:55 np0005464214 determined_nobel[269503]:            "lv_name": "ceph_lv2",
Oct  1 09:38:55 np0005464214 determined_nobel[269503]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  1 09:38:55 np0005464214 determined_nobel[269503]:            "lv_size": "21470642176",
Oct  1 09:38:55 np0005464214 determined_nobel[269503]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c4c937e2-a8a8-47c3-af37-fdedb6fff1f9,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 09:38:55 np0005464214 determined_nobel[269503]:            "lv_uuid": "mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK",
Oct  1 09:38:55 np0005464214 determined_nobel[269503]:            "name": "ceph_lv2",
Oct  1 09:38:55 np0005464214 determined_nobel[269503]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  1 09:38:55 np0005464214 determined_nobel[269503]:            "tags": {
Oct  1 09:38:55 np0005464214 determined_nobel[269503]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  1 09:38:55 np0005464214 determined_nobel[269503]:                "ceph.block_uuid": "mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK",
Oct  1 09:38:55 np0005464214 determined_nobel[269503]:                "ceph.cephx_lockbox_secret": "",
Oct  1 09:38:55 np0005464214 determined_nobel[269503]:                "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 09:38:55 np0005464214 determined_nobel[269503]:                "ceph.cluster_name": "ceph",
Oct  1 09:38:55 np0005464214 determined_nobel[269503]:                "ceph.crush_device_class": "",
Oct  1 09:38:55 np0005464214 determined_nobel[269503]:                "ceph.encrypted": "0",
Oct  1 09:38:55 np0005464214 determined_nobel[269503]:                "ceph.osd_fsid": "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9",
Oct  1 09:38:55 np0005464214 determined_nobel[269503]:                "ceph.osd_id": "2",
Oct  1 09:38:55 np0005464214 determined_nobel[269503]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 09:38:55 np0005464214 determined_nobel[269503]:                "ceph.type": "block",
Oct  1 09:38:55 np0005464214 determined_nobel[269503]:                "ceph.vdo": "0"
Oct  1 09:38:55 np0005464214 determined_nobel[269503]:            },
Oct  1 09:38:55 np0005464214 determined_nobel[269503]:            "type": "block",
Oct  1 09:38:55 np0005464214 determined_nobel[269503]:            "vg_name": "ceph_vg2"
Oct  1 09:38:55 np0005464214 determined_nobel[269503]:        }
Oct  1 09:38:55 np0005464214 determined_nobel[269503]:    ]
Oct  1 09:38:55 np0005464214 determined_nobel[269503]: }
Oct  1 09:38:55 np0005464214 systemd[1]: libpod-9e7452882eba2e42deedc188dd2d267a380bc68db9684e897c6fe67b387c4b4a.scope: Deactivated successfully.
Oct  1 09:38:55 np0005464214 podman[269487]: 2025-10-01 13:38:55.425596543 +0000 UTC m=+1.006549649 container died 9e7452882eba2e42deedc188dd2d267a380bc68db9684e897c6fe67b387c4b4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_nobel, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 09:38:55 np0005464214 systemd[1]: var-lib-containers-storage-overlay-5faf69f261c71f40c38438e5d8c21bc96a201e7a17dc110c4368efe57d939b7f-merged.mount: Deactivated successfully.
Oct  1 09:38:55 np0005464214 podman[269487]: 2025-10-01 13:38:55.493896218 +0000 UTC m=+1.074849274 container remove 9e7452882eba2e42deedc188dd2d267a380bc68db9684e897c6fe67b387c4b4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_nobel, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 09:38:55 np0005464214 systemd[1]: libpod-conmon-9e7452882eba2e42deedc188dd2d267a380bc68db9684e897c6fe67b387c4b4a.scope: Deactivated successfully.
Oct  1 09:38:56 np0005464214 podman[269668]: 2025-10-01 13:38:56.279651274 +0000 UTC m=+0.045669016 container create c57f60b7112893fe2d3b5a32bd28cb7b9cd7d5102c93bb33d101a416ab46d968 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_murdock, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Oct  1 09:38:56 np0005464214 systemd[1]: Started libpod-conmon-c57f60b7112893fe2d3b5a32bd28cb7b9cd7d5102c93bb33d101a416ab46d968.scope.
Oct  1 09:38:56 np0005464214 podman[269668]: 2025-10-01 13:38:56.259308386 +0000 UTC m=+0.025326178 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 09:38:56 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:38:56 np0005464214 podman[269668]: 2025-10-01 13:38:56.377386457 +0000 UTC m=+0.143404239 container init c57f60b7112893fe2d3b5a32bd28cb7b9cd7d5102c93bb33d101a416ab46d968 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_murdock, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Oct  1 09:38:56 np0005464214 podman[269668]: 2025-10-01 13:38:56.387895051 +0000 UTC m=+0.153912833 container start c57f60b7112893fe2d3b5a32bd28cb7b9cd7d5102c93bb33d101a416ab46d968 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_murdock, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct  1 09:38:56 np0005464214 podman[269668]: 2025-10-01 13:38:56.393786109 +0000 UTC m=+0.159803891 container attach c57f60b7112893fe2d3b5a32bd28cb7b9cd7d5102c93bb33d101a416ab46d968 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_murdock, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct  1 09:38:56 np0005464214 friendly_murdock[269685]: 167 167
Oct  1 09:38:56 np0005464214 systemd[1]: libpod-c57f60b7112893fe2d3b5a32bd28cb7b9cd7d5102c93bb33d101a416ab46d968.scope: Deactivated successfully.
Oct  1 09:38:56 np0005464214 podman[269668]: 2025-10-01 13:38:56.396976971 +0000 UTC m=+0.162994753 container died c57f60b7112893fe2d3b5a32bd28cb7b9cd7d5102c93bb33d101a416ab46d968 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_murdock, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Oct  1 09:38:56 np0005464214 systemd[1]: var-lib-containers-storage-overlay-794a62d10ff35c590681dd9fa0fe7f7b8c350f361763087ca1618052fc1ff6bb-merged.mount: Deactivated successfully.
Oct  1 09:38:56 np0005464214 podman[269668]: 2025-10-01 13:38:56.449949038 +0000 UTC m=+0.215966830 container remove c57f60b7112893fe2d3b5a32bd28cb7b9cd7d5102c93bb33d101a416ab46d968 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_murdock, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 09:38:56 np0005464214 systemd[1]: libpod-conmon-c57f60b7112893fe2d3b5a32bd28cb7b9cd7d5102c93bb33d101a416ab46d968.scope: Deactivated successfully.
Oct  1 09:38:56 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1029: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:38:56 np0005464214 podman[269709]: 2025-10-01 13:38:56.745568813 +0000 UTC m=+0.105430719 container create 646aa90c61aa3e5508093cea153c350680efe39e04b17e27be6b0f6592d5af1f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_gates, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct  1 09:38:56 np0005464214 podman[269709]: 2025-10-01 13:38:56.68862598 +0000 UTC m=+0.048487936 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 09:38:56 np0005464214 systemd[1]: Started libpod-conmon-646aa90c61aa3e5508093cea153c350680efe39e04b17e27be6b0f6592d5af1f.scope.
Oct  1 09:38:56 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:38:56 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9bab4be0ac48b0add178604103be54b0d1e085e6b13b75e08ca3155176ebbe63/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 09:38:56 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9bab4be0ac48b0add178604103be54b0d1e085e6b13b75e08ca3155176ebbe63/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 09:38:56 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9bab4be0ac48b0add178604103be54b0d1e085e6b13b75e08ca3155176ebbe63/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 09:38:56 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9bab4be0ac48b0add178604103be54b0d1e085e6b13b75e08ca3155176ebbe63/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 09:38:57 np0005464214 podman[269709]: 2025-10-01 13:38:57.051872349 +0000 UTC m=+0.411734255 container init 646aa90c61aa3e5508093cea153c350680efe39e04b17e27be6b0f6592d5af1f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_gates, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 09:38:57 np0005464214 podman[269709]: 2025-10-01 13:38:57.063765728 +0000 UTC m=+0.423627634 container start 646aa90c61aa3e5508093cea153c350680efe39e04b17e27be6b0f6592d5af1f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_gates, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Oct  1 09:38:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] _maybe_adjust
Oct  1 09:38:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:38:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  1 09:38:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:38:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 09:38:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:38:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 09:38:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:38:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 09:38:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:38:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 09:38:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:38:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct  1 09:38:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:38:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 09:38:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:38:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  1 09:38:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:38:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  1 09:38:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:38:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 09:38:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:38:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  1 09:38:57 np0005464214 podman[269709]: 2025-10-01 13:38:57.260373828 +0000 UTC m=+0.620235714 container attach 646aa90c61aa3e5508093cea153c350680efe39e04b17e27be6b0f6592d5af1f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_gates, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 09:38:57 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:38:58 np0005464214 quirky_gates[269726]: {
Oct  1 09:38:58 np0005464214 quirky_gates[269726]:    "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982": {
Oct  1 09:38:58 np0005464214 quirky_gates[269726]:        "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 09:38:58 np0005464214 quirky_gates[269726]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  1 09:38:58 np0005464214 quirky_gates[269726]:        "osd_id": 0,
Oct  1 09:38:58 np0005464214 quirky_gates[269726]:        "osd_uuid": "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982",
Oct  1 09:38:58 np0005464214 quirky_gates[269726]:        "type": "bluestore"
Oct  1 09:38:58 np0005464214 quirky_gates[269726]:    },
Oct  1 09:38:58 np0005464214 quirky_gates[269726]:    "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9": {
Oct  1 09:38:58 np0005464214 quirky_gates[269726]:        "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 09:38:58 np0005464214 quirky_gates[269726]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  1 09:38:58 np0005464214 quirky_gates[269726]:        "osd_id": 2,
Oct  1 09:38:58 np0005464214 quirky_gates[269726]:        "osd_uuid": "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9",
Oct  1 09:38:58 np0005464214 quirky_gates[269726]:        "type": "bluestore"
Oct  1 09:38:58 np0005464214 quirky_gates[269726]:    },
Oct  1 09:38:58 np0005464214 quirky_gates[269726]:    "f5852bc7-e830-489a-b8a9-42dfbbe71426": {
Oct  1 09:38:58 np0005464214 quirky_gates[269726]:        "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 09:38:58 np0005464214 quirky_gates[269726]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  1 09:38:58 np0005464214 quirky_gates[269726]:        "osd_id": 1,
Oct  1 09:38:58 np0005464214 quirky_gates[269726]:        "osd_uuid": "f5852bc7-e830-489a-b8a9-42dfbbe71426",
Oct  1 09:38:58 np0005464214 quirky_gates[269726]:        "type": "bluestore"
Oct  1 09:38:58 np0005464214 quirky_gates[269726]:    }
Oct  1 09:38:58 np0005464214 quirky_gates[269726]: }
Oct  1 09:38:58 np0005464214 systemd[1]: libpod-646aa90c61aa3e5508093cea153c350680efe39e04b17e27be6b0f6592d5af1f.scope: Deactivated successfully.
Oct  1 09:38:58 np0005464214 systemd[1]: libpod-646aa90c61aa3e5508093cea153c350680efe39e04b17e27be6b0f6592d5af1f.scope: Consumed 1.042s CPU time.
Oct  1 09:38:58 np0005464214 podman[269709]: 2025-10-01 13:38:58.09936282 +0000 UTC m=+1.459224696 container died 646aa90c61aa3e5508093cea153c350680efe39e04b17e27be6b0f6592d5af1f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_gates, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 09:38:58 np0005464214 systemd[1]: var-lib-containers-storage-overlay-9bab4be0ac48b0add178604103be54b0d1e085e6b13b75e08ca3155176ebbe63-merged.mount: Deactivated successfully.
Oct  1 09:38:58 np0005464214 podman[269709]: 2025-10-01 13:38:58.257553228 +0000 UTC m=+1.617415144 container remove 646aa90c61aa3e5508093cea153c350680efe39e04b17e27be6b0f6592d5af1f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_gates, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 09:38:58 np0005464214 systemd[1]: libpod-conmon-646aa90c61aa3e5508093cea153c350680efe39e04b17e27be6b0f6592d5af1f.scope: Deactivated successfully.
Oct  1 09:38:58 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  1 09:38:58 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:38:58 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  1 09:38:58 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:38:58 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev 60a1e59f-8b38-49e3-93bb-f15c0c3a694f does not exist
Oct  1 09:38:58 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev bec68e06-672c-4338-a826-173fbd918c67 does not exist
Oct  1 09:38:58 np0005464214 nova_compute[260022]: 2025-10-01 13:38:58.344 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 09:38:58 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1030: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:38:59 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:38:59 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:38:59 np0005464214 nova_compute[260022]: 2025-10-01 13:38:59.344 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 09:38:59 np0005464214 nova_compute[260022]: 2025-10-01 13:38:59.366 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 09:38:59 np0005464214 nova_compute[260022]: 2025-10-01 13:38:59.367 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 09:38:59 np0005464214 nova_compute[260022]: 2025-10-01 13:38:59.368 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 09:38:59 np0005464214 nova_compute[260022]: 2025-10-01 13:38:59.368 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  1 09:38:59 np0005464214 nova_compute[260022]: 2025-10-01 13:38:59.368 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 09:38:59 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  1 09:38:59 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2772859159' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  1 09:38:59 np0005464214 nova_compute[260022]: 2025-10-01 13:38:59.829 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 09:38:59 np0005464214 nova_compute[260022]: 2025-10-01 13:38:59.994 2 WARNING nova.virt.libvirt.driver [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  1 09:38:59 np0005464214 nova_compute[260022]: 2025-10-01 13:38:59.995 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5147MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  1 09:38:59 np0005464214 nova_compute[260022]: 2025-10-01 13:38:59.996 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 09:38:59 np0005464214 nova_compute[260022]: 2025-10-01 13:38:59.996 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 09:39:00 np0005464214 nova_compute[260022]: 2025-10-01 13:39:00.063 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  1 09:39:00 np0005464214 nova_compute[260022]: 2025-10-01 13:39:00.064 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  1 09:39:00 np0005464214 nova_compute[260022]: 2025-10-01 13:39:00.077 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 09:39:00 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  1 09:39:00 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1478792039' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  1 09:39:00 np0005464214 nova_compute[260022]: 2025-10-01 13:39:00.505 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.428s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 09:39:00 np0005464214 nova_compute[260022]: 2025-10-01 13:39:00.510 2 DEBUG nova.compute.provider_tree [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Inventory has not changed in ProviderTree for provider: c1b9017d-7e6f-44ea-9ee2-bc19313d736f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  1 09:39:00 np0005464214 nova_compute[260022]: 2025-10-01 13:39:00.531 2 DEBUG nova.scheduler.client.report [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Inventory has not changed for provider c1b9017d-7e6f-44ea-9ee2-bc19313d736f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  1 09:39:00 np0005464214 nova_compute[260022]: 2025-10-01 13:39:00.532 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  1 09:39:00 np0005464214 nova_compute[260022]: 2025-10-01 13:39:00.533 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.537s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 09:39:00 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1031: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:39:01 np0005464214 nova_compute[260022]: 2025-10-01 13:39:01.529 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  1 09:39:01 np0005464214 nova_compute[260022]: 2025-10-01 13:39:01.530 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  1 09:39:01 np0005464214 nova_compute[260022]: 2025-10-01 13:39:01.530 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct  1 09:39:01 np0005464214 nova_compute[260022]: 2025-10-01 13:39:01.530 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct  1 09:39:01 np0005464214 nova_compute[260022]: 2025-10-01 13:39:01.555 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct  1 09:39:01 np0005464214 nova_compute[260022]: 2025-10-01 13:39:01.556 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  1 09:39:01 np0005464214 nova_compute[260022]: 2025-10-01 13:39:01.557 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  1 09:39:01 np0005464214 nova_compute[260022]: 2025-10-01 13:39:01.557 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  1 09:39:02 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:39:02 np0005464214 nova_compute[260022]: 2025-10-01 13:39:02.344 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  1 09:39:02 np0005464214 nova_compute[260022]: 2025-10-01 13:39:02.345 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct  1 09:39:02 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1032: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:39:04 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1033: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:39:06 np0005464214 nova_compute[260022]: 2025-10-01 13:39:06.345 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  1 09:39:06 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1034: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:39:07 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:39:08 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1035: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:39:10 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1036: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:39:12 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:39:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:39:12.307 161890 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  1 09:39:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:39:12.308 161890 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  1 09:39:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:39:12.308 161890 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  1 09:39:12 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1037: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:39:13 np0005464214 podman[269867]: 2025-10-01 13:39:13.560785 +0000 UTC m=+0.079631747 container health_status dfa509645741d50db256c392f2c7e50ef8a1653f296eb853aefbe3d2ba4746c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=36bccb96575468ec919301205d8daa2c, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20250923, org.label-schema.schema-version=1.0)
Oct  1 09:39:13 np0005464214 podman[269866]: 2025-10-01 13:39:13.586791339 +0000 UTC m=+0.112980980 container health_status c13c85560319b246b9406aed1b6b3015b2c424d3ffff37d0301b6634f5976b0d (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.schema-version=1.0, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20250923)
Oct  1 09:39:13 np0005464214 podman[269865]: 2025-10-01 13:39:13.597749268 +0000 UTC m=+0.127227944 container health_status a1dc94033b3cdaf0c1939938a446bc6911aa0e2bf0fa0c22c3e618390e8d96e1 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=36bccb96575468ec919301205d8daa2c, org.label-schema.license=GPLv2, org.label-schema.build-date=20250923, org.label-schema.schema-version=1.0, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team)
Oct  1 09:39:13 np0005464214 podman[269864]: 2025-10-01 13:39:13.602588132 +0000 UTC m=+0.132750200 container health_status 583cde33fa111e3f81ff9a7adf51b7bee43cd123d54d60383b2d474b3b5066ad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20250923, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Oct  1 09:39:14 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1038: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:39:16 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1039: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:39:17 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:39:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 09:39:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 09:39:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 09:39:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 09:39:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 09:39:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 09:39:18 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1040: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:39:20 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1041: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:39:22 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:39:22 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1042: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:39:24 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1043: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:39:26 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1044: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:39:27 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:39:28 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1045: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:39:30 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1046: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:39:32 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:39:32 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1047: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:39:34 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1048: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:39:36 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1049: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:39:37 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:39:38 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1050: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:39:40 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1051: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:39:42 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:39:42 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1052: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:39:44 np0005464214 podman[269947]: 2025-10-01 13:39:44.524435656 +0000 UTC m=+0.070504907 container health_status c13c85560319b246b9406aed1b6b3015b2c424d3ffff37d0301b6634f5976b0d (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20250923, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=iscsid, container_name=iscsid, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=36bccb96575468ec919301205d8daa2c, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct  1 09:39:44 np0005464214 podman[269946]: 2025-10-01 13:39:44.530372515 +0000 UTC m=+0.077966625 container health_status a1dc94033b3cdaf0c1939938a446bc6911aa0e2bf0fa0c22c3e618390e8d96e1 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=36bccb96575468ec919301205d8daa2c, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20250923)
Oct  1 09:39:44 np0005464214 podman[269948]: 2025-10-01 13:39:44.534363481 +0000 UTC m=+0.069843285 container health_status dfa509645741d50db256c392f2c7e50ef8a1653f296eb853aefbe3d2ba4746c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=36bccb96575468ec919301205d8daa2c, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20250923, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct  1 09:39:44 np0005464214 podman[269945]: 2025-10-01 13:39:44.56570526 +0000 UTC m=+0.113446994 container health_status 583cde33fa111e3f81ff9a7adf51b7bee43cd123d54d60383b2d474b3b5066ad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250923, tcib_managed=true, config_id=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  1 09:39:44 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1053: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:39:46 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1054: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:39:47 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  1 09:39:47 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 do_prune osdmap full prune enabled
Oct  1 09:39:47 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e135 e135: 3 total, 3 up, 3 in
Oct  1 09:39:47 np0005464214 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e135: 3 total, 3 up, 3 in
Oct  1 09:39:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 09:39:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 09:39:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 09:39:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 09:39:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 09:39:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 09:39:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] Optimize plan auto_2025-10-01_13:39:47
Oct  1 09:39:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  1 09:39:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] do_upmap
Oct  1 09:39:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] pools ['default.rgw.control', '.rgw.root', 'vms', 'images', 'backups', '.mgr', 'default.rgw.meta', 'volumes', 'default.rgw.log', 'cephfs.cephfs.meta', 'cephfs.cephfs.data']
Oct  1 09:39:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] prepared 0/10 changes
Oct  1 09:39:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  1 09:39:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  1 09:39:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  1 09:39:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  1 09:39:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  1 09:39:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  1 09:39:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  1 09:39:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  1 09:39:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  1 09:39:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  1 09:39:48 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1056: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:39:48 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e135 do_prune osdmap full prune enabled
Oct  1 09:39:48 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e136 e136: 3 total, 3 up, 3 in
Oct  1 09:39:48 np0005464214 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e136: 3 total, 3 up, 3 in
Oct  1 09:39:49 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e136 do_prune osdmap full prune enabled
Oct  1 09:39:49 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e137 e137: 3 total, 3 up, 3 in
Oct  1 09:39:49 np0005464214 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e137: 3 total, 3 up, 3 in
Oct  1 09:39:50 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1059: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:39:52 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 09:39:52 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1060: 305 pgs: 305 active+clean; 4.9 MiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 757 KiB/s wr, 2 op/s
Oct  1 09:39:52 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e137 do_prune osdmap full prune enabled
Oct  1 09:39:52 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e138 e138: 3 total, 3 up, 3 in
Oct  1 09:39:52 np0005464214 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e138: 3 total, 3 up, 3 in
Oct  1 09:39:54 np0005464214 nova_compute[260022]: 2025-10-01 13:39:54.345 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  1 09:39:54 np0005464214 nova_compute[260022]: 2025-10-01 13:39:54.346 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Oct  1 09:39:54 np0005464214 nova_compute[260022]: 2025-10-01 13:39:54.363 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Oct  1 09:39:54 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1062: 305 pgs: 305 active+clean; 41 MiB data, 185 MiB used, 60 GiB / 60 GiB avail; 38 KiB/s rd, 6.8 MiB/s wr, 55 op/s
Oct  1 09:39:55 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  1 09:39:55 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1025663506' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  1 09:39:55 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  1 09:39:55 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1025663506' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  1 09:39:55 np0005464214 nova_compute[260022]: 2025-10-01 13:39:55.345 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 09:39:55 np0005464214 ceph-osd[88455]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  1 09:39:55 np0005464214 ceph-osd[88455]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1800.0 total, 600.0 interval#012Cumulative writes: 6047 writes, 24K keys, 6047 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s#012Cumulative WAL: 6047 writes, 1095 syncs, 5.52 writes per sync, written: 0.02 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 297 writes, 645 keys, 297 commit groups, 1.0 writes per commit group, ingest: 0.32 MB, 0.00 MB/s#012Interval WAL: 297 writes, 143 syncs, 2.08 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct  1 09:39:56 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1063: 305 pgs: 305 active+clean; 41 MiB data, 185 MiB used, 60 GiB / 60 GiB avail; 29 KiB/s rd, 5.2 MiB/s wr, 42 op/s
Oct  1 09:39:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] _maybe_adjust
Oct  1 09:39:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:39:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  1 09:39:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:39:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 09:39:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:39:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 09:39:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:39:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 09:39:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:39:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006657947108810315 of space, bias 1.0, pg target 0.19973841326430944 quantized to 32 (current 32)
Oct  1 09:39:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:39:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct  1 09:39:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:39:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 09:39:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:39:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  1 09:39:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:39:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  1 09:39:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:39:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 09:39:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:39:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  1 09:39:57 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 09:39:57 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e138 do_prune osdmap full prune enabled
Oct  1 09:39:57 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e139 e139: 3 total, 3 up, 3 in
Oct  1 09:39:57 np0005464214 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e139: 3 total, 3 up, 3 in
Oct  1 09:39:58 np0005464214 nova_compute[260022]: 2025-10-01 13:39:58.359 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 09:39:58 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1065: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 33 KiB/s rd, 5.1 MiB/s wr, 47 op/s
Oct  1 09:39:59 np0005464214 nova_compute[260022]: 2025-10-01 13:39:59.344 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 09:39:59 np0005464214 nova_compute[260022]: 2025-10-01 13:39:59.367 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 09:39:59 np0005464214 nova_compute[260022]: 2025-10-01 13:39:59.368 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 09:39:59 np0005464214 nova_compute[260022]: 2025-10-01 13:39:59.368 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 09:39:59 np0005464214 nova_compute[260022]: 2025-10-01 13:39:59.369 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  1 09:39:59 np0005464214 nova_compute[260022]: 2025-10-01 13:39:59.369 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 09:39:59 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  1 09:39:59 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4170474545' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  1 09:39:59 np0005464214 nova_compute[260022]: 2025-10-01 13:39:59.846 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 09:40:00 np0005464214 nova_compute[260022]: 2025-10-01 13:40:00.011 2 WARNING nova.virt.libvirt.driver [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  1 09:40:00 np0005464214 nova_compute[260022]: 2025-10-01 13:40:00.012 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5184MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  1 09:40:00 np0005464214 nova_compute[260022]: 2025-10-01 13:40:00.012 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 09:40:00 np0005464214 nova_compute[260022]: 2025-10-01 13:40:00.013 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 09:40:00 np0005464214 nova_compute[260022]: 2025-10-01 13:40:00.149 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  1 09:40:00 np0005464214 nova_compute[260022]: 2025-10-01 13:40:00.150 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  1 09:40:00 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  1 09:40:00 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:40:00 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  1 09:40:00 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:40:00 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  1 09:40:00 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  1 09:40:00 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  1 09:40:00 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  1 09:40:00 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  1 09:40:00 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:40:00 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev 5baddfde-d4dd-4b3f-9dc5-d2f2ed51a5cc does not exist
Oct  1 09:40:00 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev 3b1a0a70-55b3-4f5a-a503-6e4fa0bfd4e0 does not exist
Oct  1 09:40:00 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev 3b0ba2d3-1486-4f7f-908b-a11131ed5358 does not exist
Oct  1 09:40:00 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  1 09:40:00 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  1 09:40:00 np0005464214 nova_compute[260022]: 2025-10-01 13:40:00.235 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 09:40:00 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  1 09:40:00 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  1 09:40:00 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  1 09:40:00 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  1 09:40:00 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1066: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 31 KiB/s rd, 4.6 MiB/s wr, 45 op/s
Oct  1 09:40:00 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  1 09:40:00 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2251561042' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  1 09:40:00 np0005464214 nova_compute[260022]: 2025-10-01 13:40:00.704 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 09:40:00 np0005464214 nova_compute[260022]: 2025-10-01 13:40:00.713 2 DEBUG nova.compute.provider_tree [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Inventory has not changed in ProviderTree for provider: c1b9017d-7e6f-44ea-9ee2-bc19313d736f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  1 09:40:00 np0005464214 nova_compute[260022]: 2025-10-01 13:40:00.736 2 DEBUG nova.scheduler.client.report [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Inventory has not changed for provider c1b9017d-7e6f-44ea-9ee2-bc19313d736f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  1 09:40:00 np0005464214 nova_compute[260022]: 2025-10-01 13:40:00.738 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  1 09:40:00 np0005464214 nova_compute[260022]: 2025-10-01 13:40:00.739 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.726s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 09:40:00 np0005464214 ceph-osd[89484]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  1 09:40:00 np0005464214 ceph-osd[89484]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1800.1 total, 600.0 interval#012Cumulative writes: 7211 writes, 28K keys, 7211 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s#012Cumulative WAL: 7211 writes, 1430 syncs, 5.04 writes per sync, written: 0.02 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 237 writes, 477 keys, 237 commit groups, 1.0 writes per commit group, ingest: 0.24 MB, 0.00 MB/s#012Interval WAL: 237 writes, 110 syncs, 2.15 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct  1 09:40:00 np0005464214 podman[270462]: 2025-10-01 13:40:00.990514583 +0000 UTC m=+0.058136403 container create 2c08dff9c615b9fbf0e5f3a396c2e69718e6ebb3343d77355fde838fc38187a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_agnesi, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef)
Oct  1 09:40:01 np0005464214 systemd[1]: Started libpod-conmon-2c08dff9c615b9fbf0e5f3a396c2e69718e6ebb3343d77355fde838fc38187a2.scope.
Oct  1 09:40:01 np0005464214 podman[270462]: 2025-10-01 13:40:00.963546594 +0000 UTC m=+0.031168474 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 09:40:01 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:40:01 np0005464214 podman[270462]: 2025-10-01 13:40:01.102031704 +0000 UTC m=+0.169653574 container init 2c08dff9c615b9fbf0e5f3a396c2e69718e6ebb3343d77355fde838fc38187a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_agnesi, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 09:40:01 np0005464214 podman[270462]: 2025-10-01 13:40:01.11664478 +0000 UTC m=+0.184266600 container start 2c08dff9c615b9fbf0e5f3a396c2e69718e6ebb3343d77355fde838fc38187a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_agnesi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 09:40:01 np0005464214 podman[270462]: 2025-10-01 13:40:01.121031679 +0000 UTC m=+0.188653509 container attach 2c08dff9c615b9fbf0e5f3a396c2e69718e6ebb3343d77355fde838fc38187a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_agnesi, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 09:40:01 np0005464214 inspiring_agnesi[270478]: 167 167
Oct  1 09:40:01 np0005464214 systemd[1]: libpod-2c08dff9c615b9fbf0e5f3a396c2e69718e6ebb3343d77355fde838fc38187a2.scope: Deactivated successfully.
Oct  1 09:40:01 np0005464214 podman[270462]: 2025-10-01 13:40:01.127602199 +0000 UTC m=+0.195224029 container died 2c08dff9c615b9fbf0e5f3a396c2e69718e6ebb3343d77355fde838fc38187a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_agnesi, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 09:40:01 np0005464214 systemd[1]: var-lib-containers-storage-overlay-89a2945a9388f9532ef8c2e8cc201db7b36584a8eb7ff3c1b903f00f1f0877e8-merged.mount: Deactivated successfully.
Oct  1 09:40:01 np0005464214 podman[270462]: 2025-10-01 13:40:01.185140132 +0000 UTC m=+0.252761922 container remove 2c08dff9c615b9fbf0e5f3a396c2e69718e6ebb3343d77355fde838fc38187a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_agnesi, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 09:40:01 np0005464214 systemd[1]: libpod-conmon-2c08dff9c615b9fbf0e5f3a396c2e69718e6ebb3343d77355fde838fc38187a2.scope: Deactivated successfully.
Oct  1 09:40:01 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:40:01 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:40:01 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  1 09:40:01 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:40:01 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  1 09:40:01 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e139 do_prune osdmap full prune enabled
Oct  1 09:40:01 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e140 e140: 3 total, 3 up, 3 in
Oct  1 09:40:01 np0005464214 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e140: 3 total, 3 up, 3 in
Oct  1 09:40:01 np0005464214 podman[270504]: 2025-10-01 13:40:01.405250042 +0000 UTC m=+0.061829561 container create 0f0b315e3661633fa284795b7d712e824d640b410732be47b9ddd4cffc95942d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_leakey, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 09:40:01 np0005464214 systemd[1]: Started libpod-conmon-0f0b315e3661633fa284795b7d712e824d640b410732be47b9ddd4cffc95942d.scope.
Oct  1 09:40:01 np0005464214 podman[270504]: 2025-10-01 13:40:01.375791523 +0000 UTC m=+0.032371092 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 09:40:01 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:40:01 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac47c09b62d9667cd6bf58a4733b917ea6612ffb0e8367d0ad9d2e111b280077/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 09:40:01 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac47c09b62d9667cd6bf58a4733b917ea6612ffb0e8367d0ad9d2e111b280077/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 09:40:01 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac47c09b62d9667cd6bf58a4733b917ea6612ffb0e8367d0ad9d2e111b280077/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 09:40:01 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac47c09b62d9667cd6bf58a4733b917ea6612ffb0e8367d0ad9d2e111b280077/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 09:40:01 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac47c09b62d9667cd6bf58a4733b917ea6612ffb0e8367d0ad9d2e111b280077/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  1 09:40:01 np0005464214 podman[270504]: 2025-10-01 13:40:01.537205624 +0000 UTC m=+0.193785193 container init 0f0b315e3661633fa284795b7d712e824d640b410732be47b9ddd4cffc95942d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_leakey, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 09:40:01 np0005464214 podman[270504]: 2025-10-01 13:40:01.551878912 +0000 UTC m=+0.208458431 container start 0f0b315e3661633fa284795b7d712e824d640b410732be47b9ddd4cffc95942d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_leakey, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  1 09:40:01 np0005464214 podman[270504]: 2025-10-01 13:40:01.556090166 +0000 UTC m=+0.212669685 container attach 0f0b315e3661633fa284795b7d712e824d640b410732be47b9ddd4cffc95942d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_leakey, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 09:40:02 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 09:40:02 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1068: 305 pgs: 305 active+clean; 29 MiB data, 177 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 1.7 KiB/s wr, 30 op/s
Oct  1 09:40:02 np0005464214 nova_compute[260022]: 2025-10-01 13:40:02.737 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 09:40:02 np0005464214 nova_compute[260022]: 2025-10-01 13:40:02.739 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 09:40:02 np0005464214 nova_compute[260022]: 2025-10-01 13:40:02.739 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  1 09:40:02 np0005464214 nova_compute[260022]: 2025-10-01 13:40:02.740 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  1 09:40:02 np0005464214 nice_leakey[270520]: --> passed data devices: 0 physical, 3 LVM
Oct  1 09:40:02 np0005464214 nice_leakey[270520]: --> relative data size: 1.0
Oct  1 09:40:02 np0005464214 nice_leakey[270520]: --> All data devices are unavailable
Oct  1 09:40:02 np0005464214 nova_compute[260022]: 2025-10-01 13:40:02.759 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  1 09:40:02 np0005464214 nova_compute[260022]: 2025-10-01 13:40:02.760 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 09:40:02 np0005464214 nova_compute[260022]: 2025-10-01 13:40:02.761 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 09:40:02 np0005464214 systemd[1]: libpod-0f0b315e3661633fa284795b7d712e824d640b410732be47b9ddd4cffc95942d.scope: Deactivated successfully.
Oct  1 09:40:02 np0005464214 systemd[1]: libpod-0f0b315e3661633fa284795b7d712e824d640b410732be47b9ddd4cffc95942d.scope: Consumed 1.198s CPU time.
Oct  1 09:40:02 np0005464214 podman[270504]: 2025-10-01 13:40:02.792151704 +0000 UTC m=+1.448731213 container died 0f0b315e3661633fa284795b7d712e824d640b410732be47b9ddd4cffc95942d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_leakey, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Oct  1 09:40:02 np0005464214 systemd[1]: var-lib-containers-storage-overlay-ac47c09b62d9667cd6bf58a4733b917ea6612ffb0e8367d0ad9d2e111b280077-merged.mount: Deactivated successfully.
Oct  1 09:40:02 np0005464214 podman[270504]: 2025-10-01 13:40:02.875450496 +0000 UTC m=+1.532030015 container remove 0f0b315e3661633fa284795b7d712e824d640b410732be47b9ddd4cffc95942d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_leakey, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct  1 09:40:02 np0005464214 systemd[1]: libpod-conmon-0f0b315e3661633fa284795b7d712e824d640b410732be47b9ddd4cffc95942d.scope: Deactivated successfully.
Oct  1 09:40:03 np0005464214 nova_compute[260022]: 2025-10-01 13:40:03.345 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 09:40:03 np0005464214 podman[270704]: 2025-10-01 13:40:03.56757707 +0000 UTC m=+0.061769938 container create b7a9bbf3a6bd05637432c0a7e0cba9e7ee98ae03f8716a0d00e45595d9909f6a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_beaver, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct  1 09:40:03 np0005464214 systemd[1]: Started libpod-conmon-b7a9bbf3a6bd05637432c0a7e0cba9e7ee98ae03f8716a0d00e45595d9909f6a.scope.
Oct  1 09:40:03 np0005464214 podman[270704]: 2025-10-01 13:40:03.540710154 +0000 UTC m=+0.034903102 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 09:40:03 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:40:03 np0005464214 podman[270704]: 2025-10-01 13:40:03.695130822 +0000 UTC m=+0.189323730 container init b7a9bbf3a6bd05637432c0a7e0cba9e7ee98ae03f8716a0d00e45595d9909f6a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_beaver, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 09:40:03 np0005464214 podman[270704]: 2025-10-01 13:40:03.707017401 +0000 UTC m=+0.201210259 container start b7a9bbf3a6bd05637432c0a7e0cba9e7ee98ae03f8716a0d00e45595d9909f6a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_beaver, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True)
Oct  1 09:40:03 np0005464214 zen_beaver[270720]: 167 167
Oct  1 09:40:03 np0005464214 podman[270704]: 2025-10-01 13:40:03.712098563 +0000 UTC m=+0.206291481 container attach b7a9bbf3a6bd05637432c0a7e0cba9e7ee98ae03f8716a0d00e45595d9909f6a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_beaver, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Oct  1 09:40:03 np0005464214 systemd[1]: libpod-b7a9bbf3a6bd05637432c0a7e0cba9e7ee98ae03f8716a0d00e45595d9909f6a.scope: Deactivated successfully.
Oct  1 09:40:03 np0005464214 podman[270704]: 2025-10-01 13:40:03.715847632 +0000 UTC m=+0.210040500 container died b7a9bbf3a6bd05637432c0a7e0cba9e7ee98ae03f8716a0d00e45595d9909f6a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_beaver, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct  1 09:40:03 np0005464214 systemd[1]: var-lib-containers-storage-overlay-e1eb332d16cc4de876a794bac2fb6af2c2ebf3c36cd8ad0980861c793e1fcfec-merged.mount: Deactivated successfully.
Oct  1 09:40:03 np0005464214 podman[270704]: 2025-10-01 13:40:03.778456376 +0000 UTC m=+0.272649254 container remove b7a9bbf3a6bd05637432c0a7e0cba9e7ee98ae03f8716a0d00e45595d9909f6a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_beaver, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  1 09:40:03 np0005464214 systemd[1]: libpod-conmon-b7a9bbf3a6bd05637432c0a7e0cba9e7ee98ae03f8716a0d00e45595d9909f6a.scope: Deactivated successfully.
Oct  1 09:40:04 np0005464214 podman[270745]: 2025-10-01 13:40:04.005136985 +0000 UTC m=+0.068877604 container create 1f25bf45165c50ea0a8cca0bc2b75f600bc6fac7090268826e9d32bc24ac9d9f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_agnesi, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 09:40:04 np0005464214 systemd[1]: Started libpod-conmon-1f25bf45165c50ea0a8cca0bc2b75f600bc6fac7090268826e9d32bc24ac9d9f.scope.
Oct  1 09:40:04 np0005464214 podman[270745]: 2025-10-01 13:40:03.979373384 +0000 UTC m=+0.043114073 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 09:40:04 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:40:04 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/456e18bb3d73fbe2c561d59aabd5a6ab34e2f29d88a3131edbca037c7e6bc189/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 09:40:04 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/456e18bb3d73fbe2c561d59aabd5a6ab34e2f29d88a3131edbca037c7e6bc189/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 09:40:04 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/456e18bb3d73fbe2c561d59aabd5a6ab34e2f29d88a3131edbca037c7e6bc189/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 09:40:04 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/456e18bb3d73fbe2c561d59aabd5a6ab34e2f29d88a3131edbca037c7e6bc189/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 09:40:04 np0005464214 podman[270745]: 2025-10-01 13:40:04.117256736 +0000 UTC m=+0.180997335 container init 1f25bf45165c50ea0a8cca0bc2b75f600bc6fac7090268826e9d32bc24ac9d9f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_agnesi, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Oct  1 09:40:04 np0005464214 podman[270745]: 2025-10-01 13:40:04.129278159 +0000 UTC m=+0.193018758 container start 1f25bf45165c50ea0a8cca0bc2b75f600bc6fac7090268826e9d32bc24ac9d9f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_agnesi, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Oct  1 09:40:04 np0005464214 podman[270745]: 2025-10-01 13:40:04.132571814 +0000 UTC m=+0.196312413 container attach 1f25bf45165c50ea0a8cca0bc2b75f600bc6fac7090268826e9d32bc24ac9d9f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_agnesi, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef)
Oct  1 09:40:04 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e140 do_prune osdmap full prune enabled
Oct  1 09:40:04 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e141 e141: 3 total, 3 up, 3 in
Oct  1 09:40:04 np0005464214 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e141: 3 total, 3 up, 3 in
Oct  1 09:40:04 np0005464214 nova_compute[260022]: 2025-10-01 13:40:04.346 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 09:40:04 np0005464214 nova_compute[260022]: 2025-10-01 13:40:04.348 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  1 09:40:04 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1070: 305 pgs: 305 active+clean; 21 MiB data, 169 MiB used, 60 GiB / 60 GiB avail; 29 KiB/s rd, 2.6 KiB/s wr, 40 op/s
Oct  1 09:40:04 np0005464214 zen_agnesi[270762]: {
Oct  1 09:40:04 np0005464214 zen_agnesi[270762]:    "0": [
Oct  1 09:40:04 np0005464214 zen_agnesi[270762]:        {
Oct  1 09:40:04 np0005464214 zen_agnesi[270762]:            "devices": [
Oct  1 09:40:04 np0005464214 zen_agnesi[270762]:                "/dev/loop3"
Oct  1 09:40:04 np0005464214 zen_agnesi[270762]:            ],
Oct  1 09:40:04 np0005464214 zen_agnesi[270762]:            "lv_name": "ceph_lv0",
Oct  1 09:40:04 np0005464214 zen_agnesi[270762]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  1 09:40:04 np0005464214 zen_agnesi[270762]:            "lv_size": "21470642176",
Oct  1 09:40:04 np0005464214 zen_agnesi[270762]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 09:40:04 np0005464214 zen_agnesi[270762]:            "lv_uuid": "wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS",
Oct  1 09:40:04 np0005464214 zen_agnesi[270762]:            "name": "ceph_lv0",
Oct  1 09:40:04 np0005464214 zen_agnesi[270762]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  1 09:40:04 np0005464214 zen_agnesi[270762]:            "tags": {
Oct  1 09:40:04 np0005464214 zen_agnesi[270762]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  1 09:40:04 np0005464214 zen_agnesi[270762]:                "ceph.block_uuid": "wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS",
Oct  1 09:40:04 np0005464214 zen_agnesi[270762]:                "ceph.cephx_lockbox_secret": "",
Oct  1 09:40:04 np0005464214 zen_agnesi[270762]:                "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 09:40:04 np0005464214 zen_agnesi[270762]:                "ceph.cluster_name": "ceph",
Oct  1 09:40:04 np0005464214 zen_agnesi[270762]:                "ceph.crush_device_class": "",
Oct  1 09:40:04 np0005464214 zen_agnesi[270762]:                "ceph.encrypted": "0",
Oct  1 09:40:04 np0005464214 zen_agnesi[270762]:                "ceph.osd_fsid": "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982",
Oct  1 09:40:04 np0005464214 zen_agnesi[270762]:                "ceph.osd_id": "0",
Oct  1 09:40:04 np0005464214 zen_agnesi[270762]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 09:40:04 np0005464214 zen_agnesi[270762]:                "ceph.type": "block",
Oct  1 09:40:04 np0005464214 zen_agnesi[270762]:                "ceph.vdo": "0"
Oct  1 09:40:04 np0005464214 zen_agnesi[270762]:            },
Oct  1 09:40:04 np0005464214 zen_agnesi[270762]:            "type": "block",
Oct  1 09:40:04 np0005464214 zen_agnesi[270762]:            "vg_name": "ceph_vg0"
Oct  1 09:40:04 np0005464214 zen_agnesi[270762]:        }
Oct  1 09:40:04 np0005464214 zen_agnesi[270762]:    ],
Oct  1 09:40:04 np0005464214 zen_agnesi[270762]:    "1": [
Oct  1 09:40:04 np0005464214 zen_agnesi[270762]:        {
Oct  1 09:40:04 np0005464214 zen_agnesi[270762]:            "devices": [
Oct  1 09:40:04 np0005464214 zen_agnesi[270762]:                "/dev/loop4"
Oct  1 09:40:04 np0005464214 zen_agnesi[270762]:            ],
Oct  1 09:40:04 np0005464214 zen_agnesi[270762]:            "lv_name": "ceph_lv1",
Oct  1 09:40:04 np0005464214 zen_agnesi[270762]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  1 09:40:04 np0005464214 zen_agnesi[270762]:            "lv_size": "21470642176",
Oct  1 09:40:04 np0005464214 zen_agnesi[270762]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f5852bc7-e830-489a-b8a9-42dfbbe71426,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 09:40:04 np0005464214 zen_agnesi[270762]:            "lv_uuid": "BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY",
Oct  1 09:40:04 np0005464214 zen_agnesi[270762]:            "name": "ceph_lv1",
Oct  1 09:40:04 np0005464214 zen_agnesi[270762]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  1 09:40:04 np0005464214 zen_agnesi[270762]:            "tags": {
Oct  1 09:40:04 np0005464214 zen_agnesi[270762]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  1 09:40:04 np0005464214 zen_agnesi[270762]:                "ceph.block_uuid": "BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY",
Oct  1 09:40:04 np0005464214 zen_agnesi[270762]:                "ceph.cephx_lockbox_secret": "",
Oct  1 09:40:04 np0005464214 zen_agnesi[270762]:                "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 09:40:04 np0005464214 zen_agnesi[270762]:                "ceph.cluster_name": "ceph",
Oct  1 09:40:04 np0005464214 zen_agnesi[270762]:                "ceph.crush_device_class": "",
Oct  1 09:40:04 np0005464214 zen_agnesi[270762]:                "ceph.encrypted": "0",
Oct  1 09:40:04 np0005464214 zen_agnesi[270762]:                "ceph.osd_fsid": "f5852bc7-e830-489a-b8a9-42dfbbe71426",
Oct  1 09:40:04 np0005464214 zen_agnesi[270762]:                "ceph.osd_id": "1",
Oct  1 09:40:04 np0005464214 zen_agnesi[270762]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 09:40:04 np0005464214 zen_agnesi[270762]:                "ceph.type": "block",
Oct  1 09:40:04 np0005464214 zen_agnesi[270762]:                "ceph.vdo": "0"
Oct  1 09:40:04 np0005464214 zen_agnesi[270762]:            },
Oct  1 09:40:04 np0005464214 zen_agnesi[270762]:            "type": "block",
Oct  1 09:40:04 np0005464214 zen_agnesi[270762]:            "vg_name": "ceph_vg1"
Oct  1 09:40:04 np0005464214 zen_agnesi[270762]:        }
Oct  1 09:40:04 np0005464214 zen_agnesi[270762]:    ],
Oct  1 09:40:04 np0005464214 zen_agnesi[270762]:    "2": [
Oct  1 09:40:04 np0005464214 zen_agnesi[270762]:        {
Oct  1 09:40:04 np0005464214 zen_agnesi[270762]:            "devices": [
Oct  1 09:40:04 np0005464214 zen_agnesi[270762]:                "/dev/loop5"
Oct  1 09:40:04 np0005464214 zen_agnesi[270762]:            ],
Oct  1 09:40:04 np0005464214 zen_agnesi[270762]:            "lv_name": "ceph_lv2",
Oct  1 09:40:04 np0005464214 zen_agnesi[270762]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  1 09:40:04 np0005464214 zen_agnesi[270762]:            "lv_size": "21470642176",
Oct  1 09:40:04 np0005464214 zen_agnesi[270762]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c4c937e2-a8a8-47c3-af37-fdedb6fff1f9,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 09:40:04 np0005464214 zen_agnesi[270762]:            "lv_uuid": "mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK",
Oct  1 09:40:04 np0005464214 zen_agnesi[270762]:            "name": "ceph_lv2",
Oct  1 09:40:04 np0005464214 zen_agnesi[270762]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  1 09:40:04 np0005464214 zen_agnesi[270762]:            "tags": {
Oct  1 09:40:04 np0005464214 zen_agnesi[270762]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  1 09:40:04 np0005464214 zen_agnesi[270762]:                "ceph.block_uuid": "mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK",
Oct  1 09:40:04 np0005464214 zen_agnesi[270762]:                "ceph.cephx_lockbox_secret": "",
Oct  1 09:40:04 np0005464214 zen_agnesi[270762]:                "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 09:40:04 np0005464214 zen_agnesi[270762]:                "ceph.cluster_name": "ceph",
Oct  1 09:40:04 np0005464214 zen_agnesi[270762]:                "ceph.crush_device_class": "",
Oct  1 09:40:04 np0005464214 zen_agnesi[270762]:                "ceph.encrypted": "0",
Oct  1 09:40:04 np0005464214 zen_agnesi[270762]:                "ceph.osd_fsid": "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9",
Oct  1 09:40:04 np0005464214 zen_agnesi[270762]:                "ceph.osd_id": "2",
Oct  1 09:40:04 np0005464214 zen_agnesi[270762]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 09:40:04 np0005464214 zen_agnesi[270762]:                "ceph.type": "block",
Oct  1 09:40:04 np0005464214 zen_agnesi[270762]:                "ceph.vdo": "0"
Oct  1 09:40:04 np0005464214 zen_agnesi[270762]:            },
Oct  1 09:40:04 np0005464214 zen_agnesi[270762]:            "type": "block",
Oct  1 09:40:04 np0005464214 zen_agnesi[270762]:            "vg_name": "ceph_vg2"
Oct  1 09:40:04 np0005464214 zen_agnesi[270762]:        }
Oct  1 09:40:04 np0005464214 zen_agnesi[270762]:    ]
Oct  1 09:40:04 np0005464214 zen_agnesi[270762]: }
Oct  1 09:40:04 np0005464214 systemd[1]: libpod-1f25bf45165c50ea0a8cca0bc2b75f600bc6fac7090268826e9d32bc24ac9d9f.scope: Deactivated successfully.
Oct  1 09:40:04 np0005464214 podman[270745]: 2025-10-01 13:40:04.933818253 +0000 UTC m=+0.997558852 container died 1f25bf45165c50ea0a8cca0bc2b75f600bc6fac7090268826e9d32bc24ac9d9f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_agnesi, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 09:40:04 np0005464214 systemd[1]: var-lib-containers-storage-overlay-456e18bb3d73fbe2c561d59aabd5a6ab34e2f29d88a3131edbca037c7e6bc189-merged.mount: Deactivated successfully.
Oct  1 09:40:05 np0005464214 podman[270745]: 2025-10-01 13:40:05.00814547 +0000 UTC m=+1.071886079 container remove 1f25bf45165c50ea0a8cca0bc2b75f600bc6fac7090268826e9d32bc24ac9d9f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_agnesi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Oct  1 09:40:05 np0005464214 systemd[1]: libpod-conmon-1f25bf45165c50ea0a8cca0bc2b75f600bc6fac7090268826e9d32bc24ac9d9f.scope: Deactivated successfully.
Oct  1 09:40:05 np0005464214 nova_compute[260022]: 2025-10-01 13:40:05.345 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 09:40:05 np0005464214 nova_compute[260022]: 2025-10-01 13:40:05.346 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Oct  1 09:40:05 np0005464214 ceph-osd[90500]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  1 09:40:05 np0005464214 ceph-osd[90500]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1800.1 total, 600.0 interval#012Cumulative writes: 6049 writes, 25K keys, 6049 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s#012Cumulative WAL: 6049 writes, 1072 syncs, 5.64 writes per sync, written: 0.02 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 414 writes, 984 keys, 414 commit groups, 1.0 writes per commit group, ingest: 0.47 MB, 0.00 MB/s#012Interval WAL: 414 writes, 197 syncs, 2.10 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct  1 09:40:05 np0005464214 podman[270925]: 2025-10-01 13:40:05.827217686 +0000 UTC m=+0.057257124 container create bca3ed2d43726ee404cec2ed60f2be20ee12fb2defd5c75d1f2070bd7412c8bf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_feistel, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct  1 09:40:05 np0005464214 systemd[1]: Started libpod-conmon-bca3ed2d43726ee404cec2ed60f2be20ee12fb2defd5c75d1f2070bd7412c8bf.scope.
Oct  1 09:40:05 np0005464214 podman[270925]: 2025-10-01 13:40:05.808702007 +0000 UTC m=+0.038741475 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 09:40:05 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:40:05 np0005464214 podman[270925]: 2025-10-01 13:40:05.93691799 +0000 UTC m=+0.166957448 container init bca3ed2d43726ee404cec2ed60f2be20ee12fb2defd5c75d1f2070bd7412c8bf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_feistel, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 09:40:05 np0005464214 podman[270925]: 2025-10-01 13:40:05.944088229 +0000 UTC m=+0.174127667 container start bca3ed2d43726ee404cec2ed60f2be20ee12fb2defd5c75d1f2070bd7412c8bf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_feistel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Oct  1 09:40:05 np0005464214 podman[270925]: 2025-10-01 13:40:05.947608211 +0000 UTC m=+0.177647679 container attach bca3ed2d43726ee404cec2ed60f2be20ee12fb2defd5c75d1f2070bd7412c8bf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_feistel, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 09:40:05 np0005464214 stoic_feistel[270941]: 167 167
Oct  1 09:40:05 np0005464214 systemd[1]: libpod-bca3ed2d43726ee404cec2ed60f2be20ee12fb2defd5c75d1f2070bd7412c8bf.scope: Deactivated successfully.
Oct  1 09:40:05 np0005464214 podman[270925]: 2025-10-01 13:40:05.951483894 +0000 UTC m=+0.181523332 container died bca3ed2d43726ee404cec2ed60f2be20ee12fb2defd5c75d1f2070bd7412c8bf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_feistel, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 09:40:05 np0005464214 systemd[1]: var-lib-containers-storage-overlay-5d3408bf2eda516e3cf720f5278e52b03c2e2398e9bf6cc5845d2afd5f488fa8-merged.mount: Deactivated successfully.
Oct  1 09:40:05 np0005464214 podman[270925]: 2025-10-01 13:40:05.991707236 +0000 UTC m=+0.221746674 container remove bca3ed2d43726ee404cec2ed60f2be20ee12fb2defd5c75d1f2070bd7412c8bf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_feistel, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Oct  1 09:40:06 np0005464214 systemd[1]: libpod-conmon-bca3ed2d43726ee404cec2ed60f2be20ee12fb2defd5c75d1f2070bd7412c8bf.scope: Deactivated successfully.
Oct  1 09:40:06 np0005464214 podman[270963]: 2025-10-01 13:40:06.193787401 +0000 UTC m=+0.059791565 container create 636aee3649bee134e661dc063d268aa7bb45fc1794445c05b4a2a276a00fdbe1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_rhodes, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True)
Oct  1 09:40:06 np0005464214 systemd[1]: Started libpod-conmon-636aee3649bee134e661dc063d268aa7bb45fc1794445c05b4a2a276a00fdbe1.scope.
Oct  1 09:40:06 np0005464214 podman[270963]: 2025-10-01 13:40:06.165259083 +0000 UTC m=+0.031263297 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 09:40:06 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:40:06 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e01b02048720164818a202f238373d697f20aeb17c27c90d3d92816c67efdcf4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 09:40:06 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e01b02048720164818a202f238373d697f20aeb17c27c90d3d92816c67efdcf4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 09:40:06 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e01b02048720164818a202f238373d697f20aeb17c27c90d3d92816c67efdcf4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 09:40:06 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e01b02048720164818a202f238373d697f20aeb17c27c90d3d92816c67efdcf4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 09:40:06 np0005464214 podman[270963]: 2025-10-01 13:40:06.302251976 +0000 UTC m=+0.168256170 container init 636aee3649bee134e661dc063d268aa7bb45fc1794445c05b4a2a276a00fdbe1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_rhodes, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Oct  1 09:40:06 np0005464214 podman[270963]: 2025-10-01 13:40:06.315958993 +0000 UTC m=+0.181963117 container start 636aee3649bee134e661dc063d268aa7bb45fc1794445c05b4a2a276a00fdbe1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_rhodes, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Oct  1 09:40:06 np0005464214 podman[270963]: 2025-10-01 13:40:06.319926089 +0000 UTC m=+0.185930303 container attach 636aee3649bee134e661dc063d268aa7bb45fc1794445c05b4a2a276a00fdbe1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_rhodes, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True)
Oct  1 09:40:06 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1071: 305 pgs: 305 active+clean; 21 MiB data, 169 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 1.7 KiB/s wr, 31 op/s
Oct  1 09:40:07 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 09:40:07 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e141 do_prune osdmap full prune enabled
Oct  1 09:40:07 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e142 e142: 3 total, 3 up, 3 in
Oct  1 09:40:07 np0005464214 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e142: 3 total, 3 up, 3 in
Oct  1 09:40:07 np0005464214 nova_compute[260022]: 2025-10-01 13:40:07.359 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 09:40:07 np0005464214 zen_rhodes[270979]: {
Oct  1 09:40:07 np0005464214 zen_rhodes[270979]:    "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982": {
Oct  1 09:40:07 np0005464214 zen_rhodes[270979]:        "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 09:40:07 np0005464214 zen_rhodes[270979]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  1 09:40:07 np0005464214 zen_rhodes[270979]:        "osd_id": 0,
Oct  1 09:40:07 np0005464214 zen_rhodes[270979]:        "osd_uuid": "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982",
Oct  1 09:40:07 np0005464214 zen_rhodes[270979]:        "type": "bluestore"
Oct  1 09:40:07 np0005464214 zen_rhodes[270979]:    },
Oct  1 09:40:07 np0005464214 zen_rhodes[270979]:    "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9": {
Oct  1 09:40:07 np0005464214 zen_rhodes[270979]:        "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 09:40:07 np0005464214 zen_rhodes[270979]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  1 09:40:07 np0005464214 zen_rhodes[270979]:        "osd_id": 2,
Oct  1 09:40:07 np0005464214 zen_rhodes[270979]:        "osd_uuid": "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9",
Oct  1 09:40:07 np0005464214 zen_rhodes[270979]:        "type": "bluestore"
Oct  1 09:40:07 np0005464214 zen_rhodes[270979]:    },
Oct  1 09:40:07 np0005464214 zen_rhodes[270979]:    "f5852bc7-e830-489a-b8a9-42dfbbe71426": {
Oct  1 09:40:07 np0005464214 zen_rhodes[270979]:        "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 09:40:07 np0005464214 zen_rhodes[270979]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  1 09:40:07 np0005464214 zen_rhodes[270979]:        "osd_id": 1,
Oct  1 09:40:07 np0005464214 zen_rhodes[270979]:        "osd_uuid": "f5852bc7-e830-489a-b8a9-42dfbbe71426",
Oct  1 09:40:07 np0005464214 zen_rhodes[270979]:        "type": "bluestore"
Oct  1 09:40:07 np0005464214 zen_rhodes[270979]:    }
Oct  1 09:40:07 np0005464214 zen_rhodes[270979]: }
Oct  1 09:40:07 np0005464214 systemd[1]: libpod-636aee3649bee134e661dc063d268aa7bb45fc1794445c05b4a2a276a00fdbe1.scope: Deactivated successfully.
Oct  1 09:40:07 np0005464214 systemd[1]: libpod-636aee3649bee134e661dc063d268aa7bb45fc1794445c05b4a2a276a00fdbe1.scope: Consumed 1.102s CPU time.
Oct  1 09:40:07 np0005464214 podman[270963]: 2025-10-01 13:40:07.411407322 +0000 UTC m=+1.277411496 container died 636aee3649bee134e661dc063d268aa7bb45fc1794445c05b4a2a276a00fdbe1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_rhodes, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 09:40:07 np0005464214 systemd[1]: var-lib-containers-storage-overlay-e01b02048720164818a202f238373d697f20aeb17c27c90d3d92816c67efdcf4-merged.mount: Deactivated successfully.
Oct  1 09:40:07 np0005464214 podman[270963]: 2025-10-01 13:40:07.47478244 +0000 UTC m=+1.340786574 container remove 636aee3649bee134e661dc063d268aa7bb45fc1794445c05b4a2a276a00fdbe1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_rhodes, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 09:40:07 np0005464214 systemd[1]: libpod-conmon-636aee3649bee134e661dc063d268aa7bb45fc1794445c05b4a2a276a00fdbe1.scope: Deactivated successfully.
Oct  1 09:40:07 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  1 09:40:07 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:40:07 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  1 09:40:07 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:40:07 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev 5255da47-4a03-4c95-87e4-977bca3dc2c8 does not exist
Oct  1 09:40:07 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev 9f20ce1c-8ad6-484e-b30e-7edd75c67177 does not exist
Oct  1 09:40:07 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:40:07 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:40:07 np0005464214 ceph-mgr[75103]: [devicehealth INFO root] Check health
Oct  1 09:40:08 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1073: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 49 KiB/s rd, 3.8 KiB/s wr, 68 op/s
Oct  1 09:40:09 np0005464214 nova_compute[260022]: 2025-10-01 13:40:09.341 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 09:40:10 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1074: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 2.4 KiB/s wr, 38 op/s
Oct  1 09:40:12 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 09:40:12 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e142 do_prune osdmap full prune enabled
Oct  1 09:40:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:40:12.309 161890 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 09:40:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:40:12.309 161890 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 09:40:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:40:12.309 161890 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 09:40:12 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e143 e143: 3 total, 3 up, 3 in
Oct  1 09:40:12 np0005464214 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e143: 3 total, 3 up, 3 in
Oct  1 09:40:12 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1076: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 1.7 KiB/s wr, 31 op/s
Oct  1 09:40:14 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1077: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 1.7 KiB/s wr, 31 op/s
Oct  1 09:40:15 np0005464214 podman[271099]: 2025-10-01 13:40:15.537693614 +0000 UTC m=+0.077094667 container health_status dfa509645741d50db256c392f2c7e50ef8a1653f296eb853aefbe3d2ba4746c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=36bccb96575468ec919301205d8daa2c, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20250923)
Oct  1 09:40:15 np0005464214 podman[271098]: 2025-10-01 13:40:15.541812725 +0000 UTC m=+0.081345362 container health_status c13c85560319b246b9406aed1b6b3015b2c424d3ffff37d0301b6634f5976b0d (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=36bccb96575468ec919301205d8daa2c, org.label-schema.build-date=20250923, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=iscsid, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 09:40:15 np0005464214 podman[271097]: 2025-10-01 13:40:15.570834879 +0000 UTC m=+0.114483397 container health_status a1dc94033b3cdaf0c1939938a446bc6911aa0e2bf0fa0c22c3e618390e8d96e1 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20250923, tcib_build_tag=36bccb96575468ec919301205d8daa2c, maintainer=OpenStack Kubernetes Operator team)
Oct  1 09:40:15 np0005464214 podman[271096]: 2025-10-01 13:40:15.588686778 +0000 UTC m=+0.138125260 container health_status 583cde33fa111e3f81ff9a7adf51b7bee43cd123d54d60383b2d474b3b5066ad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250923)
Oct  1 09:40:16 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1078: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.5 KiB/s wr, 26 op/s
Oct  1 09:40:16 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e143 do_prune osdmap full prune enabled
Oct  1 09:40:16 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e144 e144: 3 total, 3 up, 3 in
Oct  1 09:40:16 np0005464214 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e144: 3 total, 3 up, 3 in
Oct  1 09:40:17 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 09:40:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 09:40:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 09:40:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 09:40:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 09:40:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 09:40:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 09:40:18 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1080: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s rd, 2.0 KiB/s wr, 18 op/s
Oct  1 09:40:20 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1081: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s rd, 1.9 KiB/s wr, 18 op/s
Oct  1 09:40:22 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 09:40:22 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1082: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 1.6 KiB/s wr, 14 op/s
Oct  1 09:40:24 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1083: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 1.6 KiB/s wr, 14 op/s
Oct  1 09:40:26 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1084: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 1.6 KiB/s wr, 14 op/s
Oct  1 09:40:27 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 09:40:28 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1085: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 8.7 KiB/s rd, 1.3 KiB/s wr, 12 op/s
Oct  1 09:40:30 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1086: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:40:32 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 09:40:32 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1087: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:40:34 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1088: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:40:36 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1089: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:40:37 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 09:40:38 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1090: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:40:40 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1091: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:40:42 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 09:40:42 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1092: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:40:44 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1093: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:40:46 np0005464214 podman[271184]: 2025-10-01 13:40:46.524064015 +0000 UTC m=+0.069745272 container health_status c13c85560319b246b9406aed1b6b3015b2c424d3ffff37d0301b6634f5976b0d (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=36bccb96575468ec919301205d8daa2c, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=iscsid, container_name=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20250923, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct  1 09:40:46 np0005464214 podman[271183]: 2025-10-01 13:40:46.549935798 +0000 UTC m=+0.101211243 container health_status a1dc94033b3cdaf0c1939938a446bc6911aa0e2bf0fa0c22c3e618390e8d96e1 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20250923, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=36bccb96575468ec919301205d8daa2c, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd)
Oct  1 09:40:46 np0005464214 podman[271182]: 2025-10-01 13:40:46.560564548 +0000 UTC m=+0.111560015 container health_status 583cde33fa111e3f81ff9a7adf51b7bee43cd123d54d60383b2d474b3b5066ad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250923, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Oct  1 09:40:46 np0005464214 podman[271185]: 2025-10-01 13:40:46.562446907 +0000 UTC m=+0.103049012 container health_status dfa509645741d50db256c392f2c7e50ef8a1653f296eb853aefbe3d2ba4746c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20250923, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct  1 09:40:46 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1094: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:40:47 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 09:40:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 09:40:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 09:40:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 09:40:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 09:40:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 09:40:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 09:40:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] Optimize plan auto_2025-10-01_13:40:47
Oct  1 09:40:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  1 09:40:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] do_upmap
Oct  1 09:40:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] pools ['volumes', '.rgw.root', '.mgr', 'images', 'cephfs.cephfs.data', 'default.rgw.meta', 'default.rgw.log', 'default.rgw.control', 'cephfs.cephfs.meta', 'backups', 'vms']
Oct  1 09:40:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] prepared 0/10 changes
Oct  1 09:40:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  1 09:40:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  1 09:40:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  1 09:40:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  1 09:40:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  1 09:40:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  1 09:40:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  1 09:40:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  1 09:40:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  1 09:40:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  1 09:40:48 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1095: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:40:49 np0005464214 nova_compute[260022]: 2025-10-01 13:40:49.733 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 09:40:50 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1096: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:40:52 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 09:40:52 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1097: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:40:54 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1098: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:40:55 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  1 09:40:55 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3975874512' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  1 09:40:55 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  1 09:40:55 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3975874512' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  1 09:40:56 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1099: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:40:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] _maybe_adjust
Oct  1 09:40:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:40:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  1 09:40:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:40:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 09:40:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:40:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 09:40:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:40:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 09:40:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:40:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 3.1795353910268934e-07 of space, bias 1.0, pg target 9.53860617308068e-05 quantized to 32 (current 32)
Oct  1 09:40:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:40:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct  1 09:40:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:40:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 09:40:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:40:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  1 09:40:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:40:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  1 09:40:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:40:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 09:40:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:40:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  1 09:40:57 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 09:40:58 np0005464214 nova_compute[260022]: 2025-10-01 13:40:58.361 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 09:40:58 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1100: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:40:59 np0005464214 nova_compute[260022]: 2025-10-01 13:40:59.344 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 09:40:59 np0005464214 nova_compute[260022]: 2025-10-01 13:40:59.369 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 09:40:59 np0005464214 nova_compute[260022]: 2025-10-01 13:40:59.370 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 09:40:59 np0005464214 nova_compute[260022]: 2025-10-01 13:40:59.370 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 09:40:59 np0005464214 nova_compute[260022]: 2025-10-01 13:40:59.370 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  1 09:40:59 np0005464214 nova_compute[260022]: 2025-10-01 13:40:59.370 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 09:40:59 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  1 09:40:59 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2514629967' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  1 09:40:59 np0005464214 nova_compute[260022]: 2025-10-01 13:40:59.796 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.426s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 09:40:59 np0005464214 nova_compute[260022]: 2025-10-01 13:40:59.983 2 WARNING nova.virt.libvirt.driver [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  1 09:40:59 np0005464214 nova_compute[260022]: 2025-10-01 13:40:59.985 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5183MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  1 09:40:59 np0005464214 nova_compute[260022]: 2025-10-01 13:40:59.985 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 09:40:59 np0005464214 nova_compute[260022]: 2025-10-01 13:40:59.986 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 09:41:00 np0005464214 nova_compute[260022]: 2025-10-01 13:41:00.042 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  1 09:41:00 np0005464214 nova_compute[260022]: 2025-10-01 13:41:00.043 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  1 09:41:00 np0005464214 nova_compute[260022]: 2025-10-01 13:41:00.137 2 DEBUG nova.scheduler.client.report [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Refreshing inventories for resource provider c1b9017d-7e6f-44ea-9ee2-bc19313d736f _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Oct  1 09:41:00 np0005464214 nova_compute[260022]: 2025-10-01 13:41:00.213 2 DEBUG nova.scheduler.client.report [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Updating ProviderTree inventory for provider c1b9017d-7e6f-44ea-9ee2-bc19313d736f from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Oct  1 09:41:00 np0005464214 nova_compute[260022]: 2025-10-01 13:41:00.214 2 DEBUG nova.compute.provider_tree [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Updating inventory in ProviderTree for provider c1b9017d-7e6f-44ea-9ee2-bc19313d736f with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Oct  1 09:41:00 np0005464214 nova_compute[260022]: 2025-10-01 13:41:00.228 2 DEBUG nova.scheduler.client.report [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Refreshing aggregate associations for resource provider c1b9017d-7e6f-44ea-9ee2-bc19313d736f, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Oct  1 09:41:00 np0005464214 nova_compute[260022]: 2025-10-01 13:41:00.248 2 DEBUG nova.scheduler.client.report [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Refreshing trait associations for resource provider c1b9017d-7e6f-44ea-9ee2-bc19313d736f, traits: HW_CPU_X86_SSE42,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_BMI,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_SSE2,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_BMI2,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_RESCUE_BFV,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NODE,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_F16C,HW_CPU_X86_MMX,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_SSE41,COMPUTE_DEVICE_TAGGING,COMPUTE_VOLUME_EXTEND,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_AVX,HW_CPU_X86_ABM,HW_CPU_X86_CLMUL,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_SVM,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_SECURITY_TPM_2_0,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_STORAGE_BUS_FDC,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_AESNI,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_ACCELERATORS,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_AMD_SVM,HW_CPU_X86_AVX2,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_FMA3,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_SSE,HW_CPU_X86_SHA,HW_CPU_X86_SSSE3,HW_CPU_X86_SSE4A _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Oct  1 09:41:00 np0005464214 nova_compute[260022]: 2025-10-01 13:41:00.263 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 09:41:00 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1101: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:41:00 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  1 09:41:00 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3507632771' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  1 09:41:00 np0005464214 nova_compute[260022]: 2025-10-01 13:41:00.700 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.436s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 09:41:00 np0005464214 nova_compute[260022]: 2025-10-01 13:41:00.705 2 DEBUG nova.compute.provider_tree [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Inventory has not changed in ProviderTree for provider: c1b9017d-7e6f-44ea-9ee2-bc19313d736f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  1 09:41:00 np0005464214 nova_compute[260022]: 2025-10-01 13:41:00.719 2 DEBUG nova.scheduler.client.report [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Inventory has not changed for provider c1b9017d-7e6f-44ea-9ee2-bc19313d736f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  1 09:41:00 np0005464214 nova_compute[260022]: 2025-10-01 13:41:00.721 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  1 09:41:00 np0005464214 nova_compute[260022]: 2025-10-01 13:41:00.721 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.736s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 09:41:02 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 09:41:02 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1102: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:41:02 np0005464214 nova_compute[260022]: 2025-10-01 13:41:02.718 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 09:41:03 np0005464214 nova_compute[260022]: 2025-10-01 13:41:03.344 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 09:41:03 np0005464214 nova_compute[260022]: 2025-10-01 13:41:03.345 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  1 09:41:03 np0005464214 nova_compute[260022]: 2025-10-01 13:41:03.345 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  1 09:41:03 np0005464214 nova_compute[260022]: 2025-10-01 13:41:03.358 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  1 09:41:03 np0005464214 nova_compute[260022]: 2025-10-01 13:41:03.359 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 09:41:04 np0005464214 nova_compute[260022]: 2025-10-01 13:41:04.344 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 09:41:04 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1103: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:41:05 np0005464214 nova_compute[260022]: 2025-10-01 13:41:05.345 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 09:41:05 np0005464214 nova_compute[260022]: 2025-10-01 13:41:05.346 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 09:41:05 np0005464214 nova_compute[260022]: 2025-10-01 13:41:05.346 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  1 09:41:06 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1104: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:41:07 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 09:41:08 np0005464214 nova_compute[260022]: 2025-10-01 13:41:08.346 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 09:41:08 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1105: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 33 KiB/s rd, 0 B/s wr, 55 op/s
Oct  1 09:41:09 np0005464214 podman[271579]: 2025-10-01 13:41:09.52054086 +0000 UTC m=+0.062699327 container create 4b572117b2e43c63c0beb8bccf45ae309d7029652f55b4a67fd66a371f1e049d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_davinci, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 09:41:09 np0005464214 systemd[1]: Started libpod-conmon-4b572117b2e43c63c0beb8bccf45ae309d7029652f55b4a67fd66a371f1e049d.scope.
Oct  1 09:41:09 np0005464214 podman[271579]: 2025-10-01 13:41:09.488724288 +0000 UTC m=+0.030882815 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 09:41:09 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:41:09 np0005464214 podman[271579]: 2025-10-01 13:41:09.624539653 +0000 UTC m=+0.166698120 container init 4b572117b2e43c63c0beb8bccf45ae309d7029652f55b4a67fd66a371f1e049d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_davinci, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 09:41:09 np0005464214 podman[271579]: 2025-10-01 13:41:09.636903417 +0000 UTC m=+0.179061884 container start 4b572117b2e43c63c0beb8bccf45ae309d7029652f55b4a67fd66a371f1e049d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_davinci, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 09:41:09 np0005464214 podman[271579]: 2025-10-01 13:41:09.640669336 +0000 UTC m=+0.182827803 container attach 4b572117b2e43c63c0beb8bccf45ae309d7029652f55b4a67fd66a371f1e049d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_davinci, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 09:41:09 np0005464214 inspiring_davinci[271595]: 167 167
Oct  1 09:41:09 np0005464214 systemd[1]: libpod-4b572117b2e43c63c0beb8bccf45ae309d7029652f55b4a67fd66a371f1e049d.scope: Deactivated successfully.
Oct  1 09:41:09 np0005464214 podman[271579]: 2025-10-01 13:41:09.647344889 +0000 UTC m=+0.189503346 container died 4b572117b2e43c63c0beb8bccf45ae309d7029652f55b4a67fd66a371f1e049d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_davinci, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 09:41:09 np0005464214 systemd[1]: var-lib-containers-storage-overlay-2f222edc7aa2da8b2f5b6ad4fab448c06a627ad6eefaed21726a9d69ab36fa91-merged.mount: Deactivated successfully.
Oct  1 09:41:09 np0005464214 podman[271579]: 2025-10-01 13:41:09.700216653 +0000 UTC m=+0.242375120 container remove 4b572117b2e43c63c0beb8bccf45ae309d7029652f55b4a67fd66a371f1e049d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_davinci, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct  1 09:41:09 np0005464214 systemd[1]: libpod-conmon-4b572117b2e43c63c0beb8bccf45ae309d7029652f55b4a67fd66a371f1e049d.scope: Deactivated successfully.
Oct  1 09:41:09 np0005464214 podman[271621]: 2025-10-01 13:41:09.953692427 +0000 UTC m=+0.059693163 container create 5740da55427ca94f64e12ca9c32d28bf55380d26dfb7b0e421b34f6a566719b7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_shamir, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 09:41:10 np0005464214 systemd[1]: Started libpod-conmon-5740da55427ca94f64e12ca9c32d28bf55380d26dfb7b0e421b34f6a566719b7.scope.
Oct  1 09:41:10 np0005464214 podman[271621]: 2025-10-01 13:41:09.931665554 +0000 UTC m=+0.037666300 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 09:41:10 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:41:10 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/becb0cc3d23acb191de693ba461a5c48e6bcfc4c4073d813e7823bfbc2ddba60/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 09:41:10 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/becb0cc3d23acb191de693ba461a5c48e6bcfc4c4073d813e7823bfbc2ddba60/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 09:41:10 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/becb0cc3d23acb191de693ba461a5c48e6bcfc4c4073d813e7823bfbc2ddba60/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 09:41:10 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/becb0cc3d23acb191de693ba461a5c48e6bcfc4c4073d813e7823bfbc2ddba60/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 09:41:10 np0005464214 podman[271621]: 2025-10-01 13:41:10.07720569 +0000 UTC m=+0.183206516 container init 5740da55427ca94f64e12ca9c32d28bf55380d26dfb7b0e421b34f6a566719b7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_shamir, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 09:41:10 np0005464214 podman[271621]: 2025-10-01 13:41:10.091427022 +0000 UTC m=+0.197427758 container start 5740da55427ca94f64e12ca9c32d28bf55380d26dfb7b0e421b34f6a566719b7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_shamir, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True)
Oct  1 09:41:10 np0005464214 podman[271621]: 2025-10-01 13:41:10.095796932 +0000 UTC m=+0.201797738 container attach 5740da55427ca94f64e12ca9c32d28bf55380d26dfb7b0e421b34f6a566719b7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_shamir, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 09:41:10 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1106: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 33 KiB/s rd, 0 B/s wr, 55 op/s
Oct  1 09:41:11 np0005464214 sharp_shamir[271638]: [
Oct  1 09:41:11 np0005464214 sharp_shamir[271638]:    {
Oct  1 09:41:11 np0005464214 sharp_shamir[271638]:        "available": false,
Oct  1 09:41:11 np0005464214 sharp_shamir[271638]:        "ceph_device": false,
Oct  1 09:41:11 np0005464214 sharp_shamir[271638]:        "device_id": "QEMU_DVD-ROM_QM00001",
Oct  1 09:41:11 np0005464214 sharp_shamir[271638]:        "lsm_data": {},
Oct  1 09:41:11 np0005464214 sharp_shamir[271638]:        "lvs": [],
Oct  1 09:41:11 np0005464214 sharp_shamir[271638]:        "path": "/dev/sr0",
Oct  1 09:41:11 np0005464214 sharp_shamir[271638]:        "rejected_reasons": [
Oct  1 09:41:11 np0005464214 sharp_shamir[271638]:            "Has a FileSystem",
Oct  1 09:41:11 np0005464214 sharp_shamir[271638]:            "Insufficient space (<5GB)"
Oct  1 09:41:11 np0005464214 sharp_shamir[271638]:        ],
Oct  1 09:41:11 np0005464214 sharp_shamir[271638]:        "sys_api": {
Oct  1 09:41:11 np0005464214 sharp_shamir[271638]:            "actuators": null,
Oct  1 09:41:11 np0005464214 sharp_shamir[271638]:            "device_nodes": "sr0",
Oct  1 09:41:11 np0005464214 sharp_shamir[271638]:            "devname": "sr0",
Oct  1 09:41:11 np0005464214 sharp_shamir[271638]:            "human_readable_size": "482.00 KB",
Oct  1 09:41:11 np0005464214 sharp_shamir[271638]:            "id_bus": "ata",
Oct  1 09:41:11 np0005464214 sharp_shamir[271638]:            "model": "QEMU DVD-ROM",
Oct  1 09:41:11 np0005464214 sharp_shamir[271638]:            "nr_requests": "2",
Oct  1 09:41:11 np0005464214 sharp_shamir[271638]:            "parent": "/dev/sr0",
Oct  1 09:41:11 np0005464214 sharp_shamir[271638]:            "partitions": {},
Oct  1 09:41:11 np0005464214 sharp_shamir[271638]:            "path": "/dev/sr0",
Oct  1 09:41:11 np0005464214 sharp_shamir[271638]:            "removable": "1",
Oct  1 09:41:11 np0005464214 sharp_shamir[271638]:            "rev": "2.5+",
Oct  1 09:41:11 np0005464214 sharp_shamir[271638]:            "ro": "0",
Oct  1 09:41:11 np0005464214 sharp_shamir[271638]:            "rotational": "0",
Oct  1 09:41:11 np0005464214 sharp_shamir[271638]:            "sas_address": "",
Oct  1 09:41:11 np0005464214 sharp_shamir[271638]:            "sas_device_handle": "",
Oct  1 09:41:11 np0005464214 sharp_shamir[271638]:            "scheduler_mode": "mq-deadline",
Oct  1 09:41:11 np0005464214 sharp_shamir[271638]:            "sectors": 0,
Oct  1 09:41:11 np0005464214 sharp_shamir[271638]:            "sectorsize": "2048",
Oct  1 09:41:11 np0005464214 sharp_shamir[271638]:            "size": 493568.0,
Oct  1 09:41:11 np0005464214 sharp_shamir[271638]:            "support_discard": "2048",
Oct  1 09:41:11 np0005464214 sharp_shamir[271638]:            "type": "disk",
Oct  1 09:41:11 np0005464214 sharp_shamir[271638]:            "vendor": "QEMU"
Oct  1 09:41:11 np0005464214 sharp_shamir[271638]:        }
Oct  1 09:41:11 np0005464214 sharp_shamir[271638]:    }
Oct  1 09:41:11 np0005464214 sharp_shamir[271638]: ]
Oct  1 09:41:11 np0005464214 systemd[1]: libpod-5740da55427ca94f64e12ca9c32d28bf55380d26dfb7b0e421b34f6a566719b7.scope: Deactivated successfully.
Oct  1 09:41:11 np0005464214 systemd[1]: libpod-5740da55427ca94f64e12ca9c32d28bf55380d26dfb7b0e421b34f6a566719b7.scope: Consumed 1.637s CPU time.
Oct  1 09:41:11 np0005464214 podman[271621]: 2025-10-01 13:41:11.647604556 +0000 UTC m=+1.753605282 container died 5740da55427ca94f64e12ca9c32d28bf55380d26dfb7b0e421b34f6a566719b7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_shamir, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Oct  1 09:41:11 np0005464214 systemd[1]: var-lib-containers-storage-overlay-becb0cc3d23acb191de693ba461a5c48e6bcfc4c4073d813e7823bfbc2ddba60-merged.mount: Deactivated successfully.
Oct  1 09:41:11 np0005464214 podman[271621]: 2025-10-01 13:41:11.720400914 +0000 UTC m=+1.826401620 container remove 5740da55427ca94f64e12ca9c32d28bf55380d26dfb7b0e421b34f6a566719b7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_shamir, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 09:41:11 np0005464214 systemd[1]: libpod-conmon-5740da55427ca94f64e12ca9c32d28bf55380d26dfb7b0e421b34f6a566719b7.scope: Deactivated successfully.
Oct  1 09:41:11 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  1 09:41:11 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:41:11 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  1 09:41:11 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:41:11 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  1 09:41:11 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  1 09:41:11 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  1 09:41:11 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  1 09:41:11 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  1 09:41:11 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:41:11 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev 41aec1a5-43e8-479d-a1f3-46e65fe4dffc does not exist
Oct  1 09:41:11 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev 18819c11-1d72-47eb-910f-9866454ad414 does not exist
Oct  1 09:41:11 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev 54bda237-d917-449a-8cb3-f9d3382fb447 does not exist
Oct  1 09:41:11 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  1 09:41:11 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  1 09:41:11 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  1 09:41:11 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  1 09:41:11 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  1 09:41:11 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  1 09:41:11 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:41:11 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:41:11 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  1 09:41:11 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:41:11 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  1 09:41:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:41:12.309 161890 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 09:41:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:41:12.311 161890 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 09:41:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:41:12.311 161890 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 09:41:12 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 09:41:12 np0005464214 podman[273803]: 2025-10-01 13:41:12.542355952 +0000 UTC m=+0.062723279 container create 967bd0eea032b68416e020dd7a4792c477e5eacd68d5ee06b6d85b194da9370b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_brattain, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2)
Oct  1 09:41:12 np0005464214 systemd[1]: Started libpod-conmon-967bd0eea032b68416e020dd7a4792c477e5eacd68d5ee06b6d85b194da9370b.scope.
Oct  1 09:41:12 np0005464214 podman[273803]: 2025-10-01 13:41:12.510240959 +0000 UTC m=+0.030608386 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 09:41:12 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:41:12 np0005464214 podman[273803]: 2025-10-01 13:41:12.630378816 +0000 UTC m=+0.150746163 container init 967bd0eea032b68416e020dd7a4792c477e5eacd68d5ee06b6d85b194da9370b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_brattain, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct  1 09:41:12 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1107: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Oct  1 09:41:12 np0005464214 podman[273803]: 2025-10-01 13:41:12.639042791 +0000 UTC m=+0.159410148 container start 967bd0eea032b68416e020dd7a4792c477e5eacd68d5ee06b6d85b194da9370b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_brattain, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 09:41:12 np0005464214 podman[273803]: 2025-10-01 13:41:12.643120322 +0000 UTC m=+0.163487689 container attach 967bd0eea032b68416e020dd7a4792c477e5eacd68d5ee06b6d85b194da9370b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_brattain, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 09:41:12 np0005464214 pensive_brattain[273820]: 167 167
Oct  1 09:41:12 np0005464214 systemd[1]: libpod-967bd0eea032b68416e020dd7a4792c477e5eacd68d5ee06b6d85b194da9370b.scope: Deactivated successfully.
Oct  1 09:41:12 np0005464214 podman[273803]: 2025-10-01 13:41:12.647397167 +0000 UTC m=+0.167764534 container died 967bd0eea032b68416e020dd7a4792c477e5eacd68d5ee06b6d85b194da9370b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_brattain, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True)
Oct  1 09:41:12 np0005464214 systemd[1]: var-lib-containers-storage-overlay-806efc5dfc4bf1779384948f0d399d3e87c73196df95c30f57050b0593ba53c6-merged.mount: Deactivated successfully.
Oct  1 09:41:12 np0005464214 podman[273803]: 2025-10-01 13:41:12.695895742 +0000 UTC m=+0.216263109 container remove 967bd0eea032b68416e020dd7a4792c477e5eacd68d5ee06b6d85b194da9370b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_brattain, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct  1 09:41:12 np0005464214 systemd[1]: libpod-conmon-967bd0eea032b68416e020dd7a4792c477e5eacd68d5ee06b6d85b194da9370b.scope: Deactivated successfully.
Oct  1 09:41:12 np0005464214 podman[273844]: 2025-10-01 13:41:12.894676363 +0000 UTC m=+0.057874644 container create ce094527139e8c61a982052be66ef9623f35fb26c8376c7e99da1f844ad5e581 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_mirzakhani, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Oct  1 09:41:12 np0005464214 systemd[1]: Started libpod-conmon-ce094527139e8c61a982052be66ef9623f35fb26c8376c7e99da1f844ad5e581.scope.
Oct  1 09:41:12 np0005464214 podman[273844]: 2025-10-01 13:41:12.867757896 +0000 UTC m=+0.030956247 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 09:41:12 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:41:12 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49c75ded890065853d5695e4f5f189b4e61f139a867df2f8b54a60fd2f2c3bc4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 09:41:12 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49c75ded890065853d5695e4f5f189b4e61f139a867df2f8b54a60fd2f2c3bc4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 09:41:12 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49c75ded890065853d5695e4f5f189b4e61f139a867df2f8b54a60fd2f2c3bc4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 09:41:12 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49c75ded890065853d5695e4f5f189b4e61f139a867df2f8b54a60fd2f2c3bc4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 09:41:12 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49c75ded890065853d5695e4f5f189b4e61f139a867df2f8b54a60fd2f2c3bc4/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  1 09:41:12 np0005464214 podman[273844]: 2025-10-01 13:41:12.996313741 +0000 UTC m=+0.159512042 container init ce094527139e8c61a982052be66ef9623f35fb26c8376c7e99da1f844ad5e581 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_mirzakhani, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 09:41:13 np0005464214 podman[273844]: 2025-10-01 13:41:13.003929173 +0000 UTC m=+0.167127484 container start ce094527139e8c61a982052be66ef9623f35fb26c8376c7e99da1f844ad5e581 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_mirzakhani, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 09:41:13 np0005464214 podman[273844]: 2025-10-01 13:41:13.008671814 +0000 UTC m=+0.171870085 container attach ce094527139e8c61a982052be66ef9623f35fb26c8376c7e99da1f844ad5e581 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_mirzakhani, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 09:41:14 np0005464214 fervent_mirzakhani[273860]: --> passed data devices: 0 physical, 3 LVM
Oct  1 09:41:14 np0005464214 fervent_mirzakhani[273860]: --> relative data size: 1.0
Oct  1 09:41:14 np0005464214 fervent_mirzakhani[273860]: --> All data devices are unavailable
Oct  1 09:41:14 np0005464214 systemd[1]: libpod-ce094527139e8c61a982052be66ef9623f35fb26c8376c7e99da1f844ad5e581.scope: Deactivated successfully.
Oct  1 09:41:14 np0005464214 systemd[1]: libpod-ce094527139e8c61a982052be66ef9623f35fb26c8376c7e99da1f844ad5e581.scope: Consumed 1.110s CPU time.
Oct  1 09:41:14 np0005464214 podman[273844]: 2025-10-01 13:41:14.154783416 +0000 UTC m=+1.317981717 container died ce094527139e8c61a982052be66ef9623f35fb26c8376c7e99da1f844ad5e581 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_mirzakhani, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True)
Oct  1 09:41:14 np0005464214 systemd[1]: var-lib-containers-storage-overlay-49c75ded890065853d5695e4f5f189b4e61f139a867df2f8b54a60fd2f2c3bc4-merged.mount: Deactivated successfully.
Oct  1 09:41:14 np0005464214 podman[273844]: 2025-10-01 13:41:14.225298342 +0000 UTC m=+1.388496623 container remove ce094527139e8c61a982052be66ef9623f35fb26c8376c7e99da1f844ad5e581 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_mirzakhani, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 09:41:14 np0005464214 systemd[1]: libpod-conmon-ce094527139e8c61a982052be66ef9623f35fb26c8376c7e99da1f844ad5e581.scope: Deactivated successfully.
Oct  1 09:41:14 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1108: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Oct  1 09:41:15 np0005464214 podman[274043]: 2025-10-01 13:41:15.055171802 +0000 UTC m=+0.057903374 container create 54a26ed8f5b19b90eb98ba4443254ebb8807cad5b0d5af4f2dacc8dda9d1713b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_jepsen, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Oct  1 09:41:15 np0005464214 systemd[1]: Started libpod-conmon-54a26ed8f5b19b90eb98ba4443254ebb8807cad5b0d5af4f2dacc8dda9d1713b.scope.
Oct  1 09:41:15 np0005464214 podman[274043]: 2025-10-01 13:41:15.027433249 +0000 UTC m=+0.030164871 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 09:41:15 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:41:15 np0005464214 podman[274043]: 2025-10-01 13:41:15.150497739 +0000 UTC m=+0.153229341 container init 54a26ed8f5b19b90eb98ba4443254ebb8807cad5b0d5af4f2dacc8dda9d1713b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_jepsen, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 09:41:15 np0005464214 podman[274043]: 2025-10-01 13:41:15.162191041 +0000 UTC m=+0.164922613 container start 54a26ed8f5b19b90eb98ba4443254ebb8807cad5b0d5af4f2dacc8dda9d1713b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_jepsen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct  1 09:41:15 np0005464214 podman[274043]: 2025-10-01 13:41:15.166796818 +0000 UTC m=+0.169528380 container attach 54a26ed8f5b19b90eb98ba4443254ebb8807cad5b0d5af4f2dacc8dda9d1713b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_jepsen, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Oct  1 09:41:15 np0005464214 blissful_jepsen[274059]: 167 167
Oct  1 09:41:15 np0005464214 systemd[1]: libpod-54a26ed8f5b19b90eb98ba4443254ebb8807cad5b0d5af4f2dacc8dda9d1713b.scope: Deactivated successfully.
Oct  1 09:41:15 np0005464214 podman[274043]: 2025-10-01 13:41:15.170424103 +0000 UTC m=+0.173155695 container died 54a26ed8f5b19b90eb98ba4443254ebb8807cad5b0d5af4f2dacc8dda9d1713b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_jepsen, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 09:41:15 np0005464214 systemd[1]: var-lib-containers-storage-overlay-c4e35b69d0b62672e084c743300a0c5c0d31e17b93e1e20e4dcf60151b68a5f1-merged.mount: Deactivated successfully.
Oct  1 09:41:15 np0005464214 podman[274043]: 2025-10-01 13:41:15.225161467 +0000 UTC m=+0.227893039 container remove 54a26ed8f5b19b90eb98ba4443254ebb8807cad5b0d5af4f2dacc8dda9d1713b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_jepsen, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Oct  1 09:41:15 np0005464214 systemd[1]: libpod-conmon-54a26ed8f5b19b90eb98ba4443254ebb8807cad5b0d5af4f2dacc8dda9d1713b.scope: Deactivated successfully.
Oct  1 09:41:15 np0005464214 podman[274084]: 2025-10-01 13:41:15.440539007 +0000 UTC m=+0.053321880 container create d9f4470199c09aca864fc948dad0474c81b2d66af3be27a10769035d1893e421 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_wilbur, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct  1 09:41:15 np0005464214 systemd[1]: Started libpod-conmon-d9f4470199c09aca864fc948dad0474c81b2d66af3be27a10769035d1893e421.scope.
Oct  1 09:41:15 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:41:15 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b7f7dd2ba62cf58364c5916c51953c9cee23f8a5556c0c1b605f83560307cd1a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 09:41:15 np0005464214 podman[274084]: 2025-10-01 13:41:15.418834085 +0000 UTC m=+0.031616968 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 09:41:15 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b7f7dd2ba62cf58364c5916c51953c9cee23f8a5556c0c1b605f83560307cd1a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 09:41:15 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b7f7dd2ba62cf58364c5916c51953c9cee23f8a5556c0c1b605f83560307cd1a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 09:41:15 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b7f7dd2ba62cf58364c5916c51953c9cee23f8a5556c0c1b605f83560307cd1a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 09:41:15 np0005464214 podman[274084]: 2025-10-01 13:41:15.530824971 +0000 UTC m=+0.143607894 container init d9f4470199c09aca864fc948dad0474c81b2d66af3be27a10769035d1893e421 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_wilbur, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 09:41:15 np0005464214 podman[274084]: 2025-10-01 13:41:15.547107881 +0000 UTC m=+0.159890764 container start d9f4470199c09aca864fc948dad0474c81b2d66af3be27a10769035d1893e421 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_wilbur, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 09:41:15 np0005464214 podman[274084]: 2025-10-01 13:41:15.563670788 +0000 UTC m=+0.176453711 container attach d9f4470199c09aca864fc948dad0474c81b2d66af3be27a10769035d1893e421 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_wilbur, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 09:41:16 np0005464214 charming_wilbur[274101]: {
Oct  1 09:41:16 np0005464214 charming_wilbur[274101]:    "0": [
Oct  1 09:41:16 np0005464214 charming_wilbur[274101]:        {
Oct  1 09:41:16 np0005464214 charming_wilbur[274101]:            "devices": [
Oct  1 09:41:16 np0005464214 charming_wilbur[274101]:                "/dev/loop3"
Oct  1 09:41:16 np0005464214 charming_wilbur[274101]:            ],
Oct  1 09:41:16 np0005464214 charming_wilbur[274101]:            "lv_name": "ceph_lv0",
Oct  1 09:41:16 np0005464214 charming_wilbur[274101]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  1 09:41:16 np0005464214 charming_wilbur[274101]:            "lv_size": "21470642176",
Oct  1 09:41:16 np0005464214 charming_wilbur[274101]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 09:41:16 np0005464214 charming_wilbur[274101]:            "lv_uuid": "wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS",
Oct  1 09:41:16 np0005464214 charming_wilbur[274101]:            "name": "ceph_lv0",
Oct  1 09:41:16 np0005464214 charming_wilbur[274101]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  1 09:41:16 np0005464214 charming_wilbur[274101]:            "tags": {
Oct  1 09:41:16 np0005464214 charming_wilbur[274101]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  1 09:41:16 np0005464214 charming_wilbur[274101]:                "ceph.block_uuid": "wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS",
Oct  1 09:41:16 np0005464214 charming_wilbur[274101]:                "ceph.cephx_lockbox_secret": "",
Oct  1 09:41:16 np0005464214 charming_wilbur[274101]:                "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 09:41:16 np0005464214 charming_wilbur[274101]:                "ceph.cluster_name": "ceph",
Oct  1 09:41:16 np0005464214 charming_wilbur[274101]:                "ceph.crush_device_class": "",
Oct  1 09:41:16 np0005464214 charming_wilbur[274101]:                "ceph.encrypted": "0",
Oct  1 09:41:16 np0005464214 charming_wilbur[274101]:                "ceph.osd_fsid": "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982",
Oct  1 09:41:16 np0005464214 charming_wilbur[274101]:                "ceph.osd_id": "0",
Oct  1 09:41:16 np0005464214 charming_wilbur[274101]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 09:41:16 np0005464214 charming_wilbur[274101]:                "ceph.type": "block",
Oct  1 09:41:16 np0005464214 charming_wilbur[274101]:                "ceph.vdo": "0"
Oct  1 09:41:16 np0005464214 charming_wilbur[274101]:            },
Oct  1 09:41:16 np0005464214 charming_wilbur[274101]:            "type": "block",
Oct  1 09:41:16 np0005464214 charming_wilbur[274101]:            "vg_name": "ceph_vg0"
Oct  1 09:41:16 np0005464214 charming_wilbur[274101]:        }
Oct  1 09:41:16 np0005464214 charming_wilbur[274101]:    ],
Oct  1 09:41:16 np0005464214 charming_wilbur[274101]:    "1": [
Oct  1 09:41:16 np0005464214 charming_wilbur[274101]:        {
Oct  1 09:41:16 np0005464214 charming_wilbur[274101]:            "devices": [
Oct  1 09:41:16 np0005464214 charming_wilbur[274101]:                "/dev/loop4"
Oct  1 09:41:16 np0005464214 charming_wilbur[274101]:            ],
Oct  1 09:41:16 np0005464214 charming_wilbur[274101]:            "lv_name": "ceph_lv1",
Oct  1 09:41:16 np0005464214 charming_wilbur[274101]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  1 09:41:16 np0005464214 charming_wilbur[274101]:            "lv_size": "21470642176",
Oct  1 09:41:16 np0005464214 charming_wilbur[274101]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f5852bc7-e830-489a-b8a9-42dfbbe71426,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 09:41:16 np0005464214 charming_wilbur[274101]:            "lv_uuid": "BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY",
Oct  1 09:41:16 np0005464214 charming_wilbur[274101]:            "name": "ceph_lv1",
Oct  1 09:41:16 np0005464214 charming_wilbur[274101]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  1 09:41:16 np0005464214 charming_wilbur[274101]:            "tags": {
Oct  1 09:41:16 np0005464214 charming_wilbur[274101]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  1 09:41:16 np0005464214 charming_wilbur[274101]:                "ceph.block_uuid": "BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY",
Oct  1 09:41:16 np0005464214 charming_wilbur[274101]:                "ceph.cephx_lockbox_secret": "",
Oct  1 09:41:16 np0005464214 charming_wilbur[274101]:                "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 09:41:16 np0005464214 charming_wilbur[274101]:                "ceph.cluster_name": "ceph",
Oct  1 09:41:16 np0005464214 charming_wilbur[274101]:                "ceph.crush_device_class": "",
Oct  1 09:41:16 np0005464214 charming_wilbur[274101]:                "ceph.encrypted": "0",
Oct  1 09:41:16 np0005464214 charming_wilbur[274101]:                "ceph.osd_fsid": "f5852bc7-e830-489a-b8a9-42dfbbe71426",
Oct  1 09:41:16 np0005464214 charming_wilbur[274101]:                "ceph.osd_id": "1",
Oct  1 09:41:16 np0005464214 charming_wilbur[274101]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 09:41:16 np0005464214 charming_wilbur[274101]:                "ceph.type": "block",
Oct  1 09:41:16 np0005464214 charming_wilbur[274101]:                "ceph.vdo": "0"
Oct  1 09:41:16 np0005464214 charming_wilbur[274101]:            },
Oct  1 09:41:16 np0005464214 charming_wilbur[274101]:            "type": "block",
Oct  1 09:41:16 np0005464214 charming_wilbur[274101]:            "vg_name": "ceph_vg1"
Oct  1 09:41:16 np0005464214 charming_wilbur[274101]:        }
Oct  1 09:41:16 np0005464214 charming_wilbur[274101]:    ],
Oct  1 09:41:16 np0005464214 charming_wilbur[274101]:    "2": [
Oct  1 09:41:16 np0005464214 charming_wilbur[274101]:        {
Oct  1 09:41:16 np0005464214 charming_wilbur[274101]:            "devices": [
Oct  1 09:41:16 np0005464214 charming_wilbur[274101]:                "/dev/loop5"
Oct  1 09:41:16 np0005464214 charming_wilbur[274101]:            ],
Oct  1 09:41:16 np0005464214 charming_wilbur[274101]:            "lv_name": "ceph_lv2",
Oct  1 09:41:16 np0005464214 charming_wilbur[274101]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  1 09:41:16 np0005464214 charming_wilbur[274101]:            "lv_size": "21470642176",
Oct  1 09:41:16 np0005464214 charming_wilbur[274101]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c4c937e2-a8a8-47c3-af37-fdedb6fff1f9,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 09:41:16 np0005464214 charming_wilbur[274101]:            "lv_uuid": "mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK",
Oct  1 09:41:16 np0005464214 charming_wilbur[274101]:            "name": "ceph_lv2",
Oct  1 09:41:16 np0005464214 charming_wilbur[274101]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  1 09:41:16 np0005464214 charming_wilbur[274101]:            "tags": {
Oct  1 09:41:16 np0005464214 charming_wilbur[274101]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  1 09:41:16 np0005464214 charming_wilbur[274101]:                "ceph.block_uuid": "mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK",
Oct  1 09:41:16 np0005464214 charming_wilbur[274101]:                "ceph.cephx_lockbox_secret": "",
Oct  1 09:41:16 np0005464214 charming_wilbur[274101]:                "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 09:41:16 np0005464214 charming_wilbur[274101]:                "ceph.cluster_name": "ceph",
Oct  1 09:41:16 np0005464214 charming_wilbur[274101]:                "ceph.crush_device_class": "",
Oct  1 09:41:16 np0005464214 charming_wilbur[274101]:                "ceph.encrypted": "0",
Oct  1 09:41:16 np0005464214 charming_wilbur[274101]:                "ceph.osd_fsid": "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9",
Oct  1 09:41:16 np0005464214 charming_wilbur[274101]:                "ceph.osd_id": "2",
Oct  1 09:41:16 np0005464214 charming_wilbur[274101]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 09:41:16 np0005464214 charming_wilbur[274101]:                "ceph.type": "block",
Oct  1 09:41:16 np0005464214 charming_wilbur[274101]:                "ceph.vdo": "0"
Oct  1 09:41:16 np0005464214 charming_wilbur[274101]:            },
Oct  1 09:41:16 np0005464214 charming_wilbur[274101]:            "type": "block",
Oct  1 09:41:16 np0005464214 charming_wilbur[274101]:            "vg_name": "ceph_vg2"
Oct  1 09:41:16 np0005464214 charming_wilbur[274101]:        }
Oct  1 09:41:16 np0005464214 charming_wilbur[274101]:    ]
Oct  1 09:41:16 np0005464214 charming_wilbur[274101]: }
Oct  1 09:41:16 np0005464214 systemd[1]: libpod-d9f4470199c09aca864fc948dad0474c81b2d66af3be27a10769035d1893e421.scope: Deactivated successfully.
Oct  1 09:41:16 np0005464214 podman[274084]: 2025-10-01 13:41:16.474493686 +0000 UTC m=+1.087276589 container died d9f4470199c09aca864fc948dad0474c81b2d66af3be27a10769035d1893e421 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_wilbur, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 09:41:16 np0005464214 systemd[1]: var-lib-containers-storage-overlay-b7f7dd2ba62cf58364c5916c51953c9cee23f8a5556c0c1b605f83560307cd1a-merged.mount: Deactivated successfully.
Oct  1 09:41:16 np0005464214 podman[274084]: 2025-10-01 13:41:16.529941233 +0000 UTC m=+1.142724076 container remove d9f4470199c09aca864fc948dad0474c81b2d66af3be27a10769035d1893e421 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_wilbur, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef)
Oct  1 09:41:16 np0005464214 systemd[1]: libpod-conmon-d9f4470199c09aca864fc948dad0474c81b2d66af3be27a10769035d1893e421.scope: Deactivated successfully.
Oct  1 09:41:16 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1109: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Oct  1 09:41:16 np0005464214 podman[274127]: 2025-10-01 13:41:16.661286176 +0000 UTC m=+0.073926726 container health_status a1dc94033b3cdaf0c1939938a446bc6911aa0e2bf0fa0c22c3e618390e8d96e1 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20250923, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=36bccb96575468ec919301205d8daa2c, container_name=multipathd)
Oct  1 09:41:16 np0005464214 podman[274125]: 2025-10-01 13:41:16.671746919 +0000 UTC m=+0.095064919 container health_status c13c85560319b246b9406aed1b6b3015b2c424d3ffff37d0301b6634f5976b0d (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, config_id=iscsid, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20250923, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 09:41:16 np0005464214 podman[274189]: 2025-10-01 13:41:16.772851929 +0000 UTC m=+0.071626773 container health_status dfa509645741d50db256c392f2c7e50ef8a1653f296eb853aefbe3d2ba4746c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250923, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Oct  1 09:41:16 np0005464214 podman[274188]: 2025-10-01 13:41:16.830220146 +0000 UTC m=+0.137149929 container health_status 583cde33fa111e3f81ff9a7adf51b7bee43cd123d54d60383b2d474b3b5066ad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20250923)
Oct  1 09:41:17 np0005464214 podman[274349]: 2025-10-01 13:41:17.309361806 +0000 UTC m=+0.057499492 container create 7dac978be191e49b9120a391c02dd08cdc924d49b09ba6dbebaea2a631b29348 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_jemison, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 09:41:17 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 09:41:17 np0005464214 systemd[1]: Started libpod-conmon-7dac978be191e49b9120a391c02dd08cdc924d49b09ba6dbebaea2a631b29348.scope.
Oct  1 09:41:17 np0005464214 podman[274349]: 2025-10-01 13:41:17.28686379 +0000 UTC m=+0.035001566 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 09:41:17 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:41:17 np0005464214 podman[274349]: 2025-10-01 13:41:17.445686379 +0000 UTC m=+0.193824145 container init 7dac978be191e49b9120a391c02dd08cdc924d49b09ba6dbebaea2a631b29348 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_jemison, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef)
Oct  1 09:41:17 np0005464214 podman[274349]: 2025-10-01 13:41:17.458492326 +0000 UTC m=+0.206630032 container start 7dac978be191e49b9120a391c02dd08cdc924d49b09ba6dbebaea2a631b29348 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_jemison, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 09:41:17 np0005464214 podman[274349]: 2025-10-01 13:41:17.463051381 +0000 UTC m=+0.211189147 container attach 7dac978be191e49b9120a391c02dd08cdc924d49b09ba6dbebaea2a631b29348 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_jemison, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct  1 09:41:17 np0005464214 stupefied_jemison[274366]: 167 167
Oct  1 09:41:17 np0005464214 systemd[1]: libpod-7dac978be191e49b9120a391c02dd08cdc924d49b09ba6dbebaea2a631b29348.scope: Deactivated successfully.
Oct  1 09:41:17 np0005464214 conmon[274366]: conmon 7dac978be191e49b9120 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-7dac978be191e49b9120a391c02dd08cdc924d49b09ba6dbebaea2a631b29348.scope/container/memory.events
Oct  1 09:41:17 np0005464214 podman[274349]: 2025-10-01 13:41:17.469210487 +0000 UTC m=+0.217348233 container died 7dac978be191e49b9120a391c02dd08cdc924d49b09ba6dbebaea2a631b29348 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_jemison, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct  1 09:41:17 np0005464214 systemd[1]: var-lib-containers-storage-overlay-ed1d2a9b95535c15b5330ead9cc75967cfb42ccf3d84dd112d9fb1d57df1b71d-merged.mount: Deactivated successfully.
Oct  1 09:41:17 np0005464214 podman[274349]: 2025-10-01 13:41:17.555976131 +0000 UTC m=+0.304113847 container remove 7dac978be191e49b9120a391c02dd08cdc924d49b09ba6dbebaea2a631b29348 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_jemison, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Oct  1 09:41:17 np0005464214 systemd[1]: libpod-conmon-7dac978be191e49b9120a391c02dd08cdc924d49b09ba6dbebaea2a631b29348.scope: Deactivated successfully.
Oct  1 09:41:17 np0005464214 podman[274389]: 2025-10-01 13:41:17.814092222 +0000 UTC m=+0.063947848 container create 4f0e70520efacce56e1757e49efcef209ebc157d7b0264e3078cf32485134a9c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_satoshi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 09:41:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 09:41:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 09:41:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 09:41:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 09:41:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 09:41:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 09:41:17 np0005464214 systemd[1]: Started libpod-conmon-4f0e70520efacce56e1757e49efcef209ebc157d7b0264e3078cf32485134a9c.scope.
Oct  1 09:41:17 np0005464214 podman[274389]: 2025-10-01 13:41:17.789447257 +0000 UTC m=+0.039302873 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 09:41:17 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:41:17 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/601897ff82c3bb32557cc7dc4cba243033ffd332fdbdc9773a4b1a2c9d5e6111/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 09:41:17 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/601897ff82c3bb32557cc7dc4cba243033ffd332fdbdc9773a4b1a2c9d5e6111/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 09:41:17 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/601897ff82c3bb32557cc7dc4cba243033ffd332fdbdc9773a4b1a2c9d5e6111/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 09:41:17 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/601897ff82c3bb32557cc7dc4cba243033ffd332fdbdc9773a4b1a2c9d5e6111/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 09:41:17 np0005464214 podman[274389]: 2025-10-01 13:41:17.915999658 +0000 UTC m=+0.165855314 container init 4f0e70520efacce56e1757e49efcef209ebc157d7b0264e3078cf32485134a9c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_satoshi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct  1 09:41:17 np0005464214 podman[274389]: 2025-10-01 13:41:17.930664564 +0000 UTC m=+0.180520190 container start 4f0e70520efacce56e1757e49efcef209ebc157d7b0264e3078cf32485134a9c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_satoshi, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct  1 09:41:17 np0005464214 podman[274389]: 2025-10-01 13:41:17.934553188 +0000 UTC m=+0.184408814 container attach 4f0e70520efacce56e1757e49efcef209ebc157d7b0264e3078cf32485134a9c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_satoshi, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Oct  1 09:41:18 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1110: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Oct  1 09:41:19 np0005464214 kind_satoshi[274406]: {
Oct  1 09:41:19 np0005464214 kind_satoshi[274406]:    "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982": {
Oct  1 09:41:19 np0005464214 kind_satoshi[274406]:        "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 09:41:19 np0005464214 kind_satoshi[274406]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  1 09:41:19 np0005464214 kind_satoshi[274406]:        "osd_id": 0,
Oct  1 09:41:19 np0005464214 kind_satoshi[274406]:        "osd_uuid": "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982",
Oct  1 09:41:19 np0005464214 kind_satoshi[274406]:        "type": "bluestore"
Oct  1 09:41:19 np0005464214 kind_satoshi[274406]:    },
Oct  1 09:41:19 np0005464214 kind_satoshi[274406]:    "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9": {
Oct  1 09:41:19 np0005464214 kind_satoshi[274406]:        "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 09:41:19 np0005464214 kind_satoshi[274406]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  1 09:41:19 np0005464214 kind_satoshi[274406]:        "osd_id": 2,
Oct  1 09:41:19 np0005464214 kind_satoshi[274406]:        "osd_uuid": "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9",
Oct  1 09:41:19 np0005464214 kind_satoshi[274406]:        "type": "bluestore"
Oct  1 09:41:19 np0005464214 kind_satoshi[274406]:    },
Oct  1 09:41:19 np0005464214 kind_satoshi[274406]:    "f5852bc7-e830-489a-b8a9-42dfbbe71426": {
Oct  1 09:41:19 np0005464214 kind_satoshi[274406]:        "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 09:41:19 np0005464214 kind_satoshi[274406]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  1 09:41:19 np0005464214 kind_satoshi[274406]:        "osd_id": 1,
Oct  1 09:41:19 np0005464214 kind_satoshi[274406]:        "osd_uuid": "f5852bc7-e830-489a-b8a9-42dfbbe71426",
Oct  1 09:41:19 np0005464214 kind_satoshi[274406]:        "type": "bluestore"
Oct  1 09:41:19 np0005464214 kind_satoshi[274406]:    }
Oct  1 09:41:19 np0005464214 kind_satoshi[274406]: }
Oct  1 09:41:19 np0005464214 systemd[1]: libpod-4f0e70520efacce56e1757e49efcef209ebc157d7b0264e3078cf32485134a9c.scope: Deactivated successfully.
Oct  1 09:41:19 np0005464214 systemd[1]: libpod-4f0e70520efacce56e1757e49efcef209ebc157d7b0264e3078cf32485134a9c.scope: Consumed 1.125s CPU time.
Oct  1 09:41:19 np0005464214 podman[274439]: 2025-10-01 13:41:19.102712033 +0000 UTC m=+0.038244760 container died 4f0e70520efacce56e1757e49efcef209ebc157d7b0264e3078cf32485134a9c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_satoshi, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 09:41:19 np0005464214 systemd[1]: var-lib-containers-storage-overlay-601897ff82c3bb32557cc7dc4cba243033ffd332fdbdc9773a4b1a2c9d5e6111-merged.mount: Deactivated successfully.
Oct  1 09:41:19 np0005464214 podman[274439]: 2025-10-01 13:41:19.185480759 +0000 UTC m=+0.121013436 container remove 4f0e70520efacce56e1757e49efcef209ebc157d7b0264e3078cf32485134a9c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_satoshi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct  1 09:41:19 np0005464214 systemd[1]: libpod-conmon-4f0e70520efacce56e1757e49efcef209ebc157d7b0264e3078cf32485134a9c.scope: Deactivated successfully.
Oct  1 09:41:19 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  1 09:41:19 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:41:19 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  1 09:41:19 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:41:19 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev 84a3e75f-54fc-48d1-baa0-dfd2083e3188 does not exist
Oct  1 09:41:19 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev 7a4718d2-02ba-4cb0-a94b-c99ba93ef0e9 does not exist
Oct  1 09:41:19 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:41:19 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:41:20 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1111: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 0 B/s wr, 3 op/s
Oct  1 09:41:22 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 09:41:22 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1112: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 0 B/s wr, 3 op/s
Oct  1 09:41:24 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1113: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:41:26 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1114: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:41:27 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 09:41:28 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1115: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:41:30 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1116: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:41:32 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 09:41:32 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1117: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:41:34 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1118: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:41:36 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1119: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:41:37 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 09:41:38 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1120: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:41:40 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1121: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:41:42 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 09:41:42 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1122: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:41:44 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1123: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:41:46 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1124: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:41:47 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 09:41:47 np0005464214 podman[274507]: 2025-10-01 13:41:47.533114296 +0000 UTC m=+0.078452820 container health_status c13c85560319b246b9406aed1b6b3015b2c424d3ffff37d0301b6634f5976b0d (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=iscsid, org.label-schema.build-date=20250923, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct  1 09:41:47 np0005464214 podman[274506]: 2025-10-01 13:41:47.538656292 +0000 UTC m=+0.086407823 container health_status a1dc94033b3cdaf0c1939938a446bc6911aa0e2bf0fa0c22c3e618390e8d96e1 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20250923, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c, io.buildah.version=1.41.3)
Oct  1 09:41:47 np0005464214 podman[274508]: 2025-10-01 13:41:47.551847923 +0000 UTC m=+0.091384062 container health_status dfa509645741d50db256c392f2c7e50ef8a1653f296eb853aefbe3d2ba4746c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20250923)
Oct  1 09:41:47 np0005464214 podman[274505]: 2025-10-01 13:41:47.56559249 +0000 UTC m=+0.111208403 container health_status 583cde33fa111e3f81ff9a7adf51b7bee43cd123d54d60383b2d474b3b5066ad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250923, tcib_managed=true, managed_by=edpm_ansible)
Oct  1 09:41:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 09:41:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 09:41:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 09:41:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 09:41:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 09:41:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 09:41:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] Optimize plan auto_2025-10-01_13:41:47
Oct  1 09:41:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  1 09:41:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] do_upmap
Oct  1 09:41:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] pools ['images', 'default.rgw.meta', '.rgw.root', 'default.rgw.control', 'backups', 'cephfs.cephfs.data', '.mgr', 'cephfs.cephfs.meta', 'volumes', 'default.rgw.log', 'vms']
Oct  1 09:41:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] prepared 0/10 changes
Oct  1 09:41:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  1 09:41:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  1 09:41:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  1 09:41:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  1 09:41:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  1 09:41:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  1 09:41:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  1 09:41:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  1 09:41:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  1 09:41:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  1 09:41:48 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1125: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:41:50 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1126: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:41:52 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 09:41:52 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1127: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:41:54 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1128: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:41:55 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  1 09:41:55 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1595201962' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  1 09:41:55 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  1 09:41:55 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1595201962' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  1 09:41:56 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1129: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:41:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] _maybe_adjust
Oct  1 09:41:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:41:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  1 09:41:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:41:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 09:41:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:41:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 09:41:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:41:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 09:41:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:41:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 3.1795353910268934e-07 of space, bias 1.0, pg target 9.53860617308068e-05 quantized to 32 (current 32)
Oct  1 09:41:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:41:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct  1 09:41:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:41:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 09:41:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:41:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  1 09:41:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:41:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  1 09:41:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:41:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 09:41:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:41:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  1 09:41:57 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 09:41:58 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1130: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:42:00 np0005464214 nova_compute[260022]: 2025-10-01 13:42:00.344 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 09:42:00 np0005464214 nova_compute[260022]: 2025-10-01 13:42:00.345 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 09:42:00 np0005464214 nova_compute[260022]: 2025-10-01 13:42:00.375 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 09:42:00 np0005464214 nova_compute[260022]: 2025-10-01 13:42:00.375 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 09:42:00 np0005464214 nova_compute[260022]: 2025-10-01 13:42:00.376 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 09:42:00 np0005464214 nova_compute[260022]: 2025-10-01 13:42:00.376 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  1 09:42:00 np0005464214 nova_compute[260022]: 2025-10-01 13:42:00.376 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 09:42:00 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1131: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:42:00 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  1 09:42:00 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2084272842' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  1 09:42:00 np0005464214 nova_compute[260022]: 2025-10-01 13:42:00.856 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.480s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 09:42:01 np0005464214 nova_compute[260022]: 2025-10-01 13:42:01.026 2 WARNING nova.virt.libvirt.driver [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  1 09:42:01 np0005464214 nova_compute[260022]: 2025-10-01 13:42:01.028 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5153MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  1 09:42:01 np0005464214 nova_compute[260022]: 2025-10-01 13:42:01.028 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 09:42:01 np0005464214 nova_compute[260022]: 2025-10-01 13:42:01.028 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 09:42:01 np0005464214 nova_compute[260022]: 2025-10-01 13:42:01.096 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  1 09:42:01 np0005464214 nova_compute[260022]: 2025-10-01 13:42:01.096 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  1 09:42:01 np0005464214 nova_compute[260022]: 2025-10-01 13:42:01.140 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 09:42:01 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  1 09:42:01 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/901455468' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  1 09:42:01 np0005464214 nova_compute[260022]: 2025-10-01 13:42:01.579 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.439s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 09:42:01 np0005464214 nova_compute[260022]: 2025-10-01 13:42:01.585 2 DEBUG nova.compute.provider_tree [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Inventory has not changed in ProviderTree for provider: c1b9017d-7e6f-44ea-9ee2-bc19313d736f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  1 09:42:01 np0005464214 nova_compute[260022]: 2025-10-01 13:42:01.599 2 DEBUG nova.scheduler.client.report [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Inventory has not changed for provider c1b9017d-7e6f-44ea-9ee2-bc19313d736f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  1 09:42:01 np0005464214 nova_compute[260022]: 2025-10-01 13:42:01.600 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  1 09:42:01 np0005464214 nova_compute[260022]: 2025-10-01 13:42:01.600 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.572s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 09:42:02 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 09:42:02 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1132: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:42:04 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e144 do_prune osdmap full prune enabled
Oct  1 09:42:04 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e145 e145: 3 total, 3 up, 3 in
Oct  1 09:42:04 np0005464214 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e145: 3 total, 3 up, 3 in
Oct  1 09:42:04 np0005464214 nova_compute[260022]: 2025-10-01 13:42:04.596 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 09:42:04 np0005464214 nova_compute[260022]: 2025-10-01 13:42:04.597 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 09:42:04 np0005464214 nova_compute[260022]: 2025-10-01 13:42:04.597 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  1 09:42:04 np0005464214 nova_compute[260022]: 2025-10-01 13:42:04.597 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  1 09:42:04 np0005464214 nova_compute[260022]: 2025-10-01 13:42:04.612 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  1 09:42:04 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1134: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:42:05 np0005464214 nova_compute[260022]: 2025-10-01 13:42:05.345 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 09:42:05 np0005464214 nova_compute[260022]: 2025-10-01 13:42:05.345 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 09:42:05 np0005464214 nova_compute[260022]: 2025-10-01 13:42:05.346 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  1 09:42:06 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e145 do_prune osdmap full prune enabled
Oct  1 09:42:06 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e146 e146: 3 total, 3 up, 3 in
Oct  1 09:42:06 np0005464214 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e146: 3 total, 3 up, 3 in
Oct  1 09:42:06 np0005464214 ceph-mon[74802]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #51. Immutable memtables: 0.
Oct  1 09:42:06 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:42:06.321778) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  1 09:42:06 np0005464214 ceph-mon[74802]: rocksdb: [db/flush_job.cc:856] [default] [JOB 25] Flushing memtable with next log file: 51
Oct  1 09:42:06 np0005464214 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759326126321852, "job": 25, "event": "flush_started", "num_memtables": 1, "num_entries": 2109, "num_deletes": 254, "total_data_size": 3510436, "memory_usage": 3582264, "flush_reason": "Manual Compaction"}
Oct  1 09:42:06 np0005464214 ceph-mon[74802]: rocksdb: [db/flush_job.cc:885] [default] [JOB 25] Level-0 flush table #52: started
Oct  1 09:42:06 np0005464214 nova_compute[260022]: 2025-10-01 13:42:06.345 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 09:42:06 np0005464214 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759326126349109, "cf_name": "default", "job": 25, "event": "table_file_creation", "file_number": 52, "file_size": 3431667, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 21043, "largest_seqno": 23151, "table_properties": {"data_size": 3421973, "index_size": 6188, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2437, "raw_key_size": 19312, "raw_average_key_size": 20, "raw_value_size": 3402709, "raw_average_value_size": 3563, "num_data_blocks": 279, "num_entries": 955, "num_filter_entries": 955, "num_deletions": 254, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759325907, "oldest_key_time": 1759325907, "file_creation_time": 1759326126, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e1fb0bd7-c6bf-40f2-8bfb-ffd039289f42", "db_session_id": "NJZTWL88H5HSB4Q4NEC9", "orig_file_number": 52, "seqno_to_time_mapping": "N/A"}}
Oct  1 09:42:06 np0005464214 ceph-mon[74802]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 25] Flush lasted 28193 microseconds, and 14147 cpu microseconds.
Oct  1 09:42:06 np0005464214 ceph-mon[74802]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  1 09:42:06 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:42:06.349959) [db/flush_job.cc:967] [default] [JOB 25] Level-0 flush table #52: 3431667 bytes OK
Oct  1 09:42:06 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:42:06.350275) [db/memtable_list.cc:519] [default] Level-0 commit table #52 started
Oct  1 09:42:06 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:42:06.352509) [db/memtable_list.cc:722] [default] Level-0 commit table #52: memtable #1 done
Oct  1 09:42:06 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:42:06.352557) EVENT_LOG_v1 {"time_micros": 1759326126352547, "job": 25, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  1 09:42:06 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:42:06.352601) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  1 09:42:06 np0005464214 ceph-mon[74802]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 25] Try to delete WAL files size 3501587, prev total WAL file size 3501587, number of live WAL files 2.
Oct  1 09:42:06 np0005464214 ceph-mon[74802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000048.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  1 09:42:06 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:42:06.355694) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031373537' seq:72057594037927935, type:22 .. '7061786F730032303039' seq:0, type:0; will stop at (end)
Oct  1 09:42:06 np0005464214 ceph-mon[74802]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 26] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  1 09:42:06 np0005464214 ceph-mon[74802]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 25 Base level 0, inputs: [52(3351KB)], [50(7611KB)]
Oct  1 09:42:06 np0005464214 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759326126355817, "job": 26, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [52], "files_L6": [50], "score": -1, "input_data_size": 11225646, "oldest_snapshot_seqno": -1}
Oct  1 09:42:06 np0005464214 ceph-mon[74802]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 26] Generated table #53: 4785 keys, 9455610 bytes, temperature: kUnknown
Oct  1 09:42:06 np0005464214 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759326126430907, "cf_name": "default", "job": 26, "event": "table_file_creation", "file_number": 53, "file_size": 9455610, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9420384, "index_size": 22188, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11973, "raw_key_size": 117220, "raw_average_key_size": 24, "raw_value_size": 9330638, "raw_average_value_size": 1949, "num_data_blocks": 932, "num_entries": 4785, "num_filter_entries": 4785, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759324077, "oldest_key_time": 0, "file_creation_time": 1759326126, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e1fb0bd7-c6bf-40f2-8bfb-ffd039289f42", "db_session_id": "NJZTWL88H5HSB4Q4NEC9", "orig_file_number": 53, "seqno_to_time_mapping": "N/A"}}
Oct  1 09:42:06 np0005464214 ceph-mon[74802]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  1 09:42:06 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:42:06.431258) [db/compaction/compaction_job.cc:1663] [default] [JOB 26] Compacted 1@0 + 1@6 files to L6 => 9455610 bytes
Oct  1 09:42:06 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:42:06.432761) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 149.3 rd, 125.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.3, 7.4 +0.0 blob) out(9.0 +0.0 blob), read-write-amplify(6.0) write-amplify(2.8) OK, records in: 5306, records dropped: 521 output_compression: NoCompression
Oct  1 09:42:06 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:42:06.432791) EVENT_LOG_v1 {"time_micros": 1759326126432775, "job": 26, "event": "compaction_finished", "compaction_time_micros": 75185, "compaction_time_cpu_micros": 40494, "output_level": 6, "num_output_files": 1, "total_output_size": 9455610, "num_input_records": 5306, "num_output_records": 4785, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  1 09:42:06 np0005464214 ceph-mon[74802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000052.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  1 09:42:06 np0005464214 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759326126434238, "job": 26, "event": "table_file_deletion", "file_number": 52}
Oct  1 09:42:06 np0005464214 ceph-mon[74802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000050.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  1 09:42:06 np0005464214 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759326126437200, "job": 26, "event": "table_file_deletion", "file_number": 50}
Oct  1 09:42:06 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:42:06.355556) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 09:42:06 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:42:06.437278) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 09:42:06 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:42:06.437287) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 09:42:06 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:42:06.437291) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 09:42:06 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:42:06.437295) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 09:42:06 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:42:06.437299) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 09:42:06 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1136: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:42:07 np0005464214 nova_compute[260022]: 2025-10-01 13:42:07.345 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 09:42:07 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 09:42:08 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1137: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 33 KiB/s rd, 5.1 MiB/s wr, 47 op/s
Oct  1 09:42:10 np0005464214 nova_compute[260022]: 2025-10-01 13:42:10.345 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 09:42:10 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1138: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 33 KiB/s rd, 5.1 MiB/s wr, 47 op/s
Oct  1 09:42:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:42:12.310 161890 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 09:42:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:42:12.311 161890 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 09:42:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:42:12.311 161890 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 09:42:12 np0005464214 nova_compute[260022]: 2025-10-01 13:42:12.341 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 09:42:12 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 09:42:12 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1139: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 31 KiB/s rd, 4.8 MiB/s wr, 44 op/s
Oct  1 09:42:14 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1140: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 4.1 MiB/s wr, 38 op/s
Oct  1 09:42:16 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1141: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 4.0 MiB/s wr, 36 op/s
Oct  1 09:42:17 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 09:42:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 09:42:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 09:42:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 09:42:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 09:42:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 09:42:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 09:42:18 np0005464214 podman[274634]: 2025-10-01 13:42:18.520794692 +0000 UTC m=+0.067745769 container health_status c13c85560319b246b9406aed1b6b3015b2c424d3ffff37d0301b6634f5976b0d (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=iscsid, container_name=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20250923, tcib_build_tag=36bccb96575468ec919301205d8daa2c)
Oct  1 09:42:18 np0005464214 podman[274640]: 2025-10-01 13:42:18.526175523 +0000 UTC m=+0.066712435 container health_status dfa509645741d50db256c392f2c7e50ef8a1653f296eb853aefbe3d2ba4746c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20250923, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 09:42:18 np0005464214 podman[274633]: 2025-10-01 13:42:18.54241475 +0000 UTC m=+0.087158677 container health_status a1dc94033b3cdaf0c1939938a446bc6911aa0e2bf0fa0c22c3e618390e8d96e1 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.schema-version=1.0, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20250923, tcib_build_tag=36bccb96575468ec919301205d8daa2c)
Oct  1 09:42:18 np0005464214 podman[274632]: 2025-10-01 13:42:18.543900107 +0000 UTC m=+0.100353277 container health_status 583cde33fa111e3f81ff9a7adf51b7bee43cd123d54d60383b2d474b3b5066ad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20250923, config_id=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=36bccb96575468ec919301205d8daa2c, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Oct  1 09:42:18 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1142: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 3.4 MiB/s wr, 31 op/s
Oct  1 09:42:20 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  1 09:42:20 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:42:20 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  1 09:42:20 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:42:20 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1143: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:42:21 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Oct  1 09:42:21 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct  1 09:42:21 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  1 09:42:21 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  1 09:42:21 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  1 09:42:21 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  1 09:42:21 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:42:21 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:42:21 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct  1 09:42:21 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  1 09:42:21 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  1 09:42:21 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:42:21 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev 1a92c012-86e5-4015-8235-c6f6b6199707 does not exist
Oct  1 09:42:21 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev 2b564483-70fb-45e0-a3bc-c84c5115da4f does not exist
Oct  1 09:42:21 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev 16d182fc-feb6-4909-8b5b-971e2f5a0473 does not exist
Oct  1 09:42:21 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  1 09:42:21 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  1 09:42:21 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  1 09:42:21 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  1 09:42:21 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  1 09:42:21 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  1 09:42:21 np0005464214 podman[275103]: 2025-10-01 13:42:21.956273928 +0000 UTC m=+0.083233162 container create 7f3923e0f525ce828cd4bd89921f29d2cd10a155fbe1cc0208bbf67af28c5cb1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_proskuriakova, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 09:42:22 np0005464214 podman[275103]: 2025-10-01 13:42:21.906450951 +0000 UTC m=+0.033410225 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 09:42:22 np0005464214 systemd[1]: Started libpod-conmon-7f3923e0f525ce828cd4bd89921f29d2cd10a155fbe1cc0208bbf67af28c5cb1.scope.
Oct  1 09:42:22 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:42:22 np0005464214 podman[275103]: 2025-10-01 13:42:22.063862005 +0000 UTC m=+0.190821279 container init 7f3923e0f525ce828cd4bd89921f29d2cd10a155fbe1cc0208bbf67af28c5cb1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_proskuriakova, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Oct  1 09:42:22 np0005464214 podman[275103]: 2025-10-01 13:42:22.072284543 +0000 UTC m=+0.199243787 container start 7f3923e0f525ce828cd4bd89921f29d2cd10a155fbe1cc0208bbf67af28c5cb1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_proskuriakova, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct  1 09:42:22 np0005464214 podman[275103]: 2025-10-01 13:42:22.075924899 +0000 UTC m=+0.202884163 container attach 7f3923e0f525ce828cd4bd89921f29d2cd10a155fbe1cc0208bbf67af28c5cb1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_proskuriakova, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  1 09:42:22 np0005464214 wonderful_proskuriakova[275119]: 167 167
Oct  1 09:42:22 np0005464214 systemd[1]: libpod-7f3923e0f525ce828cd4bd89921f29d2cd10a155fbe1cc0208bbf67af28c5cb1.scope: Deactivated successfully.
Oct  1 09:42:22 np0005464214 conmon[275119]: conmon 7f3923e0f525ce828cd4 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-7f3923e0f525ce828cd4bd89921f29d2cd10a155fbe1cc0208bbf67af28c5cb1.scope/container/memory.events
Oct  1 09:42:22 np0005464214 podman[275103]: 2025-10-01 13:42:22.081092044 +0000 UTC m=+0.208051318 container died 7f3923e0f525ce828cd4bd89921f29d2cd10a155fbe1cc0208bbf67af28c5cb1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_proskuriakova, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Oct  1 09:42:22 np0005464214 systemd[1]: var-lib-containers-storage-overlay-8e738eebbee6880ebf78fd4f8de856201171cc03857962d9bdcf63d489a3e14b-merged.mount: Deactivated successfully.
Oct  1 09:42:22 np0005464214 podman[275103]: 2025-10-01 13:42:22.142804729 +0000 UTC m=+0.269764003 container remove 7f3923e0f525ce828cd4bd89921f29d2cd10a155fbe1cc0208bbf67af28c5cb1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_proskuriakova, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 09:42:22 np0005464214 systemd[1]: libpod-conmon-7f3923e0f525ce828cd4bd89921f29d2cd10a155fbe1cc0208bbf67af28c5cb1.scope: Deactivated successfully.
Oct  1 09:42:22 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:42:22 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  1 09:42:22 np0005464214 podman[275145]: 2025-10-01 13:42:22.344854115 +0000 UTC m=+0.067858013 container create 607e130876515a2b9a0270707a589d21f651de63dec5a0fb8dc0ee16c7fd2498 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_leakey, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Oct  1 09:42:22 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 09:42:22 np0005464214 podman[275145]: 2025-10-01 13:42:22.308553688 +0000 UTC m=+0.031557646 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 09:42:22 np0005464214 systemd[1]: Started libpod-conmon-607e130876515a2b9a0270707a589d21f651de63dec5a0fb8dc0ee16c7fd2498.scope.
Oct  1 09:42:22 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:42:22 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4c732013a2bc6a477d85432fac8a31b0bf4166506be08b83e97d38af7959300/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 09:42:22 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4c732013a2bc6a477d85432fac8a31b0bf4166506be08b83e97d38af7959300/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 09:42:22 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4c732013a2bc6a477d85432fac8a31b0bf4166506be08b83e97d38af7959300/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 09:42:22 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4c732013a2bc6a477d85432fac8a31b0bf4166506be08b83e97d38af7959300/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 09:42:22 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4c732013a2bc6a477d85432fac8a31b0bf4166506be08b83e97d38af7959300/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  1 09:42:22 np0005464214 podman[275145]: 2025-10-01 13:42:22.455498118 +0000 UTC m=+0.178502036 container init 607e130876515a2b9a0270707a589d21f651de63dec5a0fb8dc0ee16c7fd2498 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_leakey, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 09:42:22 np0005464214 podman[275145]: 2025-10-01 13:42:22.467937895 +0000 UTC m=+0.190941763 container start 607e130876515a2b9a0270707a589d21f651de63dec5a0fb8dc0ee16c7fd2498 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_leakey, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 09:42:22 np0005464214 podman[275145]: 2025-10-01 13:42:22.509995374 +0000 UTC m=+0.232999252 container attach 607e130876515a2b9a0270707a589d21f651de63dec5a0fb8dc0ee16c7fd2498 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_leakey, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 09:42:22 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1144: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:42:23 np0005464214 crazy_leakey[275162]: --> passed data devices: 0 physical, 3 LVM
Oct  1 09:42:23 np0005464214 crazy_leakey[275162]: --> relative data size: 1.0
Oct  1 09:42:23 np0005464214 crazy_leakey[275162]: --> All data devices are unavailable
Oct  1 09:42:23 np0005464214 systemd[1]: libpod-607e130876515a2b9a0270707a589d21f651de63dec5a0fb8dc0ee16c7fd2498.scope: Deactivated successfully.
Oct  1 09:42:23 np0005464214 systemd[1]: libpod-607e130876515a2b9a0270707a589d21f651de63dec5a0fb8dc0ee16c7fd2498.scope: Consumed 1.083s CPU time.
Oct  1 09:42:23 np0005464214 podman[275145]: 2025-10-01 13:42:23.604287426 +0000 UTC m=+1.327291324 container died 607e130876515a2b9a0270707a589d21f651de63dec5a0fb8dc0ee16c7fd2498 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_leakey, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 09:42:23 np0005464214 systemd[1]: var-lib-containers-storage-overlay-d4c732013a2bc6a477d85432fac8a31b0bf4166506be08b83e97d38af7959300-merged.mount: Deactivated successfully.
Oct  1 09:42:23 np0005464214 podman[275145]: 2025-10-01 13:42:23.824963434 +0000 UTC m=+1.547967322 container remove 607e130876515a2b9a0270707a589d21f651de63dec5a0fb8dc0ee16c7fd2498 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_leakey, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0)
Oct  1 09:42:23 np0005464214 systemd[1]: libpod-conmon-607e130876515a2b9a0270707a589d21f651de63dec5a0fb8dc0ee16c7fd2498.scope: Deactivated successfully.
Oct  1 09:42:24 np0005464214 podman[275348]: 2025-10-01 13:42:24.620009976 +0000 UTC m=+0.053844696 container create a685c7b5ab19b30d9db56d42ca59f3ec7bbb491c3d096a429532a9ca7cbe6272 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_ardinghelli, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct  1 09:42:24 np0005464214 systemd[1]: Started libpod-conmon-a685c7b5ab19b30d9db56d42ca59f3ec7bbb491c3d096a429532a9ca7cbe6272.scope.
Oct  1 09:42:24 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1145: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:42:24 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:42:24 np0005464214 podman[275348]: 2025-10-01 13:42:24.598060487 +0000 UTC m=+0.031895197 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 09:42:24 np0005464214 podman[275348]: 2025-10-01 13:42:24.712289735 +0000 UTC m=+0.146124495 container init a685c7b5ab19b30d9db56d42ca59f3ec7bbb491c3d096a429532a9ca7cbe6272 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_ardinghelli, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 09:42:24 np0005464214 podman[275348]: 2025-10-01 13:42:24.724401001 +0000 UTC m=+0.158235721 container start a685c7b5ab19b30d9db56d42ca59f3ec7bbb491c3d096a429532a9ca7cbe6272 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_ardinghelli, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 09:42:24 np0005464214 podman[275348]: 2025-10-01 13:42:24.729457652 +0000 UTC m=+0.163292382 container attach a685c7b5ab19b30d9db56d42ca59f3ec7bbb491c3d096a429532a9ca7cbe6272 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_ardinghelli, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2)
Oct  1 09:42:24 np0005464214 youthful_ardinghelli[275364]: 167 167
Oct  1 09:42:24 np0005464214 systemd[1]: libpod-a685c7b5ab19b30d9db56d42ca59f3ec7bbb491c3d096a429532a9ca7cbe6272.scope: Deactivated successfully.
Oct  1 09:42:24 np0005464214 podman[275348]: 2025-10-01 13:42:24.731005401 +0000 UTC m=+0.164840121 container died a685c7b5ab19b30d9db56d42ca59f3ec7bbb491c3d096a429532a9ca7cbe6272 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_ardinghelli, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 09:42:24 np0005464214 systemd[1]: var-lib-containers-storage-overlay-21f80649092122b77b656f0f7e7b5bb0537c15af8227f7ff8eaf88fe2f99db95-merged.mount: Deactivated successfully.
Oct  1 09:42:24 np0005464214 podman[275348]: 2025-10-01 13:42:24.776425128 +0000 UTC m=+0.210259808 container remove a685c7b5ab19b30d9db56d42ca59f3ec7bbb491c3d096a429532a9ca7cbe6272 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_ardinghelli, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct  1 09:42:24 np0005464214 systemd[1]: libpod-conmon-a685c7b5ab19b30d9db56d42ca59f3ec7bbb491c3d096a429532a9ca7cbe6272.scope: Deactivated successfully.
Oct  1 09:42:25 np0005464214 podman[275387]: 2025-10-01 13:42:25.015240554 +0000 UTC m=+0.072786079 container create d4dc2cbdcbaeec3557a931e3e26ec77b1a596086214f641c067660725caa13b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_cray, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Oct  1 09:42:25 np0005464214 systemd[1]: Started libpod-conmon-d4dc2cbdcbaeec3557a931e3e26ec77b1a596086214f641c067660725caa13b5.scope.
Oct  1 09:42:25 np0005464214 podman[275387]: 2025-10-01 13:42:24.985883378 +0000 UTC m=+0.043428953 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 09:42:25 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:42:25 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe1c914f804d32e5edbdf5aa022aea0b9dd2e96b1f8fbaefb8259112fbe2bf86/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 09:42:25 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe1c914f804d32e5edbdf5aa022aea0b9dd2e96b1f8fbaefb8259112fbe2bf86/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 09:42:25 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe1c914f804d32e5edbdf5aa022aea0b9dd2e96b1f8fbaefb8259112fbe2bf86/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 09:42:25 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe1c914f804d32e5edbdf5aa022aea0b9dd2e96b1f8fbaefb8259112fbe2bf86/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 09:42:25 np0005464214 podman[275387]: 2025-10-01 13:42:25.13162501 +0000 UTC m=+0.189170545 container init d4dc2cbdcbaeec3557a931e3e26ec77b1a596086214f641c067660725caa13b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_cray, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 09:42:25 np0005464214 podman[275387]: 2025-10-01 13:42:25.14324018 +0000 UTC m=+0.200785735 container start d4dc2cbdcbaeec3557a931e3e26ec77b1a596086214f641c067660725caa13b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_cray, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Oct  1 09:42:25 np0005464214 podman[275387]: 2025-10-01 13:42:25.148403675 +0000 UTC m=+0.205949210 container attach d4dc2cbdcbaeec3557a931e3e26ec77b1a596086214f641c067660725caa13b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_cray, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 09:42:25 np0005464214 sharp_cray[275403]: {
Oct  1 09:42:25 np0005464214 sharp_cray[275403]:    "0": [
Oct  1 09:42:25 np0005464214 sharp_cray[275403]:        {
Oct  1 09:42:25 np0005464214 sharp_cray[275403]:            "devices": [
Oct  1 09:42:25 np0005464214 sharp_cray[275403]:                "/dev/loop3"
Oct  1 09:42:25 np0005464214 sharp_cray[275403]:            ],
Oct  1 09:42:25 np0005464214 sharp_cray[275403]:            "lv_name": "ceph_lv0",
Oct  1 09:42:25 np0005464214 sharp_cray[275403]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  1 09:42:25 np0005464214 sharp_cray[275403]:            "lv_size": "21470642176",
Oct  1 09:42:25 np0005464214 sharp_cray[275403]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 09:42:25 np0005464214 sharp_cray[275403]:            "lv_uuid": "wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS",
Oct  1 09:42:25 np0005464214 sharp_cray[275403]:            "name": "ceph_lv0",
Oct  1 09:42:25 np0005464214 sharp_cray[275403]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  1 09:42:25 np0005464214 sharp_cray[275403]:            "tags": {
Oct  1 09:42:25 np0005464214 sharp_cray[275403]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  1 09:42:25 np0005464214 sharp_cray[275403]:                "ceph.block_uuid": "wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS",
Oct  1 09:42:25 np0005464214 sharp_cray[275403]:                "ceph.cephx_lockbox_secret": "",
Oct  1 09:42:25 np0005464214 sharp_cray[275403]:                "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 09:42:25 np0005464214 sharp_cray[275403]:                "ceph.cluster_name": "ceph",
Oct  1 09:42:25 np0005464214 sharp_cray[275403]:                "ceph.crush_device_class": "",
Oct  1 09:42:25 np0005464214 sharp_cray[275403]:                "ceph.encrypted": "0",
Oct  1 09:42:25 np0005464214 sharp_cray[275403]:                "ceph.osd_fsid": "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982",
Oct  1 09:42:25 np0005464214 sharp_cray[275403]:                "ceph.osd_id": "0",
Oct  1 09:42:25 np0005464214 sharp_cray[275403]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 09:42:25 np0005464214 sharp_cray[275403]:                "ceph.type": "block",
Oct  1 09:42:25 np0005464214 sharp_cray[275403]:                "ceph.vdo": "0"
Oct  1 09:42:25 np0005464214 sharp_cray[275403]:            },
Oct  1 09:42:25 np0005464214 sharp_cray[275403]:            "type": "block",
Oct  1 09:42:25 np0005464214 sharp_cray[275403]:            "vg_name": "ceph_vg0"
Oct  1 09:42:25 np0005464214 sharp_cray[275403]:        }
Oct  1 09:42:25 np0005464214 sharp_cray[275403]:    ],
Oct  1 09:42:25 np0005464214 sharp_cray[275403]:    "1": [
Oct  1 09:42:25 np0005464214 sharp_cray[275403]:        {
Oct  1 09:42:25 np0005464214 sharp_cray[275403]:            "devices": [
Oct  1 09:42:25 np0005464214 sharp_cray[275403]:                "/dev/loop4"
Oct  1 09:42:25 np0005464214 sharp_cray[275403]:            ],
Oct  1 09:42:25 np0005464214 sharp_cray[275403]:            "lv_name": "ceph_lv1",
Oct  1 09:42:25 np0005464214 sharp_cray[275403]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  1 09:42:25 np0005464214 sharp_cray[275403]:            "lv_size": "21470642176",
Oct  1 09:42:25 np0005464214 sharp_cray[275403]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f5852bc7-e830-489a-b8a9-42dfbbe71426,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 09:42:25 np0005464214 sharp_cray[275403]:            "lv_uuid": "BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY",
Oct  1 09:42:25 np0005464214 sharp_cray[275403]:            "name": "ceph_lv1",
Oct  1 09:42:25 np0005464214 sharp_cray[275403]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  1 09:42:25 np0005464214 sharp_cray[275403]:            "tags": {
Oct  1 09:42:25 np0005464214 sharp_cray[275403]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  1 09:42:25 np0005464214 sharp_cray[275403]:                "ceph.block_uuid": "BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY",
Oct  1 09:42:25 np0005464214 sharp_cray[275403]:                "ceph.cephx_lockbox_secret": "",
Oct  1 09:42:25 np0005464214 sharp_cray[275403]:                "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 09:42:25 np0005464214 sharp_cray[275403]:                "ceph.cluster_name": "ceph",
Oct  1 09:42:25 np0005464214 sharp_cray[275403]:                "ceph.crush_device_class": "",
Oct  1 09:42:25 np0005464214 sharp_cray[275403]:                "ceph.encrypted": "0",
Oct  1 09:42:25 np0005464214 sharp_cray[275403]:                "ceph.osd_fsid": "f5852bc7-e830-489a-b8a9-42dfbbe71426",
Oct  1 09:42:25 np0005464214 sharp_cray[275403]:                "ceph.osd_id": "1",
Oct  1 09:42:25 np0005464214 sharp_cray[275403]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 09:42:25 np0005464214 sharp_cray[275403]:                "ceph.type": "block",
Oct  1 09:42:25 np0005464214 sharp_cray[275403]:                "ceph.vdo": "0"
Oct  1 09:42:25 np0005464214 sharp_cray[275403]:            },
Oct  1 09:42:25 np0005464214 sharp_cray[275403]:            "type": "block",
Oct  1 09:42:25 np0005464214 sharp_cray[275403]:            "vg_name": "ceph_vg1"
Oct  1 09:42:25 np0005464214 sharp_cray[275403]:        }
Oct  1 09:42:25 np0005464214 sharp_cray[275403]:    ],
Oct  1 09:42:25 np0005464214 sharp_cray[275403]:    "2": [
Oct  1 09:42:25 np0005464214 sharp_cray[275403]:        {
Oct  1 09:42:25 np0005464214 sharp_cray[275403]:            "devices": [
Oct  1 09:42:25 np0005464214 sharp_cray[275403]:                "/dev/loop5"
Oct  1 09:42:25 np0005464214 sharp_cray[275403]:            ],
Oct  1 09:42:25 np0005464214 sharp_cray[275403]:            "lv_name": "ceph_lv2",
Oct  1 09:42:25 np0005464214 sharp_cray[275403]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  1 09:42:25 np0005464214 sharp_cray[275403]:            "lv_size": "21470642176",
Oct  1 09:42:25 np0005464214 sharp_cray[275403]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c4c937e2-a8a8-47c3-af37-fdedb6fff1f9,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 09:42:25 np0005464214 sharp_cray[275403]:            "lv_uuid": "mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK",
Oct  1 09:42:25 np0005464214 sharp_cray[275403]:            "name": "ceph_lv2",
Oct  1 09:42:25 np0005464214 sharp_cray[275403]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  1 09:42:25 np0005464214 sharp_cray[275403]:            "tags": {
Oct  1 09:42:25 np0005464214 sharp_cray[275403]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  1 09:42:25 np0005464214 sharp_cray[275403]:                "ceph.block_uuid": "mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK",
Oct  1 09:42:25 np0005464214 sharp_cray[275403]:                "ceph.cephx_lockbox_secret": "",
Oct  1 09:42:25 np0005464214 sharp_cray[275403]:                "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 09:42:25 np0005464214 sharp_cray[275403]:                "ceph.cluster_name": "ceph",
Oct  1 09:42:25 np0005464214 sharp_cray[275403]:                "ceph.crush_device_class": "",
Oct  1 09:42:25 np0005464214 sharp_cray[275403]:                "ceph.encrypted": "0",
Oct  1 09:42:25 np0005464214 sharp_cray[275403]:                "ceph.osd_fsid": "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9",
Oct  1 09:42:25 np0005464214 sharp_cray[275403]:                "ceph.osd_id": "2",
Oct  1 09:42:25 np0005464214 sharp_cray[275403]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 09:42:25 np0005464214 sharp_cray[275403]:                "ceph.type": "block",
Oct  1 09:42:25 np0005464214 sharp_cray[275403]:                "ceph.vdo": "0"
Oct  1 09:42:25 np0005464214 sharp_cray[275403]:            },
Oct  1 09:42:25 np0005464214 sharp_cray[275403]:            "type": "block",
Oct  1 09:42:25 np0005464214 sharp_cray[275403]:            "vg_name": "ceph_vg2"
Oct  1 09:42:25 np0005464214 sharp_cray[275403]:        }
Oct  1 09:42:25 np0005464214 sharp_cray[275403]:    ]
Oct  1 09:42:25 np0005464214 sharp_cray[275403]: }
Oct  1 09:42:25 np0005464214 systemd[1]: libpod-d4dc2cbdcbaeec3557a931e3e26ec77b1a596086214f641c067660725caa13b5.scope: Deactivated successfully.
Oct  1 09:42:25 np0005464214 conmon[275403]: conmon d4dc2cbdcbaeec3557a9 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-d4dc2cbdcbaeec3557a931e3e26ec77b1a596086214f641c067660725caa13b5.scope/container/memory.events
Oct  1 09:42:25 np0005464214 podman[275387]: 2025-10-01 13:42:25.88924219 +0000 UTC m=+0.946787715 container died d4dc2cbdcbaeec3557a931e3e26ec77b1a596086214f641c067660725caa13b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_cray, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3)
Oct  1 09:42:25 np0005464214 systemd[1]: var-lib-containers-storage-overlay-fe1c914f804d32e5edbdf5aa022aea0b9dd2e96b1f8fbaefb8259112fbe2bf86-merged.mount: Deactivated successfully.
Oct  1 09:42:25 np0005464214 podman[275387]: 2025-10-01 13:42:25.969274289 +0000 UTC m=+1.026819804 container remove d4dc2cbdcbaeec3557a931e3e26ec77b1a596086214f641c067660725caa13b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_cray, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 09:42:25 np0005464214 systemd[1]: libpod-conmon-d4dc2cbdcbaeec3557a931e3e26ec77b1a596086214f641c067660725caa13b5.scope: Deactivated successfully.
Oct  1 09:42:26 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1146: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:42:26 np0005464214 podman[275565]: 2025-10-01 13:42:26.785319289 +0000 UTC m=+0.049883010 container create 1bc11059993cee33d8b33b7b86f237c150c89bde012ef30ff3f8000cb55cc507 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_volhard, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 09:42:26 np0005464214 systemd[1]: Started libpod-conmon-1bc11059993cee33d8b33b7b86f237c150c89bde012ef30ff3f8000cb55cc507.scope.
Oct  1 09:42:26 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:42:26 np0005464214 podman[275565]: 2025-10-01 13:42:26.764858788 +0000 UTC m=+0.029422529 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 09:42:26 np0005464214 podman[275565]: 2025-10-01 13:42:26.876755881 +0000 UTC m=+0.141319592 container init 1bc11059993cee33d8b33b7b86f237c150c89bde012ef30ff3f8000cb55cc507 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_volhard, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 09:42:26 np0005464214 podman[275565]: 2025-10-01 13:42:26.885710537 +0000 UTC m=+0.150274228 container start 1bc11059993cee33d8b33b7b86f237c150c89bde012ef30ff3f8000cb55cc507 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_volhard, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 09:42:26 np0005464214 podman[275565]: 2025-10-01 13:42:26.888902198 +0000 UTC m=+0.153465909 container attach 1bc11059993cee33d8b33b7b86f237c150c89bde012ef30ff3f8000cb55cc507 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_volhard, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct  1 09:42:26 np0005464214 reverent_volhard[275582]: 167 167
Oct  1 09:42:26 np0005464214 systemd[1]: libpod-1bc11059993cee33d8b33b7b86f237c150c89bde012ef30ff3f8000cb55cc507.scope: Deactivated successfully.
Oct  1 09:42:26 np0005464214 podman[275565]: 2025-10-01 13:42:26.891930564 +0000 UTC m=+0.156494255 container died 1bc11059993cee33d8b33b7b86f237c150c89bde012ef30ff3f8000cb55cc507 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_volhard, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 09:42:26 np0005464214 systemd[1]: var-lib-containers-storage-overlay-eec971cfc5d2df23cd8ab9f1e11a585f7f9af8a015cad5ee936e392497c9ae13-merged.mount: Deactivated successfully.
Oct  1 09:42:27 np0005464214 podman[275565]: 2025-10-01 13:42:27.080999806 +0000 UTC m=+0.345563497 container remove 1bc11059993cee33d8b33b7b86f237c150c89bde012ef30ff3f8000cb55cc507 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_volhard, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Oct  1 09:42:27 np0005464214 systemd[1]: libpod-conmon-1bc11059993cee33d8b33b7b86f237c150c89bde012ef30ff3f8000cb55cc507.scope: Deactivated successfully.
Oct  1 09:42:27 np0005464214 podman[275606]: 2025-10-01 13:42:27.365690963 +0000 UTC m=+0.078094618 container create b261b07ffd71cab9291ae2267976e0183ab86fe70e98d4d4edee3dc7c9829d53 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_neumann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 09:42:27 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 09:42:27 np0005464214 podman[275606]: 2025-10-01 13:42:27.316264789 +0000 UTC m=+0.028668544 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 09:42:27 np0005464214 systemd[1]: Started libpod-conmon-b261b07ffd71cab9291ae2267976e0183ab86fe70e98d4d4edee3dc7c9829d53.scope.
Oct  1 09:42:27 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:42:27 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fdfbfd51524878d9d6ec0e8512cd1d6ea0980634459c905fb8aa6d53c4c6a4c2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 09:42:27 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fdfbfd51524878d9d6ec0e8512cd1d6ea0980634459c905fb8aa6d53c4c6a4c2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 09:42:27 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fdfbfd51524878d9d6ec0e8512cd1d6ea0980634459c905fb8aa6d53c4c6a4c2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 09:42:27 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fdfbfd51524878d9d6ec0e8512cd1d6ea0980634459c905fb8aa6d53c4c6a4c2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 09:42:27 np0005464214 podman[275606]: 2025-10-01 13:42:27.636434677 +0000 UTC m=+0.348838402 container init b261b07ffd71cab9291ae2267976e0183ab86fe70e98d4d4edee3dc7c9829d53 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_neumann, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct  1 09:42:27 np0005464214 podman[275606]: 2025-10-01 13:42:27.648255482 +0000 UTC m=+0.360659177 container start b261b07ffd71cab9291ae2267976e0183ab86fe70e98d4d4edee3dc7c9829d53 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_neumann, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct  1 09:42:27 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:42:27.672 161890 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=3, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'd2:60:33', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '86:05:a5:2a:6f:f1'}, ipsec=False) old=SB_Global(nb_cfg=2) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  1 09:42:27 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:42:27.675 161890 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  1 09:42:27 np0005464214 podman[275606]: 2025-10-01 13:42:27.716772495 +0000 UTC m=+0.429176190 container attach b261b07ffd71cab9291ae2267976e0183ab86fe70e98d4d4edee3dc7c9829d53 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_neumann, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 09:42:28 np0005464214 festive_neumann[275622]: {
Oct  1 09:42:28 np0005464214 festive_neumann[275622]:    "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982": {
Oct  1 09:42:28 np0005464214 festive_neumann[275622]:        "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 09:42:28 np0005464214 festive_neumann[275622]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  1 09:42:28 np0005464214 festive_neumann[275622]:        "osd_id": 0,
Oct  1 09:42:28 np0005464214 festive_neumann[275622]:        "osd_uuid": "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982",
Oct  1 09:42:28 np0005464214 festive_neumann[275622]:        "type": "bluestore"
Oct  1 09:42:28 np0005464214 festive_neumann[275622]:    },
Oct  1 09:42:28 np0005464214 festive_neumann[275622]:    "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9": {
Oct  1 09:42:28 np0005464214 festive_neumann[275622]:        "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 09:42:28 np0005464214 festive_neumann[275622]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  1 09:42:28 np0005464214 festive_neumann[275622]:        "osd_id": 2,
Oct  1 09:42:28 np0005464214 festive_neumann[275622]:        "osd_uuid": "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9",
Oct  1 09:42:28 np0005464214 festive_neumann[275622]:        "type": "bluestore"
Oct  1 09:42:28 np0005464214 festive_neumann[275622]:    },
Oct  1 09:42:28 np0005464214 festive_neumann[275622]:    "f5852bc7-e830-489a-b8a9-42dfbbe71426": {
Oct  1 09:42:28 np0005464214 festive_neumann[275622]:        "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 09:42:28 np0005464214 festive_neumann[275622]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  1 09:42:28 np0005464214 festive_neumann[275622]:        "osd_id": 1,
Oct  1 09:42:28 np0005464214 festive_neumann[275622]:        "osd_uuid": "f5852bc7-e830-489a-b8a9-42dfbbe71426",
Oct  1 09:42:28 np0005464214 festive_neumann[275622]:        "type": "bluestore"
Oct  1 09:42:28 np0005464214 festive_neumann[275622]:    }
Oct  1 09:42:28 np0005464214 festive_neumann[275622]: }
Oct  1 09:42:28 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1147: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:42:28 np0005464214 systemd[1]: libpod-b261b07ffd71cab9291ae2267976e0183ab86fe70e98d4d4edee3dc7c9829d53.scope: Deactivated successfully.
Oct  1 09:42:28 np0005464214 podman[275606]: 2025-10-01 13:42:28.68334144 +0000 UTC m=+1.395745125 container died b261b07ffd71cab9291ae2267976e0183ab86fe70e98d4d4edee3dc7c9829d53 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_neumann, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True)
Oct  1 09:42:28 np0005464214 systemd[1]: libpod-b261b07ffd71cab9291ae2267976e0183ab86fe70e98d4d4edee3dc7c9829d53.scope: Consumed 1.044s CPU time.
Oct  1 09:42:28 np0005464214 systemd[1]: var-lib-containers-storage-overlay-fdfbfd51524878d9d6ec0e8512cd1d6ea0980634459c905fb8aa6d53c4c6a4c2-merged.mount: Deactivated successfully.
Oct  1 09:42:28 np0005464214 podman[275606]: 2025-10-01 13:42:28.758172073 +0000 UTC m=+1.470575738 container remove b261b07ffd71cab9291ae2267976e0183ab86fe70e98d4d4edee3dc7c9829d53 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_neumann, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 09:42:28 np0005464214 systemd[1]: libpod-conmon-b261b07ffd71cab9291ae2267976e0183ab86fe70e98d4d4edee3dc7c9829d53.scope: Deactivated successfully.
Oct  1 09:42:28 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  1 09:42:28 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:42:28 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  1 09:42:28 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:42:28 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev c4c39d92-9e3b-438e-8e18-baeaf5d2bcc4 does not exist
Oct  1 09:42:28 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev e1dfacae-a324-4a9a-b982-7b0d8dff391d does not exist
Oct  1 09:42:29 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:42:29 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:42:30 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1148: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:42:32 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 09:42:32 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1149: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:42:34 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1150: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:42:34 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:42:34.677 161890 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=7280030e-2ba6-406c-9fae-f8284a927c47, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '3'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  1 09:42:36 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1151: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:42:37 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 09:42:38 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1152: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:42:40 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1153: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:42:42 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 09:42:42 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1154: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:42:44 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1155: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:42:46 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1156: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:42:47 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 09:42:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 09:42:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 09:42:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 09:42:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 09:42:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 09:42:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 09:42:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] Optimize plan auto_2025-10-01_13:42:47
Oct  1 09:42:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  1 09:42:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] do_upmap
Oct  1 09:42:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'backups', 'default.rgw.control', '.rgw.root', 'cephfs.cephfs.data', 'volumes', 'vms', '.mgr', 'default.rgw.log', 'default.rgw.meta', 'images']
Oct  1 09:42:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] prepared 0/10 changes
Oct  1 09:42:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  1 09:42:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  1 09:42:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  1 09:42:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  1 09:42:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  1 09:42:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  1 09:42:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  1 09:42:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  1 09:42:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  1 09:42:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  1 09:42:48 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1157: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:42:49 np0005464214 podman[275721]: 2025-10-01 13:42:49.552296127 +0000 UTC m=+0.085057036 container health_status dfa509645741d50db256c392f2c7e50ef8a1653f296eb853aefbe3d2ba4746c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20250923, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct  1 09:42:49 np0005464214 podman[275720]: 2025-10-01 13:42:49.562999345 +0000 UTC m=+0.095285379 container health_status c13c85560319b246b9406aed1b6b3015b2c424d3ffff37d0301b6634f5976b0d (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20250923)
Oct  1 09:42:49 np0005464214 podman[275719]: 2025-10-01 13:42:49.573520588 +0000 UTC m=+0.112904006 container health_status a1dc94033b3cdaf0c1939938a446bc6911aa0e2bf0fa0c22c3e618390e8d96e1 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=36bccb96575468ec919301205d8daa2c, org.label-schema.build-date=20250923, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 09:42:49 np0005464214 podman[275718]: 2025-10-01 13:42:49.585874337 +0000 UTC m=+0.124691647 container health_status 583cde33fa111e3f81ff9a7adf51b7bee43cd123d54d60383b2d474b3b5066ad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20250923, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, container_name=ovn_controller)
Oct  1 09:42:50 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1158: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:42:52 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 09:42:52 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1159: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:42:54 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1160: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:42:55 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  1 09:42:55 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/722530960' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  1 09:42:55 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  1 09:42:55 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/722530960' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  1 09:42:56 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1161: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:42:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] _maybe_adjust
Oct  1 09:42:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:42:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  1 09:42:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:42:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 09:42:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:42:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 09:42:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:42:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 09:42:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:42:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Oct  1 09:42:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:42:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct  1 09:42:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:42:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 09:42:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:42:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  1 09:42:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:42:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  1 09:42:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:42:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 09:42:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:42:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  1 09:42:57 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 09:42:58 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1162: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:43:00 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1163: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:43:01 np0005464214 nova_compute[260022]: 2025-10-01 13:43:01.344 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 09:43:01 np0005464214 nova_compute[260022]: 2025-10-01 13:43:01.412 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 09:43:01 np0005464214 nova_compute[260022]: 2025-10-01 13:43:01.413 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 09:43:01 np0005464214 nova_compute[260022]: 2025-10-01 13:43:01.413 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 09:43:01 np0005464214 nova_compute[260022]: 2025-10-01 13:43:01.413 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  1 09:43:01 np0005464214 nova_compute[260022]: 2025-10-01 13:43:01.414 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 09:43:01 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  1 09:43:01 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2285082951' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  1 09:43:01 np0005464214 nova_compute[260022]: 2025-10-01 13:43:01.838 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.424s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 09:43:02 np0005464214 nova_compute[260022]: 2025-10-01 13:43:02.012 2 WARNING nova.virt.libvirt.driver [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  1 09:43:02 np0005464214 nova_compute[260022]: 2025-10-01 13:43:02.013 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5168MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  1 09:43:02 np0005464214 nova_compute[260022]: 2025-10-01 13:43:02.014 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 09:43:02 np0005464214 nova_compute[260022]: 2025-10-01 13:43:02.014 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 09:43:02 np0005464214 nova_compute[260022]: 2025-10-01 13:43:02.080 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  1 09:43:02 np0005464214 nova_compute[260022]: 2025-10-01 13:43:02.080 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  1 09:43:02 np0005464214 nova_compute[260022]: 2025-10-01 13:43:02.097 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 09:43:02 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 09:43:02 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  1 09:43:02 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1282990422' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  1 09:43:02 np0005464214 nova_compute[260022]: 2025-10-01 13:43:02.537 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.439s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 09:43:02 np0005464214 nova_compute[260022]: 2025-10-01 13:43:02.543 2 DEBUG nova.compute.provider_tree [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Inventory has not changed in ProviderTree for provider: c1b9017d-7e6f-44ea-9ee2-bc19313d736f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  1 09:43:02 np0005464214 nova_compute[260022]: 2025-10-01 13:43:02.559 2 DEBUG nova.scheduler.client.report [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Inventory has not changed for provider c1b9017d-7e6f-44ea-9ee2-bc19313d736f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  1 09:43:02 np0005464214 nova_compute[260022]: 2025-10-01 13:43:02.561 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  1 09:43:02 np0005464214 nova_compute[260022]: 2025-10-01 13:43:02.562 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.548s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 09:43:02 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1164: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:43:03 np0005464214 nova_compute[260022]: 2025-10-01 13:43:03.564 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 09:43:03 np0005464214 nova_compute[260022]: 2025-10-01 13:43:03.564 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 09:43:04 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1165: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:43:05 np0005464214 nova_compute[260022]: 2025-10-01 13:43:05.345 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 09:43:05 np0005464214 nova_compute[260022]: 2025-10-01 13:43:05.345 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  1 09:43:05 np0005464214 nova_compute[260022]: 2025-10-01 13:43:05.345 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  1 09:43:05 np0005464214 nova_compute[260022]: 2025-10-01 13:43:05.371 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  1 09:43:05 np0005464214 nova_compute[260022]: 2025-10-01 13:43:05.371 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 09:43:05 np0005464214 nova_compute[260022]: 2025-10-01 13:43:05.371 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  1 09:43:06 np0005464214 nova_compute[260022]: 2025-10-01 13:43:06.345 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 09:43:06 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1166: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:43:07 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 09:43:08 np0005464214 nova_compute[260022]: 2025-10-01 13:43:08.344 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 09:43:08 np0005464214 nova_compute[260022]: 2025-10-01 13:43:08.344 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 09:43:08 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1167: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:43:10 np0005464214 nova_compute[260022]: 2025-10-01 13:43:10.345 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 09:43:10 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1168: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:43:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:43:12.312 161890 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 09:43:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:43:12.312 161890 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 09:43:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:43:12.313 161890 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 09:43:12 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 09:43:12 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1169: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:43:14 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1170: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:43:16 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1171: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:43:17 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 09:43:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 09:43:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 09:43:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 09:43:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 09:43:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 09:43:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 09:43:18 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1172: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:43:20 np0005464214 podman[275848]: 2025-10-01 13:43:20.564863461 +0000 UTC m=+0.106472282 container health_status a1dc94033b3cdaf0c1939938a446bc6911aa0e2bf0fa0c22c3e618390e8d96e1 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20250923, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=36bccb96575468ec919301205d8daa2c)
Oct  1 09:43:20 np0005464214 podman[275850]: 2025-10-01 13:43:20.571573122 +0000 UTC m=+0.095790635 container health_status dfa509645741d50db256c392f2c7e50ef8a1653f296eb853aefbe3d2ba4746c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20250923, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 09:43:20 np0005464214 podman[275847]: 2025-10-01 13:43:20.59938318 +0000 UTC m=+0.141393504 container health_status 583cde33fa111e3f81ff9a7adf51b7bee43cd123d54d60383b2d474b3b5066ad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, container_name=ovn_controller, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, org.label-schema.build-date=20250923)
Oct  1 09:43:20 np0005464214 podman[275849]: 2025-10-01 13:43:20.603678086 +0000 UTC m=+0.135692415 container health_status c13c85560319b246b9406aed1b6b3015b2c424d3ffff37d0301b6634f5976b0d (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, org.label-schema.build-date=20250923, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 09:43:20 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1173: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:43:22 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 09:43:22 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1174: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:43:24 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1175: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:43:26 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1176: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:43:27 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 09:43:28 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:43:28.583 161890 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=4, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'd2:60:33', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '86:05:a5:2a:6f:f1'}, ipsec=False) old=SB_Global(nb_cfg=3) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  1 09:43:28 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:43:28.584 161890 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  1 09:43:28 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1177: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:43:30 np0005464214 podman[276095]: 2025-10-01 13:43:30.12943519 +0000 UTC m=+0.095624130 container exec dfadbb96d7d51fa7c2ed721145616ced603621ae12bae5125feaf16532ef7320 (image=quay.io/ceph/ceph:v18, name=ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-mon-compute-0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct  1 09:43:30 np0005464214 podman[276095]: 2025-10-01 13:43:30.281262914 +0000 UTC m=+0.247451834 container exec_died dfadbb96d7d51fa7c2ed721145616ced603621ae12bae5125feaf16532ef7320 (image=quay.io/ceph/ceph:v18, name=ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-mon-compute-0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 09:43:30 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1178: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:43:31 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  1 09:43:31 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:43:31 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  1 09:43:31 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:43:32 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  1 09:43:32 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  1 09:43:32 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  1 09:43:32 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  1 09:43:32 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  1 09:43:32 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:43:32 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev 6b59a1bb-d282-4090-bc7f-a0f9767a3e0b does not exist
Oct  1 09:43:32 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev a1554867-aed6-4e21-9996-452691b1edf8 does not exist
Oct  1 09:43:32 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev b181535c-6ad4-4503-b9a7-42f1e7a56aa8 does not exist
Oct  1 09:43:32 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  1 09:43:32 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  1 09:43:32 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  1 09:43:32 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  1 09:43:32 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  1 09:43:32 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  1 09:43:32 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:43:32 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:43:32 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  1 09:43:32 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:43:32 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  1 09:43:32 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 09:43:32 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1179: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:43:32 np0005464214 podman[276523]: 2025-10-01 13:43:32.756216263 +0000 UTC m=+0.068198164 container create 1a818f27d12941267a5a7272271645871e7aea3320261fa713a6748219e47c7e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_bartik, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef)
Oct  1 09:43:32 np0005464214 systemd[1]: Started libpod-conmon-1a818f27d12941267a5a7272271645871e7aea3320261fa713a6748219e47c7e.scope.
Oct  1 09:43:32 np0005464214 podman[276523]: 2025-10-01 13:43:32.726935448 +0000 UTC m=+0.038917389 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 09:43:32 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:43:32 np0005464214 podman[276523]: 2025-10-01 13:43:32.848407214 +0000 UTC m=+0.160389075 container init 1a818f27d12941267a5a7272271645871e7aea3320261fa713a6748219e47c7e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_bartik, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 09:43:32 np0005464214 podman[276523]: 2025-10-01 13:43:32.857304665 +0000 UTC m=+0.169286526 container start 1a818f27d12941267a5a7272271645871e7aea3320261fa713a6748219e47c7e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_bartik, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct  1 09:43:32 np0005464214 podman[276523]: 2025-10-01 13:43:32.861758625 +0000 UTC m=+0.173740516 container attach 1a818f27d12941267a5a7272271645871e7aea3320261fa713a6748219e47c7e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_bartik, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct  1 09:43:32 np0005464214 agitated_bartik[276539]: 167 167
Oct  1 09:43:32 np0005464214 systemd[1]: libpod-1a818f27d12941267a5a7272271645871e7aea3320261fa713a6748219e47c7e.scope: Deactivated successfully.
Oct  1 09:43:32 np0005464214 podman[276523]: 2025-10-01 13:43:32.865017998 +0000 UTC m=+0.176999859 container died 1a818f27d12941267a5a7272271645871e7aea3320261fa713a6748219e47c7e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_bartik, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 09:43:32 np0005464214 systemd[1]: var-lib-containers-storage-overlay-d57c03452c8cee17eeef88b9cd7124fe2accea570983335401ce0c1cb064ea45-merged.mount: Deactivated successfully.
Oct  1 09:43:32 np0005464214 podman[276523]: 2025-10-01 13:43:32.925016963 +0000 UTC m=+0.236998844 container remove 1a818f27d12941267a5a7272271645871e7aea3320261fa713a6748219e47c7e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_bartik, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 09:43:32 np0005464214 systemd[1]: libpod-conmon-1a818f27d12941267a5a7272271645871e7aea3320261fa713a6748219e47c7e.scope: Deactivated successfully.
Oct  1 09:43:33 np0005464214 podman[276564]: 2025-10-01 13:43:33.203545386 +0000 UTC m=+0.097010014 container create 9d5edbe7e889647f290f3ac36a78aa0329e6dcbb1715c3ef0b0b0dd404384d10 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_fermi, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Oct  1 09:43:33 np0005464214 podman[276564]: 2025-10-01 13:43:33.149154499 +0000 UTC m=+0.042619117 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 09:43:33 np0005464214 systemd[1]: Started libpod-conmon-9d5edbe7e889647f290f3ac36a78aa0329e6dcbb1715c3ef0b0b0dd404384d10.scope.
Oct  1 09:43:33 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:43:33 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20799b0cd404f031ba15defd08b5254d8d16d39143d658fdaee281908efde430/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 09:43:33 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20799b0cd404f031ba15defd08b5254d8d16d39143d658fdaee281908efde430/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 09:43:33 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20799b0cd404f031ba15defd08b5254d8d16d39143d658fdaee281908efde430/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 09:43:33 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20799b0cd404f031ba15defd08b5254d8d16d39143d658fdaee281908efde430/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 09:43:33 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20799b0cd404f031ba15defd08b5254d8d16d39143d658fdaee281908efde430/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  1 09:43:33 np0005464214 podman[276564]: 2025-10-01 13:43:33.544900363 +0000 UTC m=+0.438365031 container init 9d5edbe7e889647f290f3ac36a78aa0329e6dcbb1715c3ef0b0b0dd404384d10 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_fermi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Oct  1 09:43:33 np0005464214 podman[276564]: 2025-10-01 13:43:33.553513955 +0000 UTC m=+0.446978583 container start 9d5edbe7e889647f290f3ac36a78aa0329e6dcbb1715c3ef0b0b0dd404384d10 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_fermi, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3)
Oct  1 09:43:33 np0005464214 podman[276564]: 2025-10-01 13:43:33.65915865 +0000 UTC m=+0.552623278 container attach 9d5edbe7e889647f290f3ac36a78aa0329e6dcbb1715c3ef0b0b0dd404384d10 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_fermi, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct  1 09:43:34 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:43:34.588 161890 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=7280030e-2ba6-406c-9fae-f8284a927c47, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '4'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  1 09:43:34 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1180: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:43:34 np0005464214 romantic_fermi[276580]: --> passed data devices: 0 physical, 3 LVM
Oct  1 09:43:34 np0005464214 romantic_fermi[276580]: --> relative data size: 1.0
Oct  1 09:43:34 np0005464214 romantic_fermi[276580]: --> All data devices are unavailable
Oct  1 09:43:34 np0005464214 systemd[1]: libpod-9d5edbe7e889647f290f3ac36a78aa0329e6dcbb1715c3ef0b0b0dd404384d10.scope: Deactivated successfully.
Oct  1 09:43:34 np0005464214 systemd[1]: libpod-9d5edbe7e889647f290f3ac36a78aa0329e6dcbb1715c3ef0b0b0dd404384d10.scope: Consumed 1.198s CPU time.
Oct  1 09:43:34 np0005464214 podman[276564]: 2025-10-01 13:43:34.805370788 +0000 UTC m=+1.698835426 container died 9d5edbe7e889647f290f3ac36a78aa0329e6dcbb1715c3ef0b0b0dd404384d10 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_fermi, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 09:43:34 np0005464214 systemd[1]: var-lib-containers-storage-overlay-20799b0cd404f031ba15defd08b5254d8d16d39143d658fdaee281908efde430-merged.mount: Deactivated successfully.
Oct  1 09:43:35 np0005464214 podman[276564]: 2025-10-01 13:43:35.043093134 +0000 UTC m=+1.936557772 container remove 9d5edbe7e889647f290f3ac36a78aa0329e6dcbb1715c3ef0b0b0dd404384d10 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_fermi, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct  1 09:43:35 np0005464214 systemd[1]: libpod-conmon-9d5edbe7e889647f290f3ac36a78aa0329e6dcbb1715c3ef0b0b0dd404384d10.scope: Deactivated successfully.
Oct  1 09:43:35 np0005464214 podman[276763]: 2025-10-01 13:43:35.960359323 +0000 UTC m=+0.113789183 container create b0d8ff1c0dec03bdc1849a2a80c6a7332242f126f0877e6e635ca614f87a71d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_keldysh, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct  1 09:43:35 np0005464214 podman[276763]: 2025-10-01 13:43:35.882061321 +0000 UTC m=+0.035491231 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 09:43:36 np0005464214 systemd[1]: Started libpod-conmon-b0d8ff1c0dec03bdc1849a2a80c6a7332242f126f0877e6e635ca614f87a71d7.scope.
Oct  1 09:43:36 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:43:36 np0005464214 podman[276763]: 2025-10-01 13:43:36.126317763 +0000 UTC m=+0.279747643 container init b0d8ff1c0dec03bdc1849a2a80c6a7332242f126f0877e6e635ca614f87a71d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_keldysh, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Oct  1 09:43:36 np0005464214 podman[276763]: 2025-10-01 13:43:36.140966036 +0000 UTC m=+0.294395886 container start b0d8ff1c0dec03bdc1849a2a80c6a7332242f126f0877e6e635ca614f87a71d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_keldysh, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct  1 09:43:36 np0005464214 busy_keldysh[276779]: 167 167
Oct  1 09:43:36 np0005464214 systemd[1]: libpod-b0d8ff1c0dec03bdc1849a2a80c6a7332242f126f0877e6e635ca614f87a71d7.scope: Deactivated successfully.
Oct  1 09:43:36 np0005464214 conmon[276779]: conmon b0d8ff1c0dec03bdc184 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b0d8ff1c0dec03bdc1849a2a80c6a7332242f126f0877e6e635ca614f87a71d7.scope/container/memory.events
Oct  1 09:43:36 np0005464214 podman[276763]: 2025-10-01 13:43:36.179388438 +0000 UTC m=+0.332818268 container attach b0d8ff1c0dec03bdc1849a2a80c6a7332242f126f0877e6e635ca614f87a71d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_keldysh, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 09:43:36 np0005464214 podman[276763]: 2025-10-01 13:43:36.181133804 +0000 UTC m=+0.334563634 container died b0d8ff1c0dec03bdc1849a2a80c6a7332242f126f0877e6e635ca614f87a71d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_keldysh, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct  1 09:43:36 np0005464214 systemd[1]: var-lib-containers-storage-overlay-b35d88e6ec84a54695a0a6565f806f7222c1a14b79706473e25f512b89d44d67-merged.mount: Deactivated successfully.
Oct  1 09:43:36 np0005464214 podman[276763]: 2025-10-01 13:43:36.263717611 +0000 UTC m=+0.417147431 container remove b0d8ff1c0dec03bdc1849a2a80c6a7332242f126f0877e6e635ca614f87a71d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_keldysh, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True)
Oct  1 09:43:36 np0005464214 systemd[1]: libpod-conmon-b0d8ff1c0dec03bdc1849a2a80c6a7332242f126f0877e6e635ca614f87a71d7.scope: Deactivated successfully.
Oct  1 09:43:36 np0005464214 podman[276807]: 2025-10-01 13:43:36.467320009 +0000 UTC m=+0.047804570 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 09:43:36 np0005464214 podman[276807]: 2025-10-01 13:43:36.662688398 +0000 UTC m=+0.243172879 container create 5b5cb2986c05ce194836c2975ac43d43c0c4491e163a0dd4a888392a9925eb4d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_sutherland, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 09:43:36 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1181: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:43:36 np0005464214 systemd[1]: Started libpod-conmon-5b5cb2986c05ce194836c2975ac43d43c0c4491e163a0dd4a888392a9925eb4d.scope.
Oct  1 09:43:36 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:43:36 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba59212fc28962c803a063eb51dc3362bd2919f3313eaba638d2a8bd9ac11e57/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 09:43:36 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba59212fc28962c803a063eb51dc3362bd2919f3313eaba638d2a8bd9ac11e57/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 09:43:36 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba59212fc28962c803a063eb51dc3362bd2919f3313eaba638d2a8bd9ac11e57/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 09:43:36 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba59212fc28962c803a063eb51dc3362bd2919f3313eaba638d2a8bd9ac11e57/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 09:43:37 np0005464214 podman[276807]: 2025-10-01 13:43:37.117487566 +0000 UTC m=+0.697972087 container init 5b5cb2986c05ce194836c2975ac43d43c0c4491e163a0dd4a888392a9925eb4d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_sutherland, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 09:43:37 np0005464214 podman[276807]: 2025-10-01 13:43:37.132656595 +0000 UTC m=+0.713141116 container start 5b5cb2986c05ce194836c2975ac43d43c0c4491e163a0dd4a888392a9925eb4d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_sutherland, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 09:43:37 np0005464214 podman[276807]: 2025-10-01 13:43:37.139636115 +0000 UTC m=+0.720120616 container attach 5b5cb2986c05ce194836c2975ac43d43c0c4491e163a0dd4a888392a9925eb4d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_sutherland, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 09:43:37 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 09:43:37 np0005464214 gallant_sutherland[276824]: {
Oct  1 09:43:37 np0005464214 gallant_sutherland[276824]:    "0": [
Oct  1 09:43:37 np0005464214 gallant_sutherland[276824]:        {
Oct  1 09:43:37 np0005464214 gallant_sutherland[276824]:            "devices": [
Oct  1 09:43:37 np0005464214 gallant_sutherland[276824]:                "/dev/loop3"
Oct  1 09:43:37 np0005464214 gallant_sutherland[276824]:            ],
Oct  1 09:43:37 np0005464214 gallant_sutherland[276824]:            "lv_name": "ceph_lv0",
Oct  1 09:43:37 np0005464214 gallant_sutherland[276824]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  1 09:43:37 np0005464214 gallant_sutherland[276824]:            "lv_size": "21470642176",
Oct  1 09:43:37 np0005464214 gallant_sutherland[276824]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 09:43:37 np0005464214 gallant_sutherland[276824]:            "lv_uuid": "wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS",
Oct  1 09:43:37 np0005464214 gallant_sutherland[276824]:            "name": "ceph_lv0",
Oct  1 09:43:37 np0005464214 gallant_sutherland[276824]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  1 09:43:37 np0005464214 gallant_sutherland[276824]:            "tags": {
Oct  1 09:43:37 np0005464214 gallant_sutherland[276824]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  1 09:43:37 np0005464214 gallant_sutherland[276824]:                "ceph.block_uuid": "wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS",
Oct  1 09:43:37 np0005464214 gallant_sutherland[276824]:                "ceph.cephx_lockbox_secret": "",
Oct  1 09:43:37 np0005464214 gallant_sutherland[276824]:                "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 09:43:37 np0005464214 gallant_sutherland[276824]:                "ceph.cluster_name": "ceph",
Oct  1 09:43:37 np0005464214 gallant_sutherland[276824]:                "ceph.crush_device_class": "",
Oct  1 09:43:37 np0005464214 gallant_sutherland[276824]:                "ceph.encrypted": "0",
Oct  1 09:43:37 np0005464214 gallant_sutherland[276824]:                "ceph.osd_fsid": "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982",
Oct  1 09:43:37 np0005464214 gallant_sutherland[276824]:                "ceph.osd_id": "0",
Oct  1 09:43:37 np0005464214 gallant_sutherland[276824]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 09:43:37 np0005464214 gallant_sutherland[276824]:                "ceph.type": "block",
Oct  1 09:43:37 np0005464214 gallant_sutherland[276824]:                "ceph.vdo": "0"
Oct  1 09:43:37 np0005464214 gallant_sutherland[276824]:            },
Oct  1 09:43:37 np0005464214 gallant_sutherland[276824]:            "type": "block",
Oct  1 09:43:37 np0005464214 gallant_sutherland[276824]:            "vg_name": "ceph_vg0"
Oct  1 09:43:37 np0005464214 gallant_sutherland[276824]:        }
Oct  1 09:43:37 np0005464214 gallant_sutherland[276824]:    ],
Oct  1 09:43:37 np0005464214 gallant_sutherland[276824]:    "1": [
Oct  1 09:43:37 np0005464214 gallant_sutherland[276824]:        {
Oct  1 09:43:37 np0005464214 gallant_sutherland[276824]:            "devices": [
Oct  1 09:43:37 np0005464214 gallant_sutherland[276824]:                "/dev/loop4"
Oct  1 09:43:37 np0005464214 gallant_sutherland[276824]:            ],
Oct  1 09:43:37 np0005464214 gallant_sutherland[276824]:            "lv_name": "ceph_lv1",
Oct  1 09:43:37 np0005464214 gallant_sutherland[276824]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  1 09:43:37 np0005464214 gallant_sutherland[276824]:            "lv_size": "21470642176",
Oct  1 09:43:37 np0005464214 gallant_sutherland[276824]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f5852bc7-e830-489a-b8a9-42dfbbe71426,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 09:43:37 np0005464214 gallant_sutherland[276824]:            "lv_uuid": "BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY",
Oct  1 09:43:37 np0005464214 gallant_sutherland[276824]:            "name": "ceph_lv1",
Oct  1 09:43:37 np0005464214 gallant_sutherland[276824]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  1 09:43:37 np0005464214 gallant_sutherland[276824]:            "tags": {
Oct  1 09:43:37 np0005464214 gallant_sutherland[276824]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  1 09:43:37 np0005464214 gallant_sutherland[276824]:                "ceph.block_uuid": "BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY",
Oct  1 09:43:37 np0005464214 gallant_sutherland[276824]:                "ceph.cephx_lockbox_secret": "",
Oct  1 09:43:37 np0005464214 gallant_sutherland[276824]:                "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 09:43:37 np0005464214 gallant_sutherland[276824]:                "ceph.cluster_name": "ceph",
Oct  1 09:43:37 np0005464214 gallant_sutherland[276824]:                "ceph.crush_device_class": "",
Oct  1 09:43:37 np0005464214 gallant_sutherland[276824]:                "ceph.encrypted": "0",
Oct  1 09:43:37 np0005464214 gallant_sutherland[276824]:                "ceph.osd_fsid": "f5852bc7-e830-489a-b8a9-42dfbbe71426",
Oct  1 09:43:37 np0005464214 gallant_sutherland[276824]:                "ceph.osd_id": "1",
Oct  1 09:43:37 np0005464214 gallant_sutherland[276824]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 09:43:37 np0005464214 gallant_sutherland[276824]:                "ceph.type": "block",
Oct  1 09:43:37 np0005464214 gallant_sutherland[276824]:                "ceph.vdo": "0"
Oct  1 09:43:37 np0005464214 gallant_sutherland[276824]:            },
Oct  1 09:43:37 np0005464214 gallant_sutherland[276824]:            "type": "block",
Oct  1 09:43:37 np0005464214 gallant_sutherland[276824]:            "vg_name": "ceph_vg1"
Oct  1 09:43:37 np0005464214 gallant_sutherland[276824]:        }
Oct  1 09:43:37 np0005464214 gallant_sutherland[276824]:    ],
Oct  1 09:43:37 np0005464214 gallant_sutherland[276824]:    "2": [
Oct  1 09:43:37 np0005464214 gallant_sutherland[276824]:        {
Oct  1 09:43:37 np0005464214 gallant_sutherland[276824]:            "devices": [
Oct  1 09:43:37 np0005464214 gallant_sutherland[276824]:                "/dev/loop5"
Oct  1 09:43:37 np0005464214 gallant_sutherland[276824]:            ],
Oct  1 09:43:37 np0005464214 gallant_sutherland[276824]:            "lv_name": "ceph_lv2",
Oct  1 09:43:37 np0005464214 gallant_sutherland[276824]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  1 09:43:37 np0005464214 gallant_sutherland[276824]:            "lv_size": "21470642176",
Oct  1 09:43:37 np0005464214 gallant_sutherland[276824]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c4c937e2-a8a8-47c3-af37-fdedb6fff1f9,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 09:43:37 np0005464214 gallant_sutherland[276824]:            "lv_uuid": "mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK",
Oct  1 09:43:37 np0005464214 gallant_sutherland[276824]:            "name": "ceph_lv2",
Oct  1 09:43:37 np0005464214 gallant_sutherland[276824]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  1 09:43:37 np0005464214 gallant_sutherland[276824]:            "tags": {
Oct  1 09:43:37 np0005464214 gallant_sutherland[276824]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  1 09:43:37 np0005464214 gallant_sutherland[276824]:                "ceph.block_uuid": "mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK",
Oct  1 09:43:37 np0005464214 gallant_sutherland[276824]:                "ceph.cephx_lockbox_secret": "",
Oct  1 09:43:37 np0005464214 gallant_sutherland[276824]:                "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 09:43:37 np0005464214 gallant_sutherland[276824]:                "ceph.cluster_name": "ceph",
Oct  1 09:43:37 np0005464214 gallant_sutherland[276824]:                "ceph.crush_device_class": "",
Oct  1 09:43:37 np0005464214 gallant_sutherland[276824]:                "ceph.encrypted": "0",
Oct  1 09:43:37 np0005464214 gallant_sutherland[276824]:                "ceph.osd_fsid": "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9",
Oct  1 09:43:37 np0005464214 gallant_sutherland[276824]:                "ceph.osd_id": "2",
Oct  1 09:43:37 np0005464214 gallant_sutherland[276824]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 09:43:37 np0005464214 gallant_sutherland[276824]:                "ceph.type": "block",
Oct  1 09:43:37 np0005464214 gallant_sutherland[276824]:                "ceph.vdo": "0"
Oct  1 09:43:37 np0005464214 gallant_sutherland[276824]:            },
Oct  1 09:43:37 np0005464214 gallant_sutherland[276824]:            "type": "block",
Oct  1 09:43:37 np0005464214 gallant_sutherland[276824]:            "vg_name": "ceph_vg2"
Oct  1 09:43:37 np0005464214 gallant_sutherland[276824]:        }
Oct  1 09:43:37 np0005464214 gallant_sutherland[276824]:    ]
Oct  1 09:43:37 np0005464214 gallant_sutherland[276824]: }
Oct  1 09:43:37 np0005464214 systemd[1]: libpod-5b5cb2986c05ce194836c2975ac43d43c0c4491e163a0dd4a888392a9925eb4d.scope: Deactivated successfully.
Oct  1 09:43:37 np0005464214 podman[276807]: 2025-10-01 13:43:37.939822068 +0000 UTC m=+1.520306549 container died 5b5cb2986c05ce194836c2975ac43d43c0c4491e163a0dd4a888392a9925eb4d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_sutherland, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Oct  1 09:43:37 np0005464214 systemd[1]: var-lib-containers-storage-overlay-ba59212fc28962c803a063eb51dc3362bd2919f3313eaba638d2a8bd9ac11e57-merged.mount: Deactivated successfully.
Oct  1 09:43:37 np0005464214 podman[276807]: 2025-10-01 13:43:37.99941867 +0000 UTC m=+1.579903191 container remove 5b5cb2986c05ce194836c2975ac43d43c0c4491e163a0dd4a888392a9925eb4d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_sutherland, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Oct  1 09:43:38 np0005464214 systemd[1]: libpod-conmon-5b5cb2986c05ce194836c2975ac43d43c0c4491e163a0dd4a888392a9925eb4d.scope: Deactivated successfully.
Oct  1 09:43:38 np0005464214 podman[276990]: 2025-10-01 13:43:38.702642902 +0000 UTC m=+0.056417392 container create 796fc84d08df89523462784d368fb5ea6936c227f90e5d3716fe8d53c05f1d5d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_robinson, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 09:43:38 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1182: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:43:38 np0005464214 systemd[1]: Started libpod-conmon-796fc84d08df89523462784d368fb5ea6936c227f90e5d3716fe8d53c05f1d5d.scope.
Oct  1 09:43:38 np0005464214 podman[276990]: 2025-10-01 13:43:38.678049306 +0000 UTC m=+0.031823876 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 09:43:38 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:43:38 np0005464214 podman[276990]: 2025-10-01 13:43:38.806343416 +0000 UTC m=+0.160117996 container init 796fc84d08df89523462784d368fb5ea6936c227f90e5d3716fe8d53c05f1d5d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_robinson, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef)
Oct  1 09:43:38 np0005464214 podman[276990]: 2025-10-01 13:43:38.818397447 +0000 UTC m=+0.172171967 container start 796fc84d08df89523462784d368fb5ea6936c227f90e5d3716fe8d53c05f1d5d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_robinson, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 09:43:38 np0005464214 podman[276990]: 2025-10-01 13:43:38.822901999 +0000 UTC m=+0.176676569 container attach 796fc84d08df89523462784d368fb5ea6936c227f90e5d3716fe8d53c05f1d5d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_robinson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 09:43:38 np0005464214 amazing_robinson[277007]: 167 167
Oct  1 09:43:38 np0005464214 systemd[1]: libpod-796fc84d08df89523462784d368fb5ea6936c227f90e5d3716fe8d53c05f1d5d.scope: Deactivated successfully.
Oct  1 09:43:38 np0005464214 podman[276990]: 2025-10-01 13:43:38.826932466 +0000 UTC m=+0.180706986 container died 796fc84d08df89523462784d368fb5ea6936c227f90e5d3716fe8d53c05f1d5d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_robinson, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507)
Oct  1 09:43:38 np0005464214 systemd[1]: var-lib-containers-storage-overlay-d2de934d21f9f43e5ee59ae1ddb7bddf003de21350685fb01dab15b83e9afa81-merged.mount: Deactivated successfully.
Oct  1 09:43:38 np0005464214 podman[276990]: 2025-10-01 13:43:38.877768961 +0000 UTC m=+0.231543481 container remove 796fc84d08df89523462784d368fb5ea6936c227f90e5d3716fe8d53c05f1d5d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_robinson, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct  1 09:43:38 np0005464214 systemd[1]: libpod-conmon-796fc84d08df89523462784d368fb5ea6936c227f90e5d3716fe8d53c05f1d5d.scope: Deactivated successfully.
Oct  1 09:43:39 np0005464214 podman[277030]: 2025-10-01 13:43:39.142906253 +0000 UTC m=+0.059750838 container create 91a993d7110f94e15fd918661610d51140d51897b812f01baae83883992b829e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_mirzakhani, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 09:43:39 np0005464214 systemd[1]: Started libpod-conmon-91a993d7110f94e15fd918661610d51140d51897b812f01baae83883992b829e.scope.
Oct  1 09:43:39 np0005464214 podman[277030]: 2025-10-01 13:43:39.124096928 +0000 UTC m=+0.040941533 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 09:43:39 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:43:39 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b8a801374d3f859799663d5bcd9126b5c6d3839fa4a297d1cafa41e930c44ff/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 09:43:39 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b8a801374d3f859799663d5bcd9126b5c6d3839fa4a297d1cafa41e930c44ff/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 09:43:39 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b8a801374d3f859799663d5bcd9126b5c6d3839fa4a297d1cafa41e930c44ff/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 09:43:39 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b8a801374d3f859799663d5bcd9126b5c6d3839fa4a297d1cafa41e930c44ff/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 09:43:39 np0005464214 podman[277030]: 2025-10-01 13:43:39.251410818 +0000 UTC m=+0.168255413 container init 91a993d7110f94e15fd918661610d51140d51897b812f01baae83883992b829e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_mirzakhani, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 09:43:39 np0005464214 podman[277030]: 2025-10-01 13:43:39.266880666 +0000 UTC m=+0.183725271 container start 91a993d7110f94e15fd918661610d51140d51897b812f01baae83883992b829e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_mirzakhani, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 09:43:39 np0005464214 podman[277030]: 2025-10-01 13:43:39.271227834 +0000 UTC m=+0.188072459 container attach 91a993d7110f94e15fd918661610d51140d51897b812f01baae83883992b829e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_mirzakhani, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Oct  1 09:43:40 np0005464214 brave_mirzakhani[277046]: {
Oct  1 09:43:40 np0005464214 brave_mirzakhani[277046]:    "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982": {
Oct  1 09:43:40 np0005464214 brave_mirzakhani[277046]:        "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 09:43:40 np0005464214 brave_mirzakhani[277046]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  1 09:43:40 np0005464214 brave_mirzakhani[277046]:        "osd_id": 0,
Oct  1 09:43:40 np0005464214 brave_mirzakhani[277046]:        "osd_uuid": "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982",
Oct  1 09:43:40 np0005464214 brave_mirzakhani[277046]:        "type": "bluestore"
Oct  1 09:43:40 np0005464214 brave_mirzakhani[277046]:    },
Oct  1 09:43:40 np0005464214 brave_mirzakhani[277046]:    "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9": {
Oct  1 09:43:40 np0005464214 brave_mirzakhani[277046]:        "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 09:43:40 np0005464214 brave_mirzakhani[277046]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  1 09:43:40 np0005464214 brave_mirzakhani[277046]:        "osd_id": 2,
Oct  1 09:43:40 np0005464214 brave_mirzakhani[277046]:        "osd_uuid": "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9",
Oct  1 09:43:40 np0005464214 brave_mirzakhani[277046]:        "type": "bluestore"
Oct  1 09:43:40 np0005464214 brave_mirzakhani[277046]:    },
Oct  1 09:43:40 np0005464214 brave_mirzakhani[277046]:    "f5852bc7-e830-489a-b8a9-42dfbbe71426": {
Oct  1 09:43:40 np0005464214 brave_mirzakhani[277046]:        "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 09:43:40 np0005464214 brave_mirzakhani[277046]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  1 09:43:40 np0005464214 brave_mirzakhani[277046]:        "osd_id": 1,
Oct  1 09:43:40 np0005464214 brave_mirzakhani[277046]:        "osd_uuid": "f5852bc7-e830-489a-b8a9-42dfbbe71426",
Oct  1 09:43:40 np0005464214 brave_mirzakhani[277046]:        "type": "bluestore"
Oct  1 09:43:40 np0005464214 brave_mirzakhani[277046]:    }
Oct  1 09:43:40 np0005464214 brave_mirzakhani[277046]: }
Oct  1 09:43:40 np0005464214 systemd[1]: libpod-91a993d7110f94e15fd918661610d51140d51897b812f01baae83883992b829e.scope: Deactivated successfully.
Oct  1 09:43:40 np0005464214 systemd[1]: libpod-91a993d7110f94e15fd918661610d51140d51897b812f01baae83883992b829e.scope: Consumed 1.060s CPU time.
Oct  1 09:43:40 np0005464214 conmon[277046]: conmon 91a993d7110f94e15fd9 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-91a993d7110f94e15fd918661610d51140d51897b812f01baae83883992b829e.scope/container/memory.events
Oct  1 09:43:40 np0005464214 podman[277030]: 2025-10-01 13:43:40.320821511 +0000 UTC m=+1.237666096 container died 91a993d7110f94e15fd918661610d51140d51897b812f01baae83883992b829e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_mirzakhani, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 09:43:40 np0005464214 systemd[1]: var-lib-containers-storage-overlay-8b8a801374d3f859799663d5bcd9126b5c6d3839fa4a297d1cafa41e930c44ff-merged.mount: Deactivated successfully.
Oct  1 09:43:40 np0005464214 podman[277030]: 2025-10-01 13:43:40.371494751 +0000 UTC m=+1.288339336 container remove 91a993d7110f94e15fd918661610d51140d51897b812f01baae83883992b829e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_mirzakhani, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 09:43:40 np0005464214 systemd[1]: libpod-conmon-91a993d7110f94e15fd918661610d51140d51897b812f01baae83883992b829e.scope: Deactivated successfully.
Oct  1 09:43:40 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  1 09:43:40 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:43:40 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  1 09:43:40 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:43:40 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev ced96db3-49b1-4947-8a68-510fc26cc28e does not exist
Oct  1 09:43:40 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev 8130182d-368d-4663-be9e-0882f71d6f89 does not exist
Oct  1 09:43:40 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1183: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:43:41 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:43:41 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:43:42 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 09:43:42 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1184: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:43:44 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1185: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:43:46 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1186: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:43:47 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 09:43:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 09:43:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 09:43:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 09:43:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 09:43:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 09:43:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 09:43:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] Optimize plan auto_2025-10-01_13:43:47
Oct  1 09:43:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  1 09:43:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] do_upmap
Oct  1 09:43:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] pools ['.mgr', 'volumes', 'backups', 'default.rgw.control', 'default.rgw.meta', 'default.rgw.log', 'vms', 'images', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', '.rgw.root']
Oct  1 09:43:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] prepared 0/10 changes
Oct  1 09:43:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  1 09:43:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  1 09:43:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  1 09:43:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  1 09:43:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  1 09:43:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  1 09:43:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  1 09:43:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  1 09:43:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  1 09:43:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  1 09:43:48 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1187: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:43:50 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1188: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:43:51 np0005464214 podman[277144]: 2025-10-01 13:43:51.524657986 +0000 UTC m=+0.071344824 container health_status a1dc94033b3cdaf0c1939938a446bc6911aa0e2bf0fa0c22c3e618390e8d96e1 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=36bccb96575468ec919301205d8daa2c, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20250923, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team)
Oct  1 09:43:51 np0005464214 podman[277145]: 2025-10-01 13:43:51.529641503 +0000 UTC m=+0.071948452 container health_status c13c85560319b246b9406aed1b6b3015b2c424d3ffff37d0301b6634f5976b0d (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20250923, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct  1 09:43:51 np0005464214 podman[277149]: 2025-10-01 13:43:51.542456009 +0000 UTC m=+0.072368727 container health_status dfa509645741d50db256c392f2c7e50ef8a1653f296eb853aefbe3d2ba4746c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20250923, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Oct  1 09:43:51 np0005464214 podman[277143]: 2025-10-01 13:43:51.562885743 +0000 UTC m=+0.112179673 container health_status 583cde33fa111e3f81ff9a7adf51b7bee43cd123d54d60383b2d474b3b5066ad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=36bccb96575468ec919301205d8daa2c, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250923)
Oct  1 09:43:52 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 09:43:52 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1189: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:43:54 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1190: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:43:55 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  1 09:43:55 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3603793647' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  1 09:43:55 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  1 09:43:55 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3603793647' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  1 09:43:56 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1191: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:43:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] _maybe_adjust
Oct  1 09:43:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:43:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  1 09:43:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:43:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 09:43:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:43:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 09:43:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:43:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 09:43:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:43:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Oct  1 09:43:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:43:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct  1 09:43:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:43:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 09:43:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:43:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  1 09:43:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:43:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  1 09:43:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:43:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 09:43:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:43:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  1 09:43:57 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 09:43:58 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1192: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:44:00 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1193: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:44:01 np0005464214 nova_compute[260022]: 2025-10-01 13:44:01.345 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 09:44:01 np0005464214 nova_compute[260022]: 2025-10-01 13:44:01.384 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 09:44:01 np0005464214 nova_compute[260022]: 2025-10-01 13:44:01.385 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 09:44:01 np0005464214 nova_compute[260022]: 2025-10-01 13:44:01.385 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 09:44:01 np0005464214 nova_compute[260022]: 2025-10-01 13:44:01.385 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  1 09:44:01 np0005464214 nova_compute[260022]: 2025-10-01 13:44:01.385 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 09:44:01 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  1 09:44:01 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3607660270' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  1 09:44:01 np0005464214 nova_compute[260022]: 2025-10-01 13:44:01.887 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.502s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 09:44:02 np0005464214 nova_compute[260022]: 2025-10-01 13:44:02.067 2 WARNING nova.virt.libvirt.driver [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  1 09:44:02 np0005464214 nova_compute[260022]: 2025-10-01 13:44:02.069 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5161MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  1 09:44:02 np0005464214 nova_compute[260022]: 2025-10-01 13:44:02.069 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 09:44:02 np0005464214 nova_compute[260022]: 2025-10-01 13:44:02.069 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 09:44:02 np0005464214 nova_compute[260022]: 2025-10-01 13:44:02.157 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  1 09:44:02 np0005464214 nova_compute[260022]: 2025-10-01 13:44:02.157 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  1 09:44:02 np0005464214 nova_compute[260022]: 2025-10-01 13:44:02.182 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 09:44:02 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 09:44:02 np0005464214 ceph-mon[74802]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #54. Immutable memtables: 0.
Oct  1 09:44:02 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:44:02.405604) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  1 09:44:02 np0005464214 ceph-mon[74802]: rocksdb: [db/flush_job.cc:856] [default] [JOB 27] Flushing memtable with next log file: 54
Oct  1 09:44:02 np0005464214 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759326242405809, "job": 27, "event": "flush_started", "num_memtables": 1, "num_entries": 1440, "num_deletes": 505, "total_data_size": 1847092, "memory_usage": 1885464, "flush_reason": "Manual Compaction"}
Oct  1 09:44:02 np0005464214 ceph-mon[74802]: rocksdb: [db/flush_job.cc:885] [default] [JOB 27] Level-0 flush table #55: started
Oct  1 09:44:02 np0005464214 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759326242422527, "cf_name": "default", "job": 27, "event": "table_file_creation", "file_number": 55, "file_size": 1573272, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 23152, "largest_seqno": 24591, "table_properties": {"data_size": 1567313, "index_size": 2715, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2181, "raw_key_size": 16088, "raw_average_key_size": 18, "raw_value_size": 1553226, "raw_average_value_size": 1833, "num_data_blocks": 123, "num_entries": 847, "num_filter_entries": 847, "num_deletions": 505, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759326127, "oldest_key_time": 1759326127, "file_creation_time": 1759326242, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e1fb0bd7-c6bf-40f2-8bfb-ffd039289f42", "db_session_id": "NJZTWL88H5HSB4Q4NEC9", "orig_file_number": 55, "seqno_to_time_mapping": "N/A"}}
Oct  1 09:44:02 np0005464214 ceph-mon[74802]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 27] Flush lasted 17153 microseconds, and 8923 cpu microseconds.
Oct  1 09:44:02 np0005464214 ceph-mon[74802]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  1 09:44:02 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:44:02.422792) [db/flush_job.cc:967] [default] [JOB 27] Level-0 flush table #55: 1573272 bytes OK
Oct  1 09:44:02 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:44:02.422941) [db/memtable_list.cc:519] [default] Level-0 commit table #55 started
Oct  1 09:44:02 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:44:02.424793) [db/memtable_list.cc:722] [default] Level-0 commit table #55: memtable #1 done
Oct  1 09:44:02 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:44:02.424824) EVENT_LOG_v1 {"time_micros": 1759326242424813, "job": 27, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  1 09:44:02 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:44:02.424856) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  1 09:44:02 np0005464214 ceph-mon[74802]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 27] Try to delete WAL files size 1839627, prev total WAL file size 1839627, number of live WAL files 2.
Oct  1 09:44:02 np0005464214 ceph-mon[74802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000051.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  1 09:44:02 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:44:02.426811) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D00353038' seq:72057594037927935, type:22 .. '6C6F676D00373539' seq:0, type:0; will stop at (end)
Oct  1 09:44:02 np0005464214 ceph-mon[74802]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 28] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  1 09:44:02 np0005464214 ceph-mon[74802]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 27 Base level 0, inputs: [55(1536KB)], [53(9233KB)]
Oct  1 09:44:02 np0005464214 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759326242426879, "job": 28, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [55], "files_L6": [53], "score": -1, "input_data_size": 11028882, "oldest_snapshot_seqno": -1}
Oct  1 09:44:02 np0005464214 ceph-mon[74802]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 28] Generated table #56: 4626 keys, 7867304 bytes, temperature: kUnknown
Oct  1 09:44:02 np0005464214 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759326242487355, "cf_name": "default", "job": 28, "event": "table_file_creation", "file_number": 56, "file_size": 7867304, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7835430, "index_size": 19220, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11589, "raw_key_size": 115497, "raw_average_key_size": 24, "raw_value_size": 7750769, "raw_average_value_size": 1675, "num_data_blocks": 800, "num_entries": 4626, "num_filter_entries": 4626, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759324077, "oldest_key_time": 0, "file_creation_time": 1759326242, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e1fb0bd7-c6bf-40f2-8bfb-ffd039289f42", "db_session_id": "NJZTWL88H5HSB4Q4NEC9", "orig_file_number": 56, "seqno_to_time_mapping": "N/A"}}
Oct  1 09:44:02 np0005464214 ceph-mon[74802]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  1 09:44:02 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:44:02.487713) [db/compaction/compaction_job.cc:1663] [default] [JOB 28] Compacted 1@0 + 1@6 files to L6 => 7867304 bytes
Oct  1 09:44:02 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:44:02.489171) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 182.0 rd, 129.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.5, 9.0 +0.0 blob) out(7.5 +0.0 blob), read-write-amplify(12.0) write-amplify(5.0) OK, records in: 5632, records dropped: 1006 output_compression: NoCompression
Oct  1 09:44:02 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:44:02.489193) EVENT_LOG_v1 {"time_micros": 1759326242489182, "job": 28, "event": "compaction_finished", "compaction_time_micros": 60601, "compaction_time_cpu_micros": 38785, "output_level": 6, "num_output_files": 1, "total_output_size": 7867304, "num_input_records": 5632, "num_output_records": 4626, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  1 09:44:02 np0005464214 ceph-mon[74802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000055.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  1 09:44:02 np0005464214 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759326242489629, "job": 28, "event": "table_file_deletion", "file_number": 55}
Oct  1 09:44:02 np0005464214 ceph-mon[74802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000053.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  1 09:44:02 np0005464214 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759326242491760, "job": 28, "event": "table_file_deletion", "file_number": 53}
Oct  1 09:44:02 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:44:02.426651) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 09:44:02 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:44:02.491849) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 09:44:02 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:44:02.491856) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 09:44:02 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:44:02.491858) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 09:44:02 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:44:02.491859) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 09:44:02 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:44:02.491861) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 09:44:02 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  1 09:44:02 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1180216587' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  1 09:44:02 np0005464214 nova_compute[260022]: 2025-10-01 13:44:02.610 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.428s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 09:44:02 np0005464214 nova_compute[260022]: 2025-10-01 13:44:02.620 2 DEBUG nova.compute.provider_tree [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Inventory has not changed in ProviderTree for provider: c1b9017d-7e6f-44ea-9ee2-bc19313d736f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  1 09:44:02 np0005464214 nova_compute[260022]: 2025-10-01 13:44:02.642 2 DEBUG nova.scheduler.client.report [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Inventory has not changed for provider c1b9017d-7e6f-44ea-9ee2-bc19313d736f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  1 09:44:02 np0005464214 nova_compute[260022]: 2025-10-01 13:44:02.644 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  1 09:44:02 np0005464214 nova_compute[260022]: 2025-10-01 13:44:02.645 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.575s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 09:44:02 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1194: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:44:03 np0005464214 nova_compute[260022]: 2025-10-01 13:44:03.645 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 09:44:04 np0005464214 nova_compute[260022]: 2025-10-01 13:44:04.341 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 09:44:04 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1195: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:44:05 np0005464214 nova_compute[260022]: 2025-10-01 13:44:05.345 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 09:44:05 np0005464214 nova_compute[260022]: 2025-10-01 13:44:05.345 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  1 09:44:06 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1196: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:44:07 np0005464214 nova_compute[260022]: 2025-10-01 13:44:07.345 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 09:44:07 np0005464214 nova_compute[260022]: 2025-10-01 13:44:07.346 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  1 09:44:07 np0005464214 nova_compute[260022]: 2025-10-01 13:44:07.346 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  1 09:44:07 np0005464214 nova_compute[260022]: 2025-10-01 13:44:07.364 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  1 09:44:07 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 09:44:08 np0005464214 nova_compute[260022]: 2025-10-01 13:44:08.344 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 09:44:08 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1197: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:44:10 np0005464214 nova_compute[260022]: 2025-10-01 13:44:10.344 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 09:44:10 np0005464214 nova_compute[260022]: 2025-10-01 13:44:10.345 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 09:44:10 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1198: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:44:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:44:12.313 161890 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 09:44:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:44:12.313 161890 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 09:44:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:44:12.313 161890 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 09:44:12 np0005464214 nova_compute[260022]: 2025-10-01 13:44:12.345 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 09:44:12 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 09:44:12 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1199: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:44:14 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1200: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:44:15 np0005464214 nova_compute[260022]: 2025-10-01 13:44:15.340 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 09:44:16 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1201: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:44:17 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 09:44:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 09:44:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 09:44:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 09:44:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 09:44:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 09:44:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 09:44:18 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1202: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:44:20 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1203: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:44:22 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 09:44:22 np0005464214 podman[277266]: 2025-10-01 13:44:22.53794701 +0000 UTC m=+0.078629084 container health_status a1dc94033b3cdaf0c1939938a446bc6911aa0e2bf0fa0c22c3e618390e8d96e1 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20250923, tcib_build_tag=36bccb96575468ec919301205d8daa2c)
Oct  1 09:44:22 np0005464214 podman[277268]: 2025-10-01 13:44:22.549933349 +0000 UTC m=+0.074569226 container health_status dfa509645741d50db256c392f2c7e50ef8a1653f296eb853aefbe3d2ba4746c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250923, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=36bccb96575468ec919301205d8daa2c)
Oct  1 09:44:22 np0005464214 podman[277267]: 2025-10-01 13:44:22.568646709 +0000 UTC m=+0.093587946 container health_status c13c85560319b246b9406aed1b6b3015b2c424d3ffff37d0301b6634f5976b0d (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=36bccb96575468ec919301205d8daa2c, org.label-schema.build-date=20250923, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true)
Oct  1 09:44:22 np0005464214 podman[277265]: 2025-10-01 13:44:22.582747394 +0000 UTC m=+0.121216428 container health_status 583cde33fa111e3f81ff9a7adf51b7bee43cd123d54d60383b2d474b3b5066ad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250923, tcib_build_tag=36bccb96575468ec919301205d8daa2c, org.label-schema.license=GPLv2)
Oct  1 09:44:22 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1204: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:44:24 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1205: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:44:26 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1206: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:44:27 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 09:44:28 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1207: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:44:29 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:44:29.533 161890 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=5, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'd2:60:33', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '86:05:a5:2a:6f:f1'}, ipsec=False) old=SB_Global(nb_cfg=4) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  1 09:44:29 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:44:29.535 161890 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  1 09:44:30 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1208: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:44:32 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 09:44:32 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1209: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:44:34 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1210: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:44:36 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1211: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:44:37 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 09:44:37 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e146 do_prune osdmap full prune enabled
Oct  1 09:44:37 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e147 e147: 3 total, 3 up, 3 in
Oct  1 09:44:37 np0005464214 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e147: 3 total, 3 up, 3 in
Oct  1 09:44:38 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1213: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 5.5 KiB/s rd, 818 B/s wr, 7 op/s
Oct  1 09:44:39 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:44:39.536 161890 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=7280030e-2ba6-406c-9fae-f8284a927c47, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '5'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct  1 09:44:39 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e147 do_prune osdmap full prune enabled
Oct  1 09:44:39 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e148 e148: 3 total, 3 up, 3 in
Oct  1 09:44:39 np0005464214 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e148: 3 total, 3 up, 3 in
Oct  1 09:44:40 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1215: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 6.9 KiB/s rd, 1023 B/s wr, 8 op/s
Oct  1 09:44:41 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  1 09:44:41 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  1 09:44:41 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  1 09:44:41 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  1 09:44:41 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  1 09:44:41 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:44:41 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev 55f765c8-4254-4bc1-91d9-6991e283102d does not exist
Oct  1 09:44:41 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev 23e60e8a-5395-4cdf-9b94-cc2137eed963 does not exist
Oct  1 09:44:41 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev 83120ba7-da1d-48da-9a45-7feb5291e50f does not exist
Oct  1 09:44:41 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  1 09:44:41 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  1 09:44:41 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  1 09:44:41 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  1 09:44:41 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  1 09:44:41 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  1 09:44:42 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  1 09:44:42 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:44:42 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  1 09:44:42 np0005464214 podman[277621]: 2025-10-01 13:44:42.276350726 +0000 UTC m=+0.045783307 container create 7006c48c585cd3e6857a5c2cb07c9219efcc3e0e6ea24c2b7404b7eb1e673558 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_faraday, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct  1 09:44:42 np0005464214 systemd[1]: Started libpod-conmon-7006c48c585cd3e6857a5c2cb07c9219efcc3e0e6ea24c2b7404b7eb1e673558.scope.
Oct  1 09:44:42 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:44:42 np0005464214 podman[277621]: 2025-10-01 13:44:42.257100898 +0000 UTC m=+0.026533469 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 09:44:42 np0005464214 podman[277621]: 2025-10-01 13:44:42.354784642 +0000 UTC m=+0.124217253 container init 7006c48c585cd3e6857a5c2cb07c9219efcc3e0e6ea24c2b7404b7eb1e673558 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_faraday, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0)
Oct  1 09:44:42 np0005464214 podman[277621]: 2025-10-01 13:44:42.367754342 +0000 UTC m=+0.137186913 container start 7006c48c585cd3e6857a5c2cb07c9219efcc3e0e6ea24c2b7404b7eb1e673558 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_faraday, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Oct  1 09:44:42 np0005464214 podman[277621]: 2025-10-01 13:44:42.370948883 +0000 UTC m=+0.140381464 container attach 7006c48c585cd3e6857a5c2cb07c9219efcc3e0e6ea24c2b7404b7eb1e673558 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_faraday, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 09:44:42 np0005464214 objective_faraday[277637]: 167 167
Oct  1 09:44:42 np0005464214 systemd[1]: libpod-7006c48c585cd3e6857a5c2cb07c9219efcc3e0e6ea24c2b7404b7eb1e673558.scope: Deactivated successfully.
Oct  1 09:44:42 np0005464214 podman[277621]: 2025-10-01 13:44:42.374217056 +0000 UTC m=+0.143649627 container died 7006c48c585cd3e6857a5c2cb07c9219efcc3e0e6ea24c2b7404b7eb1e673558 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_faraday, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 09:44:42 np0005464214 systemd[1]: var-lib-containers-storage-overlay-2cc30307af531efa076ca830181a18aba6633ed126071a919a7cff087a979e42-merged.mount: Deactivated successfully.
Oct  1 09:44:42 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 09:44:42 np0005464214 podman[277621]: 2025-10-01 13:44:42.421068055 +0000 UTC m=+0.190500656 container remove 7006c48c585cd3e6857a5c2cb07c9219efcc3e0e6ea24c2b7404b7eb1e673558 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_faraday, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 09:44:42 np0005464214 systemd[1]: libpod-conmon-7006c48c585cd3e6857a5c2cb07c9219efcc3e0e6ea24c2b7404b7eb1e673558.scope: Deactivated successfully.
Oct  1 09:44:42 np0005464214 podman[277661]: 2025-10-01 13:44:42.61221309 +0000 UTC m=+0.069321660 container create ae07b7e4dcdf57669d3cb56c8b7bda05b906c9f0a5c789ad8b12c14a3f03805f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_lamport, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True)
Oct  1 09:44:42 np0005464214 systemd[1]: Started libpod-conmon-ae07b7e4dcdf57669d3cb56c8b7bda05b906c9f0a5c789ad8b12c14a3f03805f.scope.
Oct  1 09:44:42 np0005464214 podman[277661]: 2025-10-01 13:44:42.584124693 +0000 UTC m=+0.041233303 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 09:44:42 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:44:42 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0c8bb8bc6412e67305d186f85b21ea62cc290208a435f1224e5b3e8902d3259/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 09:44:42 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0c8bb8bc6412e67305d186f85b21ea62cc290208a435f1224e5b3e8902d3259/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 09:44:42 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0c8bb8bc6412e67305d186f85b21ea62cc290208a435f1224e5b3e8902d3259/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 09:44:42 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0c8bb8bc6412e67305d186f85b21ea62cc290208a435f1224e5b3e8902d3259/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 09:44:42 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0c8bb8bc6412e67305d186f85b21ea62cc290208a435f1224e5b3e8902d3259/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  1 09:44:42 np0005464214 podman[277661]: 2025-10-01 13:44:42.713232438 +0000 UTC m=+0.170340998 container init ae07b7e4dcdf57669d3cb56c8b7bda05b906c9f0a5c789ad8b12c14a3f03805f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_lamport, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 09:44:42 np0005464214 podman[277661]: 2025-10-01 13:44:42.725969071 +0000 UTC m=+0.183077641 container start ae07b7e4dcdf57669d3cb56c8b7bda05b906c9f0a5c789ad8b12c14a3f03805f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_lamport, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 09:44:42 np0005464214 podman[277661]: 2025-10-01 13:44:42.73036821 +0000 UTC m=+0.187476840 container attach ae07b7e4dcdf57669d3cb56c8b7bda05b906c9f0a5c789ad8b12c14a3f03805f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_lamport, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 09:44:42 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1216: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 4.1 KiB/s wr, 49 op/s
Oct  1 09:44:43 np0005464214 confident_lamport[277678]: --> passed data devices: 0 physical, 3 LVM
Oct  1 09:44:43 np0005464214 confident_lamport[277678]: --> relative data size: 1.0
Oct  1 09:44:43 np0005464214 confident_lamport[277678]: --> All data devices are unavailable
Oct  1 09:44:43 np0005464214 systemd[1]: libpod-ae07b7e4dcdf57669d3cb56c8b7bda05b906c9f0a5c789ad8b12c14a3f03805f.scope: Deactivated successfully.
Oct  1 09:44:43 np0005464214 systemd[1]: libpod-ae07b7e4dcdf57669d3cb56c8b7bda05b906c9f0a5c789ad8b12c14a3f03805f.scope: Consumed 1.136s CPU time.
Oct  1 09:44:43 np0005464214 podman[277661]: 2025-10-01 13:44:43.906458801 +0000 UTC m=+1.363567361 container died ae07b7e4dcdf57669d3cb56c8b7bda05b906c9f0a5c789ad8b12c14a3f03805f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_lamport, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Oct  1 09:44:43 np0005464214 systemd[1]: var-lib-containers-storage-overlay-b0c8bb8bc6412e67305d186f85b21ea62cc290208a435f1224e5b3e8902d3259-merged.mount: Deactivated successfully.
Oct  1 09:44:43 np0005464214 podman[277661]: 2025-10-01 13:44:43.988067168 +0000 UTC m=+1.445175728 container remove ae07b7e4dcdf57669d3cb56c8b7bda05b906c9f0a5c789ad8b12c14a3f03805f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_lamport, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 09:44:43 np0005464214 systemd[1]: libpod-conmon-ae07b7e4dcdf57669d3cb56c8b7bda05b906c9f0a5c789ad8b12c14a3f03805f.scope: Deactivated successfully.
Oct  1 09:44:44 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1217: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 4.1 KiB/s wr, 49 op/s
Oct  1 09:44:44 np0005464214 podman[277859]: 2025-10-01 13:44:44.848530525 +0000 UTC m=+0.051850179 container create 6be27dfaa3ec6731b960a42a07e48b6c912ade05e38dd6f4f4cd4b5549e3fc22 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_carson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 09:44:44 np0005464214 systemd[1]: Started libpod-conmon-6be27dfaa3ec6731b960a42a07e48b6c912ade05e38dd6f4f4cd4b5549e3fc22.scope.
Oct  1 09:44:44 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:44:44 np0005464214 podman[277859]: 2025-10-01 13:44:44.82526688 +0000 UTC m=+0.028586634 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 09:44:44 np0005464214 podman[277859]: 2025-10-01 13:44:44.937320037 +0000 UTC m=+0.140639731 container init 6be27dfaa3ec6731b960a42a07e48b6c912ade05e38dd6f4f4cd4b5549e3fc22 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_carson, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507)
Oct  1 09:44:44 np0005464214 podman[277859]: 2025-10-01 13:44:44.948919834 +0000 UTC m=+0.152239508 container start 6be27dfaa3ec6731b960a42a07e48b6c912ade05e38dd6f4f4cd4b5549e3fc22 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_carson, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Oct  1 09:44:44 np0005464214 podman[277859]: 2025-10-01 13:44:44.952822346 +0000 UTC m=+0.156142070 container attach 6be27dfaa3ec6731b960a42a07e48b6c912ade05e38dd6f4f4cd4b5549e3fc22 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_carson, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 09:44:44 np0005464214 adoring_carson[277875]: 167 167
Oct  1 09:44:44 np0005464214 systemd[1]: libpod-6be27dfaa3ec6731b960a42a07e48b6c912ade05e38dd6f4f4cd4b5549e3fc22.scope: Deactivated successfully.
Oct  1 09:44:44 np0005464214 podman[277859]: 2025-10-01 13:44:44.95703223 +0000 UTC m=+0.160351914 container died 6be27dfaa3ec6731b960a42a07e48b6c912ade05e38dd6f4f4cd4b5549e3fc22 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_carson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct  1 09:44:44 np0005464214 systemd[1]: var-lib-containers-storage-overlay-b6833f2d423a0ea77bc61744c224bde2a67191152b708bd7ac737de4b6574d4a-merged.mount: Deactivated successfully.
Oct  1 09:44:45 np0005464214 podman[277859]: 2025-10-01 13:44:45.005462409 +0000 UTC m=+0.208782093 container remove 6be27dfaa3ec6731b960a42a07e48b6c912ade05e38dd6f4f4cd4b5549e3fc22 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_carson, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True)
Oct  1 09:44:45 np0005464214 systemd[1]: libpod-conmon-6be27dfaa3ec6731b960a42a07e48b6c912ade05e38dd6f4f4cd4b5549e3fc22.scope: Deactivated successfully.
Oct  1 09:44:45 np0005464214 podman[277899]: 2025-10-01 13:44:45.300290307 +0000 UTC m=+0.070702483 container create f4b36588ec29c8df5c3d9f388ffcd5992c7223edb1f935de77875264f56328cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_carver, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 09:44:45 np0005464214 systemd[1]: Started libpod-conmon-f4b36588ec29c8df5c3d9f388ffcd5992c7223edb1f935de77875264f56328cb.scope.
Oct  1 09:44:45 np0005464214 podman[277899]: 2025-10-01 13:44:45.272147778 +0000 UTC m=+0.042560004 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 09:44:45 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:44:45 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b8225886cfcf4685addc5b4b468b11e0e8ebe7687b836223aa59d664594e4f1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 09:44:45 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b8225886cfcf4685addc5b4b468b11e0e8ebe7687b836223aa59d664594e4f1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 09:44:45 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b8225886cfcf4685addc5b4b468b11e0e8ebe7687b836223aa59d664594e4f1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 09:44:45 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b8225886cfcf4685addc5b4b468b11e0e8ebe7687b836223aa59d664594e4f1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 09:44:45 np0005464214 podman[277899]: 2025-10-01 13:44:45.423003752 +0000 UTC m=+0.193415978 container init f4b36588ec29c8df5c3d9f388ffcd5992c7223edb1f935de77875264f56328cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_carver, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct  1 09:44:45 np0005464214 podman[277899]: 2025-10-01 13:44:45.43971867 +0000 UTC m=+0.210130846 container start f4b36588ec29c8df5c3d9f388ffcd5992c7223edb1f935de77875264f56328cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_carver, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct  1 09:44:45 np0005464214 podman[277899]: 2025-10-01 13:44:45.443605722 +0000 UTC m=+0.214017908 container attach f4b36588ec29c8df5c3d9f388ffcd5992c7223edb1f935de77875264f56328cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_carver, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3)
Oct  1 09:44:46 np0005464214 nice_carver[277915]: {
Oct  1 09:44:46 np0005464214 nice_carver[277915]:    "0": [
Oct  1 09:44:46 np0005464214 nice_carver[277915]:        {
Oct  1 09:44:46 np0005464214 nice_carver[277915]:            "devices": [
Oct  1 09:44:46 np0005464214 nice_carver[277915]:                "/dev/loop3"
Oct  1 09:44:46 np0005464214 nice_carver[277915]:            ],
Oct  1 09:44:46 np0005464214 nice_carver[277915]:            "lv_name": "ceph_lv0",
Oct  1 09:44:46 np0005464214 nice_carver[277915]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  1 09:44:46 np0005464214 nice_carver[277915]:            "lv_size": "21470642176",
Oct  1 09:44:46 np0005464214 nice_carver[277915]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 09:44:46 np0005464214 nice_carver[277915]:            "lv_uuid": "wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS",
Oct  1 09:44:46 np0005464214 nice_carver[277915]:            "name": "ceph_lv0",
Oct  1 09:44:46 np0005464214 nice_carver[277915]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  1 09:44:46 np0005464214 nice_carver[277915]:            "tags": {
Oct  1 09:44:46 np0005464214 nice_carver[277915]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  1 09:44:46 np0005464214 nice_carver[277915]:                "ceph.block_uuid": "wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS",
Oct  1 09:44:46 np0005464214 nice_carver[277915]:                "ceph.cephx_lockbox_secret": "",
Oct  1 09:44:46 np0005464214 nice_carver[277915]:                "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 09:44:46 np0005464214 nice_carver[277915]:                "ceph.cluster_name": "ceph",
Oct  1 09:44:46 np0005464214 nice_carver[277915]:                "ceph.crush_device_class": "",
Oct  1 09:44:46 np0005464214 nice_carver[277915]:                "ceph.encrypted": "0",
Oct  1 09:44:46 np0005464214 nice_carver[277915]:                "ceph.osd_fsid": "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982",
Oct  1 09:44:46 np0005464214 nice_carver[277915]:                "ceph.osd_id": "0",
Oct  1 09:44:46 np0005464214 nice_carver[277915]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 09:44:46 np0005464214 nice_carver[277915]:                "ceph.type": "block",
Oct  1 09:44:46 np0005464214 nice_carver[277915]:                "ceph.vdo": "0"
Oct  1 09:44:46 np0005464214 nice_carver[277915]:            },
Oct  1 09:44:46 np0005464214 nice_carver[277915]:            "type": "block",
Oct  1 09:44:46 np0005464214 nice_carver[277915]:            "vg_name": "ceph_vg0"
Oct  1 09:44:46 np0005464214 nice_carver[277915]:        }
Oct  1 09:44:46 np0005464214 nice_carver[277915]:    ],
Oct  1 09:44:46 np0005464214 nice_carver[277915]:    "1": [
Oct  1 09:44:46 np0005464214 nice_carver[277915]:        {
Oct  1 09:44:46 np0005464214 nice_carver[277915]:            "devices": [
Oct  1 09:44:46 np0005464214 nice_carver[277915]:                "/dev/loop4"
Oct  1 09:44:46 np0005464214 nice_carver[277915]:            ],
Oct  1 09:44:46 np0005464214 nice_carver[277915]:            "lv_name": "ceph_lv1",
Oct  1 09:44:46 np0005464214 nice_carver[277915]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  1 09:44:46 np0005464214 nice_carver[277915]:            "lv_size": "21470642176",
Oct  1 09:44:46 np0005464214 nice_carver[277915]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f5852bc7-e830-489a-b8a9-42dfbbe71426,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 09:44:46 np0005464214 nice_carver[277915]:            "lv_uuid": "BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY",
Oct  1 09:44:46 np0005464214 nice_carver[277915]:            "name": "ceph_lv1",
Oct  1 09:44:46 np0005464214 nice_carver[277915]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  1 09:44:46 np0005464214 nice_carver[277915]:            "tags": {
Oct  1 09:44:46 np0005464214 nice_carver[277915]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  1 09:44:46 np0005464214 nice_carver[277915]:                "ceph.block_uuid": "BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY",
Oct  1 09:44:46 np0005464214 nice_carver[277915]:                "ceph.cephx_lockbox_secret": "",
Oct  1 09:44:46 np0005464214 nice_carver[277915]:                "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 09:44:46 np0005464214 nice_carver[277915]:                "ceph.cluster_name": "ceph",
Oct  1 09:44:46 np0005464214 nice_carver[277915]:                "ceph.crush_device_class": "",
Oct  1 09:44:46 np0005464214 nice_carver[277915]:                "ceph.encrypted": "0",
Oct  1 09:44:46 np0005464214 nice_carver[277915]:                "ceph.osd_fsid": "f5852bc7-e830-489a-b8a9-42dfbbe71426",
Oct  1 09:44:46 np0005464214 nice_carver[277915]:                "ceph.osd_id": "1",
Oct  1 09:44:46 np0005464214 nice_carver[277915]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 09:44:46 np0005464214 nice_carver[277915]:                "ceph.type": "block",
Oct  1 09:44:46 np0005464214 nice_carver[277915]:                "ceph.vdo": "0"
Oct  1 09:44:46 np0005464214 nice_carver[277915]:            },
Oct  1 09:44:46 np0005464214 nice_carver[277915]:            "type": "block",
Oct  1 09:44:46 np0005464214 nice_carver[277915]:            "vg_name": "ceph_vg1"
Oct  1 09:44:46 np0005464214 nice_carver[277915]:        }
Oct  1 09:44:46 np0005464214 nice_carver[277915]:    ],
Oct  1 09:44:46 np0005464214 nice_carver[277915]:    "2": [
Oct  1 09:44:46 np0005464214 nice_carver[277915]:        {
Oct  1 09:44:46 np0005464214 nice_carver[277915]:            "devices": [
Oct  1 09:44:46 np0005464214 nice_carver[277915]:                "/dev/loop5"
Oct  1 09:44:46 np0005464214 nice_carver[277915]:            ],
Oct  1 09:44:46 np0005464214 nice_carver[277915]:            "lv_name": "ceph_lv2",
Oct  1 09:44:46 np0005464214 nice_carver[277915]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  1 09:44:46 np0005464214 nice_carver[277915]:            "lv_size": "21470642176",
Oct  1 09:44:46 np0005464214 nice_carver[277915]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c4c937e2-a8a8-47c3-af37-fdedb6fff1f9,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 09:44:46 np0005464214 nice_carver[277915]:            "lv_uuid": "mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK",
Oct  1 09:44:46 np0005464214 nice_carver[277915]:            "name": "ceph_lv2",
Oct  1 09:44:46 np0005464214 nice_carver[277915]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  1 09:44:46 np0005464214 nice_carver[277915]:            "tags": {
Oct  1 09:44:46 np0005464214 nice_carver[277915]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  1 09:44:46 np0005464214 nice_carver[277915]:                "ceph.block_uuid": "mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK",
Oct  1 09:44:46 np0005464214 nice_carver[277915]:                "ceph.cephx_lockbox_secret": "",
Oct  1 09:44:46 np0005464214 nice_carver[277915]:                "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 09:44:46 np0005464214 nice_carver[277915]:                "ceph.cluster_name": "ceph",
Oct  1 09:44:46 np0005464214 nice_carver[277915]:                "ceph.crush_device_class": "",
Oct  1 09:44:46 np0005464214 nice_carver[277915]:                "ceph.encrypted": "0",
Oct  1 09:44:46 np0005464214 nice_carver[277915]:                "ceph.osd_fsid": "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9",
Oct  1 09:44:46 np0005464214 nice_carver[277915]:                "ceph.osd_id": "2",
Oct  1 09:44:46 np0005464214 nice_carver[277915]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 09:44:46 np0005464214 nice_carver[277915]:                "ceph.type": "block",
Oct  1 09:44:46 np0005464214 nice_carver[277915]:                "ceph.vdo": "0"
Oct  1 09:44:46 np0005464214 nice_carver[277915]:            },
Oct  1 09:44:46 np0005464214 nice_carver[277915]:            "type": "block",
Oct  1 09:44:46 np0005464214 nice_carver[277915]:            "vg_name": "ceph_vg2"
Oct  1 09:44:46 np0005464214 nice_carver[277915]:        }
Oct  1 09:44:46 np0005464214 nice_carver[277915]:    ]
Oct  1 09:44:46 np0005464214 nice_carver[277915]: }
Oct  1 09:44:46 np0005464214 systemd[1]: libpod-f4b36588ec29c8df5c3d9f388ffcd5992c7223edb1f935de77875264f56328cb.scope: Deactivated successfully.
Oct  1 09:44:46 np0005464214 podman[277899]: 2025-10-01 13:44:46.233465679 +0000 UTC m=+1.003877835 container died f4b36588ec29c8df5c3d9f388ffcd5992c7223edb1f935de77875264f56328cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_carver, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 09:44:46 np0005464214 systemd[1]: var-lib-containers-storage-overlay-8b8225886cfcf4685addc5b4b468b11e0e8ebe7687b836223aa59d664594e4f1-merged.mount: Deactivated successfully.
Oct  1 09:44:46 np0005464214 podman[277899]: 2025-10-01 13:44:46.310352227 +0000 UTC m=+1.080764403 container remove f4b36588ec29c8df5c3d9f388ffcd5992c7223edb1f935de77875264f56328cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_carver, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 09:44:46 np0005464214 systemd[1]: libpod-conmon-f4b36588ec29c8df5c3d9f388ffcd5992c7223edb1f935de77875264f56328cb.scope: Deactivated successfully.
Oct  1 09:44:46 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1218: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 2.8 KiB/s wr, 37 op/s
Oct  1 09:44:47 np0005464214 podman[278075]: 2025-10-01 13:44:47.247518025 +0000 UTC m=+0.072008034 container create 0eb13c04c852198b118c36c03ec67405039161229dfaa0fa4e72298d135344aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_varahamihira, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 09:44:47 np0005464214 systemd[1]: Started libpod-conmon-0eb13c04c852198b118c36c03ec67405039161229dfaa0fa4e72298d135344aa.scope.
Oct  1 09:44:47 np0005464214 podman[278075]: 2025-10-01 13:44:47.214914375 +0000 UTC m=+0.039404454 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 09:44:47 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:44:47 np0005464214 podman[278075]: 2025-10-01 13:44:47.332612271 +0000 UTC m=+0.157102270 container init 0eb13c04c852198b118c36c03ec67405039161229dfaa0fa4e72298d135344aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_varahamihira, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Oct  1 09:44:47 np0005464214 podman[278075]: 2025-10-01 13:44:47.33891967 +0000 UTC m=+0.163409669 container start 0eb13c04c852198b118c36c03ec67405039161229dfaa0fa4e72298d135344aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_varahamihira, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 09:44:47 np0005464214 podman[278075]: 2025-10-01 13:44:47.343157395 +0000 UTC m=+0.167647434 container attach 0eb13c04c852198b118c36c03ec67405039161229dfaa0fa4e72298d135344aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_varahamihira, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 09:44:47 np0005464214 compassionate_varahamihira[278092]: 167 167
Oct  1 09:44:47 np0005464214 systemd[1]: libpod-0eb13c04c852198b118c36c03ec67405039161229dfaa0fa4e72298d135344aa.scope: Deactivated successfully.
Oct  1 09:44:47 np0005464214 podman[278075]: 2025-10-01 13:44:47.346149199 +0000 UTC m=+0.170639198 container died 0eb13c04c852198b118c36c03ec67405039161229dfaa0fa4e72298d135344aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_varahamihira, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default)
Oct  1 09:44:47 np0005464214 systemd[1]: var-lib-containers-storage-overlay-5a29d1f5cc33bbbcdb99aee0fc63d2a62bc4762cd829ad8cd526af079a55f304-merged.mount: Deactivated successfully.
Oct  1 09:44:47 np0005464214 podman[278075]: 2025-10-01 13:44:47.38579323 +0000 UTC m=+0.210283229 container remove 0eb13c04c852198b118c36c03ec67405039161229dfaa0fa4e72298d135344aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_varahamihira, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 09:44:47 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 09:44:47 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e148 do_prune osdmap full prune enabled
Oct  1 09:44:47 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e149 e149: 3 total, 3 up, 3 in
Oct  1 09:44:47 np0005464214 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e149: 3 total, 3 up, 3 in
Oct  1 09:44:47 np0005464214 systemd[1]: libpod-conmon-0eb13c04c852198b118c36c03ec67405039161229dfaa0fa4e72298d135344aa.scope: Deactivated successfully.
Oct  1 09:44:47 np0005464214 podman[278116]: 2025-10-01 13:44:47.579839907 +0000 UTC m=+0.050547507 container create f93c1b37d7187d6add62a3fe2117f0534532c26033be617f91d81b1c199cd3a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_pasteur, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Oct  1 09:44:47 np0005464214 systemd[1]: Started libpod-conmon-f93c1b37d7187d6add62a3fe2117f0534532c26033be617f91d81b1c199cd3a4.scope.
Oct  1 09:44:47 np0005464214 podman[278116]: 2025-10-01 13:44:47.55839188 +0000 UTC m=+0.029099470 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 09:44:47 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:44:47 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68350acb2449d54c01bd6bc20b38a57f4289cd28611f12e571dd1632c06f0c3b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 09:44:47 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68350acb2449d54c01bd6bc20b38a57f4289cd28611f12e571dd1632c06f0c3b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 09:44:47 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68350acb2449d54c01bd6bc20b38a57f4289cd28611f12e571dd1632c06f0c3b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 09:44:47 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68350acb2449d54c01bd6bc20b38a57f4289cd28611f12e571dd1632c06f0c3b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 09:44:47 np0005464214 rsyslogd[1009]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct  1 09:44:47 np0005464214 podman[278116]: 2025-10-01 13:44:47.688561909 +0000 UTC m=+0.159269519 container init f93c1b37d7187d6add62a3fe2117f0534532c26033be617f91d81b1c199cd3a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_pasteur, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 09:44:47 np0005464214 podman[278116]: 2025-10-01 13:44:47.707284131 +0000 UTC m=+0.177991731 container start f93c1b37d7187d6add62a3fe2117f0534532c26033be617f91d81b1c199cd3a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_pasteur, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True)
Oct  1 09:44:47 np0005464214 podman[278116]: 2025-10-01 13:44:47.711896326 +0000 UTC m=+0.182603936 container attach f93c1b37d7187d6add62a3fe2117f0534532c26033be617f91d81b1c199cd3a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_pasteur, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default)
Oct  1 09:44:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 09:44:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 09:44:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 09:44:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 09:44:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 09:44:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 09:44:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] Optimize plan auto_2025-10-01_13:44:47
Oct  1 09:44:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  1 09:44:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] do_upmap
Oct  1 09:44:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] pools ['.rgw.root', 'vms', 'cephfs.cephfs.data', 'images', 'cephfs.cephfs.meta', 'volumes', '.mgr', 'default.rgw.meta', 'backups', 'default.rgw.log', 'default.rgw.control']
Oct  1 09:44:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] prepared 0/10 changes
Oct  1 09:44:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  1 09:44:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  1 09:44:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  1 09:44:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  1 09:44:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  1 09:44:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  1 09:44:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  1 09:44:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  1 09:44:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  1 09:44:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  1 09:44:48 np0005464214 determined_pasteur[278133]: {
Oct  1 09:44:48 np0005464214 determined_pasteur[278133]:    "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982": {
Oct  1 09:44:48 np0005464214 determined_pasteur[278133]:        "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 09:44:48 np0005464214 determined_pasteur[278133]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  1 09:44:48 np0005464214 determined_pasteur[278133]:        "osd_id": 0,
Oct  1 09:44:48 np0005464214 determined_pasteur[278133]:        "osd_uuid": "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982",
Oct  1 09:44:48 np0005464214 determined_pasteur[278133]:        "type": "bluestore"
Oct  1 09:44:48 np0005464214 determined_pasteur[278133]:    },
Oct  1 09:44:48 np0005464214 determined_pasteur[278133]:    "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9": {
Oct  1 09:44:48 np0005464214 determined_pasteur[278133]:        "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 09:44:48 np0005464214 determined_pasteur[278133]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  1 09:44:48 np0005464214 determined_pasteur[278133]:        "osd_id": 2,
Oct  1 09:44:48 np0005464214 determined_pasteur[278133]:        "osd_uuid": "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9",
Oct  1 09:44:48 np0005464214 determined_pasteur[278133]:        "type": "bluestore"
Oct  1 09:44:48 np0005464214 determined_pasteur[278133]:    },
Oct  1 09:44:48 np0005464214 determined_pasteur[278133]:    "f5852bc7-e830-489a-b8a9-42dfbbe71426": {
Oct  1 09:44:48 np0005464214 determined_pasteur[278133]:        "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 09:44:48 np0005464214 determined_pasteur[278133]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  1 09:44:48 np0005464214 determined_pasteur[278133]:        "osd_id": 1,
Oct  1 09:44:48 np0005464214 determined_pasteur[278133]:        "osd_uuid": "f5852bc7-e830-489a-b8a9-42dfbbe71426",
Oct  1 09:44:48 np0005464214 determined_pasteur[278133]:        "type": "bluestore"
Oct  1 09:44:48 np0005464214 determined_pasteur[278133]:    }
Oct  1 09:44:48 np0005464214 determined_pasteur[278133]: }
Oct  1 09:44:48 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1220: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 2.9 KiB/s wr, 37 op/s
Oct  1 09:44:48 np0005464214 systemd[1]: libpod-f93c1b37d7187d6add62a3fe2117f0534532c26033be617f91d81b1c199cd3a4.scope: Deactivated successfully.
Oct  1 09:44:48 np0005464214 systemd[1]: libpod-f93c1b37d7187d6add62a3fe2117f0534532c26033be617f91d81b1c199cd3a4.scope: Consumed 1.060s CPU time.
Oct  1 09:44:48 np0005464214 podman[278116]: 2025-10-01 13:44:48.753908494 +0000 UTC m=+1.224616084 container died f93c1b37d7187d6add62a3fe2117f0534532c26033be617f91d81b1c199cd3a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_pasteur, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 09:44:48 np0005464214 systemd[1]: var-lib-containers-storage-overlay-68350acb2449d54c01bd6bc20b38a57f4289cd28611f12e571dd1632c06f0c3b-merged.mount: Deactivated successfully.
Oct  1 09:44:48 np0005464214 podman[278116]: 2025-10-01 13:44:48.820327271 +0000 UTC m=+1.291034851 container remove f93c1b37d7187d6add62a3fe2117f0534532c26033be617f91d81b1c199cd3a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_pasteur, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 09:44:48 np0005464214 systemd[1]: libpod-conmon-f93c1b37d7187d6add62a3fe2117f0534532c26033be617f91d81b1c199cd3a4.scope: Deactivated successfully.
Oct  1 09:44:48 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  1 09:44:48 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:44:48 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  1 09:44:48 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:44:48 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev bd5b40e9-4de8-48f9-a0a9-b7a54c23bd78 does not exist
Oct  1 09:44:48 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev 9ad4c826-f52a-4a4e-90f5-8793d1f2a8f3 does not exist
Oct  1 09:44:49 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:44:49 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:44:50 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1221: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 2.5 KiB/s wr, 32 op/s
Oct  1 09:44:52 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 09:44:52 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1222: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:44:53 np0005464214 podman[278231]: 2025-10-01 13:44:53.556884123 +0000 UTC m=+0.095367592 container health_status a1dc94033b3cdaf0c1939938a446bc6911aa0e2bf0fa0c22c3e618390e8d96e1 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20250923, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd)
Oct  1 09:44:53 np0005464214 podman[278233]: 2025-10-01 13:44:53.582770451 +0000 UTC m=+0.108382803 container health_status dfa509645741d50db256c392f2c7e50ef8a1653f296eb853aefbe3d2ba4746c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20250923, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=36bccb96575468ec919301205d8daa2c, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct  1 09:44:53 np0005464214 podman[278232]: 2025-10-01 13:44:53.593180369 +0000 UTC m=+0.124420639 container health_status c13c85560319b246b9406aed1b6b3015b2c424d3ffff37d0301b6634f5976b0d (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20250923, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=iscsid, io.buildah.version=1.41.3)
Oct  1 09:44:53 np0005464214 podman[278230]: 2025-10-01 13:44:53.647795094 +0000 UTC m=+0.186120418 container health_status 583cde33fa111e3f81ff9a7adf51b7bee43cd123d54d60383b2d474b3b5066ad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20250923, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3)
Oct  1 09:44:54 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1223: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:44:55 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  1 09:44:55 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1564694016' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  1 09:44:55 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  1 09:44:55 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1564694016' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  1 09:44:56 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1224: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:44:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] _maybe_adjust
Oct  1 09:44:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:44:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  1 09:44:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:44:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 09:44:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:44:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 09:44:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:44:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 09:44:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:44:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Oct  1 09:44:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:44:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct  1 09:44:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:44:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 09:44:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:44:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  1 09:44:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:44:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  1 09:44:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:44:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 09:44:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:44:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  1 09:44:57 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 09:44:58 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1225: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:45:00 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1226: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:45:01 np0005464214 nova_compute[260022]: 2025-10-01 13:45:01.346 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 09:45:01 np0005464214 nova_compute[260022]: 2025-10-01 13:45:01.468 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 09:45:01 np0005464214 nova_compute[260022]: 2025-10-01 13:45:01.468 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 09:45:01 np0005464214 nova_compute[260022]: 2025-10-01 13:45:01.468 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 09:45:01 np0005464214 nova_compute[260022]: 2025-10-01 13:45:01.469 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  1 09:45:01 np0005464214 nova_compute[260022]: 2025-10-01 13:45:01.469 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 09:45:01 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  1 09:45:01 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3066229925' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  1 09:45:01 np0005464214 nova_compute[260022]: 2025-10-01 13:45:01.930 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 09:45:02 np0005464214 nova_compute[260022]: 2025-10-01 13:45:02.134 2 WARNING nova.virt.libvirt.driver [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  1 09:45:02 np0005464214 nova_compute[260022]: 2025-10-01 13:45:02.135 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5109MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  1 09:45:02 np0005464214 nova_compute[260022]: 2025-10-01 13:45:02.135 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 09:45:02 np0005464214 nova_compute[260022]: 2025-10-01 13:45:02.136 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 09:45:02 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 09:45:02 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1227: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:45:02 np0005464214 nova_compute[260022]: 2025-10-01 13:45:02.857 2 INFO nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Instance 6bc1aa4b-48ff-473e-afdb-d40e73f8c36c has allocations against this compute host but is not found in the database.#033[00m
Oct  1 09:45:02 np0005464214 nova_compute[260022]: 2025-10-01 13:45:02.858 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  1 09:45:02 np0005464214 nova_compute[260022]: 2025-10-01 13:45:02.858 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  1 09:45:02 np0005464214 nova_compute[260022]: 2025-10-01 13:45:02.892 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 09:45:03 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  1 09:45:03 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1155356671' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  1 09:45:03 np0005464214 nova_compute[260022]: 2025-10-01 13:45:03.292 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.400s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 09:45:03 np0005464214 nova_compute[260022]: 2025-10-01 13:45:03.298 2 DEBUG nova.compute.provider_tree [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Inventory has not changed in ProviderTree for provider: c1b9017d-7e6f-44ea-9ee2-bc19313d736f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  1 09:45:03 np0005464214 nova_compute[260022]: 2025-10-01 13:45:03.328 2 DEBUG nova.scheduler.client.report [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Inventory has not changed for provider c1b9017d-7e6f-44ea-9ee2-bc19313d736f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  1 09:45:03 np0005464214 nova_compute[260022]: 2025-10-01 13:45:03.330 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  1 09:45:03 np0005464214 nova_compute[260022]: 2025-10-01 13:45:03.330 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.194s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 09:45:04 np0005464214 nova_compute[260022]: 2025-10-01 13:45:04.330 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 09:45:04 np0005464214 nova_compute[260022]: 2025-10-01 13:45:04.340 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 09:45:04 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1228: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:45:05 np0005464214 nova_compute[260022]: 2025-10-01 13:45:05.345 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 09:45:05 np0005464214 nova_compute[260022]: 2025-10-01 13:45:05.345 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  1 09:45:06 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1229: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:45:07 np0005464214 nova_compute[260022]: 2025-10-01 13:45:07.345 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 09:45:07 np0005464214 nova_compute[260022]: 2025-10-01 13:45:07.345 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Oct  1 09:45:07 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 09:45:08 np0005464214 nova_compute[260022]: 2025-10-01 13:45:08.345 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 09:45:08 np0005464214 nova_compute[260022]: 2025-10-01 13:45:08.346 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 09:45:08 np0005464214 nova_compute[260022]: 2025-10-01 13:45:08.347 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Oct  1 09:45:08 np0005464214 nova_compute[260022]: 2025-10-01 13:45:08.373 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Oct  1 09:45:08 np0005464214 nova_compute[260022]: 2025-10-01 13:45:08.374 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 09:45:08 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1230: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:45:09 np0005464214 nova_compute[260022]: 2025-10-01 13:45:09.436 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 09:45:09 np0005464214 nova_compute[260022]: 2025-10-01 13:45:09.436 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  1 09:45:09 np0005464214 nova_compute[260022]: 2025-10-01 13:45:09.437 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  1 09:45:09 np0005464214 nova_compute[260022]: 2025-10-01 13:45:09.460 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  1 09:45:10 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1231: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:45:11 np0005464214 nova_compute[260022]: 2025-10-01 13:45:11.344 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 09:45:11 np0005464214 nova_compute[260022]: 2025-10-01 13:45:11.345 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 09:45:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:45:12.313 161890 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 09:45:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:45:12.314 161890 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 09:45:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:45:12.314 161890 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 09:45:12 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 09:45:12 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1232: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:45:13 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e149 do_prune osdmap full prune enabled
Oct  1 09:45:13 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e150 e150: 3 total, 3 up, 3 in
Oct  1 09:45:13 np0005464214 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e150: 3 total, 3 up, 3 in
Oct  1 09:45:14 np0005464214 nova_compute[260022]: 2025-10-01 13:45:14.346 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 09:45:14 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1234: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:45:16 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1235: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:45:17 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 09:45:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 09:45:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 09:45:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 09:45:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 09:45:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 09:45:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 09:45:18 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1236: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 1.6 KiB/s wr, 14 op/s
Oct  1 09:45:20 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1237: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 1.6 KiB/s wr, 14 op/s
Oct  1 09:45:22 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 09:45:22 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1238: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 1.6 KiB/s wr, 14 op/s
Oct  1 09:45:24 np0005464214 podman[278360]: 2025-10-01 13:45:24.554017895 +0000 UTC m=+0.084425057 container health_status dfa509645741d50db256c392f2c7e50ef8a1653f296eb853aefbe3d2ba4746c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20250923, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, config_id=ovn_metadata_agent)
Oct  1 09:45:24 np0005464214 podman[278358]: 2025-10-01 13:45:24.561682176 +0000 UTC m=+0.099381688 container health_status a1dc94033b3cdaf0c1939938a446bc6911aa0e2bf0fa0c22c3e618390e8d96e1 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20250923, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct  1 09:45:24 np0005464214 podman[278359]: 2025-10-01 13:45:24.579687965 +0000 UTC m=+0.111569533 container health_status c13c85560319b246b9406aed1b6b3015b2c424d3ffff37d0301b6634f5976b0d (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, org.label-schema.build-date=20250923, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 09:45:24 np0005464214 podman[278357]: 2025-10-01 13:45:24.600784371 +0000 UTC m=+0.145898237 container health_status 583cde33fa111e3f81ff9a7adf51b7bee43cd123d54d60383b2d474b3b5066ad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20250923, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct  1 09:45:24 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1239: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 9.7 KiB/s rd, 1.5 KiB/s wr, 13 op/s
Oct  1 09:45:26 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1240: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 8.7 KiB/s rd, 1.3 KiB/s wr, 12 op/s
Oct  1 09:45:27 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 09:45:28 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1241: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 8.7 KiB/s rd, 1.3 KiB/s wr, 12 op/s
Oct  1 09:45:30 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:45:30.149 161890 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=6, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'd2:60:33', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '86:05:a5:2a:6f:f1'}, ipsec=False) old=SB_Global(nb_cfg=5) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  1 09:45:30 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:45:30.151 161890 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  1 09:45:30 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1242: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:45:31 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e150 do_prune osdmap full prune enabled
Oct  1 09:45:31 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e151 e151: 3 total, 3 up, 3 in
Oct  1 09:45:31 np0005464214 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e151: 3 total, 3 up, 3 in
Oct  1 09:45:32 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 09:45:32 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1244: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 24 op/s
Oct  1 09:45:34 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1245: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 24 op/s
Oct  1 09:45:35 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:45:35.154 161890 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=7280030e-2ba6-406c-9fae-f8284a927c47, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '6'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  1 09:45:36 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1246: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 24 op/s
Oct  1 09:45:37 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 09:45:37 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e151 do_prune osdmap full prune enabled
Oct  1 09:45:37 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e152 e152: 3 total, 3 up, 3 in
Oct  1 09:45:37 np0005464214 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e152: 3 total, 3 up, 3 in
Oct  1 09:45:38 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1248: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 1.7 KiB/s wr, 31 op/s
Oct  1 09:45:40 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1249: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.4 KiB/s wr, 25 op/s
Oct  1 09:45:42 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 09:45:42 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1250: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:45:44 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1251: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:45:46 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1252: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:45:47 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 09:45:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 09:45:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 09:45:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 09:45:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 09:45:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 09:45:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 09:45:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] Optimize plan auto_2025-10-01_13:45:47
Oct  1 09:45:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  1 09:45:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] do_upmap
Oct  1 09:45:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] pools ['vms', '.mgr', 'default.rgw.meta', 'cephfs.cephfs.meta', 'backups', '.rgw.root', 'default.rgw.control', 'default.rgw.log', 'cephfs.cephfs.data', 'volumes', 'images']
Oct  1 09:45:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] prepared 0/10 changes
Oct  1 09:45:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  1 09:45:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  1 09:45:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  1 09:45:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  1 09:45:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  1 09:45:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  1 09:45:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  1 09:45:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  1 09:45:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  1 09:45:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  1 09:45:48 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1253: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:45:50 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  1 09:45:50 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  1 09:45:50 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  1 09:45:50 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  1 09:45:50 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  1 09:45:50 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:45:50 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev 75af579d-4b9f-4336-b90c-bb41eeb5f7d0 does not exist
Oct  1 09:45:50 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev 2d3e94e4-b707-40ba-a2f4-f708907d7e4a does not exist
Oct  1 09:45:50 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev 5c0ec63f-d12d-4e8f-b96b-ba088d175437 does not exist
Oct  1 09:45:50 np0005464214 ceph-mon[74802]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #57. Immutable memtables: 0.
Oct  1 09:45:50 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:45:50.027004) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  1 09:45:50 np0005464214 ceph-mon[74802]: rocksdb: [db/flush_job.cc:856] [default] [JOB 29] Flushing memtable with next log file: 57
Oct  1 09:45:50 np0005464214 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759326350027079, "job": 29, "event": "flush_started", "num_memtables": 1, "num_entries": 1139, "num_deletes": 252, "total_data_size": 1667958, "memory_usage": 1690480, "flush_reason": "Manual Compaction"}
Oct  1 09:45:50 np0005464214 ceph-mon[74802]: rocksdb: [db/flush_job.cc:885] [default] [JOB 29] Level-0 flush table #58: started
Oct  1 09:45:50 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  1 09:45:50 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  1 09:45:50 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  1 09:45:50 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  1 09:45:50 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  1 09:45:50 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  1 09:45:50 np0005464214 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759326350040761, "cf_name": "default", "job": 29, "event": "table_file_creation", "file_number": 58, "file_size": 1651378, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 24592, "largest_seqno": 25730, "table_properties": {"data_size": 1645729, "index_size": 3044, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1541, "raw_key_size": 11783, "raw_average_key_size": 19, "raw_value_size": 1634488, "raw_average_value_size": 2765, "num_data_blocks": 136, "num_entries": 591, "num_filter_entries": 591, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759326243, "oldest_key_time": 1759326243, "file_creation_time": 1759326350, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e1fb0bd7-c6bf-40f2-8bfb-ffd039289f42", "db_session_id": "NJZTWL88H5HSB4Q4NEC9", "orig_file_number": 58, "seqno_to_time_mapping": "N/A"}}
Oct  1 09:45:50 np0005464214 ceph-mon[74802]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 29] Flush lasted 13816 microseconds, and 8707 cpu microseconds.
Oct  1 09:45:50 np0005464214 ceph-mon[74802]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  1 09:45:50 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:45:50.040827) [db/flush_job.cc:967] [default] [JOB 29] Level-0 flush table #58: 1651378 bytes OK
Oct  1 09:45:50 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:45:50.040852) [db/memtable_list.cc:519] [default] Level-0 commit table #58 started
Oct  1 09:45:50 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:45:50.043594) [db/memtable_list.cc:722] [default] Level-0 commit table #58: memtable #1 done
Oct  1 09:45:50 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:45:50.043624) EVENT_LOG_v1 {"time_micros": 1759326350043614, "job": 29, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  1 09:45:50 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:45:50.043652) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  1 09:45:50 np0005464214 ceph-mon[74802]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 29] Try to delete WAL files size 1662709, prev total WAL file size 1662709, number of live WAL files 2.
Oct  1 09:45:50 np0005464214 ceph-mon[74802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000054.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  1 09:45:50 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:45:50.044948) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032303038' seq:72057594037927935, type:22 .. '7061786F730032323630' seq:0, type:0; will stop at (end)
Oct  1 09:45:50 np0005464214 ceph-mon[74802]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 30] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  1 09:45:50 np0005464214 ceph-mon[74802]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 29 Base level 0, inputs: [58(1612KB)], [56(7682KB)]
Oct  1 09:45:50 np0005464214 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759326350045005, "job": 30, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [58], "files_L6": [56], "score": -1, "input_data_size": 9518682, "oldest_snapshot_seqno": -1}
Oct  1 09:45:50 np0005464214 ceph-mon[74802]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 30] Generated table #59: 4698 keys, 7758107 bytes, temperature: kUnknown
Oct  1 09:45:50 np0005464214 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759326350083417, "cf_name": "default", "job": 30, "event": "table_file_creation", "file_number": 59, "file_size": 7758107, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7725754, "index_size": 19507, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11781, "raw_key_size": 117698, "raw_average_key_size": 25, "raw_value_size": 7639714, "raw_average_value_size": 1626, "num_data_blocks": 806, "num_entries": 4698, "num_filter_entries": 4698, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759324077, "oldest_key_time": 0, "file_creation_time": 1759326350, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e1fb0bd7-c6bf-40f2-8bfb-ffd039289f42", "db_session_id": "NJZTWL88H5HSB4Q4NEC9", "orig_file_number": 59, "seqno_to_time_mapping": "N/A"}}
Oct  1 09:45:50 np0005464214 ceph-mon[74802]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  1 09:45:50 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:45:50.083757) [db/compaction/compaction_job.cc:1663] [default] [JOB 30] Compacted 1@0 + 1@6 files to L6 => 7758107 bytes
Oct  1 09:45:50 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:45:50.086547) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 247.1 rd, 201.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.6, 7.5 +0.0 blob) out(7.4 +0.0 blob), read-write-amplify(10.5) write-amplify(4.7) OK, records in: 5217, records dropped: 519 output_compression: NoCompression
Oct  1 09:45:50 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:45:50.086573) EVENT_LOG_v1 {"time_micros": 1759326350086559, "job": 30, "event": "compaction_finished", "compaction_time_micros": 38529, "compaction_time_cpu_micros": 22900, "output_level": 6, "num_output_files": 1, "total_output_size": 7758107, "num_input_records": 5217, "num_output_records": 4698, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  1 09:45:50 np0005464214 ceph-mon[74802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000058.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  1 09:45:50 np0005464214 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759326350087353, "job": 30, "event": "table_file_deletion", "file_number": 58}
Oct  1 09:45:50 np0005464214 ceph-mon[74802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000056.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  1 09:45:50 np0005464214 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759326350090328, "job": 30, "event": "table_file_deletion", "file_number": 56}
Oct  1 09:45:50 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:45:50.044833) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 09:45:50 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:45:50.090494) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 09:45:50 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:45:50.090510) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 09:45:50 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:45:50.090513) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 09:45:50 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:45:50.090515) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 09:45:50 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:45:50.090518) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 09:45:50 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  1 09:45:50 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:45:50 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  1 09:45:50 np0005464214 podman[278709]: 2025-10-01 13:45:50.738878924 +0000 UTC m=+0.044660641 container create 45e86f146cf8e52263dc5fd0ff0ee756882d74e4b48ad529b456eda8a49b164e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_cori, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Oct  1 09:45:50 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1254: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:45:50 np0005464214 systemd[1]: Started libpod-conmon-45e86f146cf8e52263dc5fd0ff0ee756882d74e4b48ad529b456eda8a49b164e.scope.
Oct  1 09:45:50 np0005464214 podman[278709]: 2025-10-01 13:45:50.716982553 +0000 UTC m=+0.022764270 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 09:45:50 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:45:50 np0005464214 podman[278709]: 2025-10-01 13:45:50.850964853 +0000 UTC m=+0.156746590 container init 45e86f146cf8e52263dc5fd0ff0ee756882d74e4b48ad529b456eda8a49b164e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_cori, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 09:45:50 np0005464214 podman[278709]: 2025-10-01 13:45:50.860190144 +0000 UTC m=+0.165971841 container start 45e86f146cf8e52263dc5fd0ff0ee756882d74e4b48ad529b456eda8a49b164e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_cori, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Oct  1 09:45:50 np0005464214 podman[278709]: 2025-10-01 13:45:50.865835502 +0000 UTC m=+0.171617199 container attach 45e86f146cf8e52263dc5fd0ff0ee756882d74e4b48ad529b456eda8a49b164e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_cori, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 09:45:50 np0005464214 elegant_cori[278725]: 167 167
Oct  1 09:45:50 np0005464214 systemd[1]: libpod-45e86f146cf8e52263dc5fd0ff0ee756882d74e4b48ad529b456eda8a49b164e.scope: Deactivated successfully.
Oct  1 09:45:50 np0005464214 podman[278709]: 2025-10-01 13:45:50.86797046 +0000 UTC m=+0.173752147 container died 45e86f146cf8e52263dc5fd0ff0ee756882d74e4b48ad529b456eda8a49b164e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_cori, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True)
Oct  1 09:45:50 np0005464214 systemd[1]: var-lib-containers-storage-overlay-b1dd3f8dbace0cbc83d6959de45a5828d854df34c8928606a5c1bb2d5913678b-merged.mount: Deactivated successfully.
Oct  1 09:45:50 np0005464214 podman[278709]: 2025-10-01 13:45:50.911762862 +0000 UTC m=+0.217544549 container remove 45e86f146cf8e52263dc5fd0ff0ee756882d74e4b48ad529b456eda8a49b164e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_cori, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 09:45:50 np0005464214 systemd[1]: libpod-conmon-45e86f146cf8e52263dc5fd0ff0ee756882d74e4b48ad529b456eda8a49b164e.scope: Deactivated successfully.
Oct  1 09:45:51 np0005464214 podman[278750]: 2025-10-01 13:45:51.129890348 +0000 UTC m=+0.059073456 container create 28b6c124e72e86e427e2d6ea3dcf791dc6e8d08884928e5fd4cf3262cd50bf30 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_golick, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 09:45:51 np0005464214 systemd[1]: Started libpod-conmon-28b6c124e72e86e427e2d6ea3dcf791dc6e8d08884928e5fd4cf3262cd50bf30.scope.
Oct  1 09:45:51 np0005464214 podman[278750]: 2025-10-01 13:45:51.101991578 +0000 UTC m=+0.031174736 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 09:45:51 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:45:51 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db068c65411fac6fc7dca952c4caeddb3e7e0b5772ec02def4689235cf000f88/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 09:45:51 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db068c65411fac6fc7dca952c4caeddb3e7e0b5772ec02def4689235cf000f88/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 09:45:51 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db068c65411fac6fc7dca952c4caeddb3e7e0b5772ec02def4689235cf000f88/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 09:45:51 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db068c65411fac6fc7dca952c4caeddb3e7e0b5772ec02def4689235cf000f88/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 09:45:51 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db068c65411fac6fc7dca952c4caeddb3e7e0b5772ec02def4689235cf000f88/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  1 09:45:51 np0005464214 podman[278750]: 2025-10-01 13:45:51.240958086 +0000 UTC m=+0.170141214 container init 28b6c124e72e86e427e2d6ea3dcf791dc6e8d08884928e5fd4cf3262cd50bf30 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_golick, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Oct  1 09:45:51 np0005464214 podman[278750]: 2025-10-01 13:45:51.257357183 +0000 UTC m=+0.186540281 container start 28b6c124e72e86e427e2d6ea3dcf791dc6e8d08884928e5fd4cf3262cd50bf30 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_golick, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2)
Oct  1 09:45:51 np0005464214 podman[278750]: 2025-10-01 13:45:51.261773592 +0000 UTC m=+0.190956700 container attach 28b6c124e72e86e427e2d6ea3dcf791dc6e8d08884928e5fd4cf3262cd50bf30 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_golick, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 09:45:52 np0005464214 kind_golick[278766]: --> passed data devices: 0 physical, 3 LVM
Oct  1 09:45:52 np0005464214 kind_golick[278766]: --> relative data size: 1.0
Oct  1 09:45:52 np0005464214 kind_golick[278766]: --> All data devices are unavailable
Oct  1 09:45:52 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 09:45:52 np0005464214 systemd[1]: libpod-28b6c124e72e86e427e2d6ea3dcf791dc6e8d08884928e5fd4cf3262cd50bf30.scope: Deactivated successfully.
Oct  1 09:45:52 np0005464214 podman[278750]: 2025-10-01 13:45:52.440486126 +0000 UTC m=+1.369669214 container died 28b6c124e72e86e427e2d6ea3dcf791dc6e8d08884928e5fd4cf3262cd50bf30 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_golick, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Oct  1 09:45:52 np0005464214 systemd[1]: libpod-28b6c124e72e86e427e2d6ea3dcf791dc6e8d08884928e5fd4cf3262cd50bf30.scope: Consumed 1.127s CPU time.
Oct  1 09:45:52 np0005464214 systemd[1]: var-lib-containers-storage-overlay-db068c65411fac6fc7dca952c4caeddb3e7e0b5772ec02def4689235cf000f88-merged.mount: Deactivated successfully.
Oct  1 09:45:52 np0005464214 podman[278750]: 2025-10-01 13:45:52.517041564 +0000 UTC m=+1.446224632 container remove 28b6c124e72e86e427e2d6ea3dcf791dc6e8d08884928e5fd4cf3262cd50bf30 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_golick, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Oct  1 09:45:52 np0005464214 systemd[1]: libpod-conmon-28b6c124e72e86e427e2d6ea3dcf791dc6e8d08884928e5fd4cf3262cd50bf30.scope: Deactivated successfully.
Oct  1 09:45:52 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1255: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:45:53 np0005464214 podman[278949]: 2025-10-01 13:45:53.309670319 +0000 UTC m=+0.051200088 container create 677fcc9a16412a3fc1a02061c59baad9c388069f0c12ca0bc2197d7abe0f5368 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_fermat, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 09:45:53 np0005464214 systemd[1]: Started libpod-conmon-677fcc9a16412a3fc1a02061c59baad9c388069f0c12ca0bc2197d7abe0f5368.scope.
Oct  1 09:45:53 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:45:53 np0005464214 podman[278949]: 2025-10-01 13:45:53.285992721 +0000 UTC m=+0.027522570 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 09:45:53 np0005464214 podman[278949]: 2025-10-01 13:45:53.39365723 +0000 UTC m=+0.135187009 container init 677fcc9a16412a3fc1a02061c59baad9c388069f0c12ca0bc2197d7abe0f5368 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_fermat, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 09:45:53 np0005464214 podman[278949]: 2025-10-01 13:45:53.4041115 +0000 UTC m=+0.145641299 container start 677fcc9a16412a3fc1a02061c59baad9c388069f0c12ca0bc2197d7abe0f5368 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_fermat, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  1 09:45:53 np0005464214 podman[278949]: 2025-10-01 13:45:53.407841398 +0000 UTC m=+0.149371167 container attach 677fcc9a16412a3fc1a02061c59baad9c388069f0c12ca0bc2197d7abe0f5368 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_fermat, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 09:45:53 np0005464214 relaxed_fermat[278965]: 167 167
Oct  1 09:45:53 np0005464214 systemd[1]: libpod-677fcc9a16412a3fc1a02061c59baad9c388069f0c12ca0bc2197d7abe0f5368.scope: Deactivated successfully.
Oct  1 09:45:53 np0005464214 podman[278949]: 2025-10-01 13:45:53.412908898 +0000 UTC m=+0.154438667 container died 677fcc9a16412a3fc1a02061c59baad9c388069f0c12ca0bc2197d7abe0f5368 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_fermat, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 09:45:53 np0005464214 systemd[1]: var-lib-containers-storage-overlay-840851ab783916b7a621b7eee10a0acd5de93b94a806bec5d0daa61f6af0dd7f-merged.mount: Deactivated successfully.
Oct  1 09:45:53 np0005464214 podman[278949]: 2025-10-01 13:45:53.456858555 +0000 UTC m=+0.198388324 container remove 677fcc9a16412a3fc1a02061c59baad9c388069f0c12ca0bc2197d7abe0f5368 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_fermat, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 09:45:53 np0005464214 systemd[1]: libpod-conmon-677fcc9a16412a3fc1a02061c59baad9c388069f0c12ca0bc2197d7abe0f5368.scope: Deactivated successfully.
Oct  1 09:45:53 np0005464214 podman[278989]: 2025-10-01 13:45:53.676127698 +0000 UTC m=+0.056243016 container create 6b2ec638db89e1fb5b0c3a0c67117de8c29990dd580f2d311e4f4e34c4f22f46 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_boyd, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 09:45:53 np0005464214 systemd[1]: Started libpod-conmon-6b2ec638db89e1fb5b0c3a0c67117de8c29990dd580f2d311e4f4e34c4f22f46.scope.
Oct  1 09:45:53 np0005464214 podman[278989]: 2025-10-01 13:45:53.650500359 +0000 UTC m=+0.030615757 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 09:45:53 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:45:53 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff37bd5d8b59c527b70d049aef69df93dd72d589877a95af01e4b92fd93f67b1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 09:45:53 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff37bd5d8b59c527b70d049aef69df93dd72d589877a95af01e4b92fd93f67b1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 09:45:53 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff37bd5d8b59c527b70d049aef69df93dd72d589877a95af01e4b92fd93f67b1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 09:45:53 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff37bd5d8b59c527b70d049aef69df93dd72d589877a95af01e4b92fd93f67b1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 09:45:53 np0005464214 podman[278989]: 2025-10-01 13:45:53.766884363 +0000 UTC m=+0.146999681 container init 6b2ec638db89e1fb5b0c3a0c67117de8c29990dd580f2d311e4f4e34c4f22f46 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_boyd, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 09:45:53 np0005464214 podman[278989]: 2025-10-01 13:45:53.778389626 +0000 UTC m=+0.158504924 container start 6b2ec638db89e1fb5b0c3a0c67117de8c29990dd580f2d311e4f4e34c4f22f46 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_boyd, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 09:45:53 np0005464214 podman[278989]: 2025-10-01 13:45:53.782937971 +0000 UTC m=+0.163053289 container attach 6b2ec638db89e1fb5b0c3a0c67117de8c29990dd580f2d311e4f4e34c4f22f46 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_boyd, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct  1 09:45:54 np0005464214 cranky_boyd[279005]: {
Oct  1 09:45:54 np0005464214 cranky_boyd[279005]:    "0": [
Oct  1 09:45:54 np0005464214 cranky_boyd[279005]:        {
Oct  1 09:45:54 np0005464214 cranky_boyd[279005]:            "devices": [
Oct  1 09:45:54 np0005464214 cranky_boyd[279005]:                "/dev/loop3"
Oct  1 09:45:54 np0005464214 cranky_boyd[279005]:            ],
Oct  1 09:45:54 np0005464214 cranky_boyd[279005]:            "lv_name": "ceph_lv0",
Oct  1 09:45:54 np0005464214 cranky_boyd[279005]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  1 09:45:54 np0005464214 cranky_boyd[279005]:            "lv_size": "21470642176",
Oct  1 09:45:54 np0005464214 cranky_boyd[279005]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 09:45:54 np0005464214 cranky_boyd[279005]:            "lv_uuid": "wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS",
Oct  1 09:45:54 np0005464214 cranky_boyd[279005]:            "name": "ceph_lv0",
Oct  1 09:45:54 np0005464214 cranky_boyd[279005]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  1 09:45:54 np0005464214 cranky_boyd[279005]:            "tags": {
Oct  1 09:45:54 np0005464214 cranky_boyd[279005]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  1 09:45:54 np0005464214 cranky_boyd[279005]:                "ceph.block_uuid": "wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS",
Oct  1 09:45:54 np0005464214 cranky_boyd[279005]:                "ceph.cephx_lockbox_secret": "",
Oct  1 09:45:54 np0005464214 cranky_boyd[279005]:                "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 09:45:54 np0005464214 cranky_boyd[279005]:                "ceph.cluster_name": "ceph",
Oct  1 09:45:54 np0005464214 cranky_boyd[279005]:                "ceph.crush_device_class": "",
Oct  1 09:45:54 np0005464214 cranky_boyd[279005]:                "ceph.encrypted": "0",
Oct  1 09:45:54 np0005464214 cranky_boyd[279005]:                "ceph.osd_fsid": "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982",
Oct  1 09:45:54 np0005464214 cranky_boyd[279005]:                "ceph.osd_id": "0",
Oct  1 09:45:54 np0005464214 cranky_boyd[279005]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 09:45:54 np0005464214 cranky_boyd[279005]:                "ceph.type": "block",
Oct  1 09:45:54 np0005464214 cranky_boyd[279005]:                "ceph.vdo": "0"
Oct  1 09:45:54 np0005464214 cranky_boyd[279005]:            },
Oct  1 09:45:54 np0005464214 cranky_boyd[279005]:            "type": "block",
Oct  1 09:45:54 np0005464214 cranky_boyd[279005]:            "vg_name": "ceph_vg0"
Oct  1 09:45:54 np0005464214 cranky_boyd[279005]:        }
Oct  1 09:45:54 np0005464214 cranky_boyd[279005]:    ],
Oct  1 09:45:54 np0005464214 cranky_boyd[279005]:    "1": [
Oct  1 09:45:54 np0005464214 cranky_boyd[279005]:        {
Oct  1 09:45:54 np0005464214 cranky_boyd[279005]:            "devices": [
Oct  1 09:45:54 np0005464214 cranky_boyd[279005]:                "/dev/loop4"
Oct  1 09:45:54 np0005464214 cranky_boyd[279005]:            ],
Oct  1 09:45:54 np0005464214 cranky_boyd[279005]:            "lv_name": "ceph_lv1",
Oct  1 09:45:54 np0005464214 cranky_boyd[279005]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  1 09:45:54 np0005464214 cranky_boyd[279005]:            "lv_size": "21470642176",
Oct  1 09:45:54 np0005464214 cranky_boyd[279005]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f5852bc7-e830-489a-b8a9-42dfbbe71426,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 09:45:54 np0005464214 cranky_boyd[279005]:            "lv_uuid": "BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY",
Oct  1 09:45:54 np0005464214 cranky_boyd[279005]:            "name": "ceph_lv1",
Oct  1 09:45:54 np0005464214 cranky_boyd[279005]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  1 09:45:54 np0005464214 cranky_boyd[279005]:            "tags": {
Oct  1 09:45:54 np0005464214 cranky_boyd[279005]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  1 09:45:54 np0005464214 cranky_boyd[279005]:                "ceph.block_uuid": "BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY",
Oct  1 09:45:54 np0005464214 cranky_boyd[279005]:                "ceph.cephx_lockbox_secret": "",
Oct  1 09:45:54 np0005464214 cranky_boyd[279005]:                "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 09:45:54 np0005464214 cranky_boyd[279005]:                "ceph.cluster_name": "ceph",
Oct  1 09:45:54 np0005464214 cranky_boyd[279005]:                "ceph.crush_device_class": "",
Oct  1 09:45:54 np0005464214 cranky_boyd[279005]:                "ceph.encrypted": "0",
Oct  1 09:45:54 np0005464214 cranky_boyd[279005]:                "ceph.osd_fsid": "f5852bc7-e830-489a-b8a9-42dfbbe71426",
Oct  1 09:45:54 np0005464214 cranky_boyd[279005]:                "ceph.osd_id": "1",
Oct  1 09:45:54 np0005464214 cranky_boyd[279005]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 09:45:54 np0005464214 cranky_boyd[279005]:                "ceph.type": "block",
Oct  1 09:45:54 np0005464214 cranky_boyd[279005]:                "ceph.vdo": "0"
Oct  1 09:45:54 np0005464214 cranky_boyd[279005]:            },
Oct  1 09:45:54 np0005464214 cranky_boyd[279005]:            "type": "block",
Oct  1 09:45:54 np0005464214 cranky_boyd[279005]:            "vg_name": "ceph_vg1"
Oct  1 09:45:54 np0005464214 cranky_boyd[279005]:        }
Oct  1 09:45:54 np0005464214 cranky_boyd[279005]:    ],
Oct  1 09:45:54 np0005464214 cranky_boyd[279005]:    "2": [
Oct  1 09:45:54 np0005464214 cranky_boyd[279005]:        {
Oct  1 09:45:54 np0005464214 cranky_boyd[279005]:            "devices": [
Oct  1 09:45:54 np0005464214 cranky_boyd[279005]:                "/dev/loop5"
Oct  1 09:45:54 np0005464214 cranky_boyd[279005]:            ],
Oct  1 09:45:54 np0005464214 cranky_boyd[279005]:            "lv_name": "ceph_lv2",
Oct  1 09:45:54 np0005464214 cranky_boyd[279005]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  1 09:45:54 np0005464214 cranky_boyd[279005]:            "lv_size": "21470642176",
Oct  1 09:45:54 np0005464214 cranky_boyd[279005]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c4c937e2-a8a8-47c3-af37-fdedb6fff1f9,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 09:45:54 np0005464214 cranky_boyd[279005]:            "lv_uuid": "mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK",
Oct  1 09:45:54 np0005464214 cranky_boyd[279005]:            "name": "ceph_lv2",
Oct  1 09:45:54 np0005464214 cranky_boyd[279005]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  1 09:45:54 np0005464214 cranky_boyd[279005]:            "tags": {
Oct  1 09:45:54 np0005464214 cranky_boyd[279005]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  1 09:45:54 np0005464214 cranky_boyd[279005]:                "ceph.block_uuid": "mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK",
Oct  1 09:45:54 np0005464214 cranky_boyd[279005]:                "ceph.cephx_lockbox_secret": "",
Oct  1 09:45:54 np0005464214 cranky_boyd[279005]:                "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 09:45:54 np0005464214 cranky_boyd[279005]:                "ceph.cluster_name": "ceph",
Oct  1 09:45:54 np0005464214 cranky_boyd[279005]:                "ceph.crush_device_class": "",
Oct  1 09:45:54 np0005464214 cranky_boyd[279005]:                "ceph.encrypted": "0",
Oct  1 09:45:54 np0005464214 cranky_boyd[279005]:                "ceph.osd_fsid": "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9",
Oct  1 09:45:54 np0005464214 cranky_boyd[279005]:                "ceph.osd_id": "2",
Oct  1 09:45:54 np0005464214 cranky_boyd[279005]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 09:45:54 np0005464214 cranky_boyd[279005]:                "ceph.type": "block",
Oct  1 09:45:54 np0005464214 cranky_boyd[279005]:                "ceph.vdo": "0"
Oct  1 09:45:54 np0005464214 cranky_boyd[279005]:            },
Oct  1 09:45:54 np0005464214 cranky_boyd[279005]:            "type": "block",
Oct  1 09:45:54 np0005464214 cranky_boyd[279005]:            "vg_name": "ceph_vg2"
Oct  1 09:45:54 np0005464214 cranky_boyd[279005]:        }
Oct  1 09:45:54 np0005464214 cranky_boyd[279005]:    ]
Oct  1 09:45:54 np0005464214 cranky_boyd[279005]: }
Oct  1 09:45:54 np0005464214 podman[278989]: 2025-10-01 13:45:54.562096469 +0000 UTC m=+0.942211797 container died 6b2ec638db89e1fb5b0c3a0c67117de8c29990dd580f2d311e4f4e34c4f22f46 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_boyd, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct  1 09:45:54 np0005464214 systemd[1]: libpod-6b2ec638db89e1fb5b0c3a0c67117de8c29990dd580f2d311e4f4e34c4f22f46.scope: Deactivated successfully.
Oct  1 09:45:54 np0005464214 systemd[1]: var-lib-containers-storage-overlay-ff37bd5d8b59c527b70d049aef69df93dd72d589877a95af01e4b92fd93f67b1-merged.mount: Deactivated successfully.
Oct  1 09:45:54 np0005464214 podman[278989]: 2025-10-01 13:45:54.636973853 +0000 UTC m=+1.017089151 container remove 6b2ec638db89e1fb5b0c3a0c67117de8c29990dd580f2d311e4f4e34c4f22f46 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_boyd, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct  1 09:45:54 np0005464214 systemd[1]: libpod-conmon-6b2ec638db89e1fb5b0c3a0c67117de8c29990dd580f2d311e4f4e34c4f22f46.scope: Deactivated successfully.
Oct  1 09:45:54 np0005464214 podman[279025]: 2025-10-01 13:45:54.711386383 +0000 UTC m=+0.079661247 container health_status c13c85560319b246b9406aed1b6b3015b2c424d3ffff37d0301b6634f5976b0d (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=36bccb96575468ec919301205d8daa2c, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_id=iscsid, org.label-schema.build-date=20250923, org.label-schema.schema-version=1.0)
Oct  1 09:45:54 np0005464214 podman[279016]: 2025-10-01 13:45:54.728026908 +0000 UTC m=+0.122326842 container health_status dfa509645741d50db256c392f2c7e50ef8a1653f296eb853aefbe3d2ba4746c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250923, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 09:45:54 np0005464214 podman[279023]: 2025-10-01 13:45:54.728051009 +0000 UTC m=+0.123356566 container health_status a1dc94033b3cdaf0c1939938a446bc6911aa0e2bf0fa0c22c3e618390e8d96e1 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20250923, tcib_build_tag=36bccb96575468ec919301205d8daa2c, container_name=multipathd, org.label-schema.vendor=CentOS, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Oct  1 09:45:54 np0005464214 podman[279039]: 2025-10-01 13:45:54.768236698 +0000 UTC m=+0.132381911 container health_status 583cde33fa111e3f81ff9a7adf51b7bee43cd123d54d60383b2d474b3b5066ad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_id=ovn_controller, org.label-schema.build-date=20250923, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller)
Oct  1 09:45:54 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1256: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:45:55 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  1 09:45:55 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4113137374' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  1 09:45:55 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  1 09:45:55 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4113137374' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  1 09:45:55 np0005464214 podman[279239]: 2025-10-01 13:45:55.416010069 +0000 UTC m=+0.054817771 container create 11ea5a4053ab18860805a7f1acf3c9f187bcaf69764f22dfc12c3b33eb69af68 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_wing, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Oct  1 09:45:55 np0005464214 systemd[1]: Started libpod-conmon-11ea5a4053ab18860805a7f1acf3c9f187bcaf69764f22dfc12c3b33eb69af68.scope.
Oct  1 09:45:55 np0005464214 podman[279239]: 2025-10-01 13:45:55.393639403 +0000 UTC m=+0.032447135 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 09:45:55 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:45:55 np0005464214 podman[279239]: 2025-10-01 13:45:55.523944317 +0000 UTC m=+0.162752029 container init 11ea5a4053ab18860805a7f1acf3c9f187bcaf69764f22dfc12c3b33eb69af68 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_wing, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 09:45:55 np0005464214 podman[279239]: 2025-10-01 13:45:55.532648062 +0000 UTC m=+0.171455784 container start 11ea5a4053ab18860805a7f1acf3c9f187bcaf69764f22dfc12c3b33eb69af68 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_wing, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Oct  1 09:45:55 np0005464214 podman[279239]: 2025-10-01 13:45:55.537178275 +0000 UTC m=+0.175985967 container attach 11ea5a4053ab18860805a7f1acf3c9f187bcaf69764f22dfc12c3b33eb69af68 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_wing, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 09:45:55 np0005464214 gallant_wing[279255]: 167 167
Oct  1 09:45:55 np0005464214 systemd[1]: libpod-11ea5a4053ab18860805a7f1acf3c9f187bcaf69764f22dfc12c3b33eb69af68.scope: Deactivated successfully.
Oct  1 09:45:55 np0005464214 podman[279239]: 2025-10-01 13:45:55.543614138 +0000 UTC m=+0.182421840 container died 11ea5a4053ab18860805a7f1acf3c9f187bcaf69764f22dfc12c3b33eb69af68 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_wing, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 09:45:55 np0005464214 systemd[1]: var-lib-containers-storage-overlay-845b779a0aae1d8278706b21de4e9d50cde171f1b0dbce963a096f5c0aa0c99d-merged.mount: Deactivated successfully.
Oct  1 09:45:55 np0005464214 podman[279239]: 2025-10-01 13:45:55.595910229 +0000 UTC m=+0.234717941 container remove 11ea5a4053ab18860805a7f1acf3c9f187bcaf69764f22dfc12c3b33eb69af68 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_wing, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct  1 09:45:55 np0005464214 systemd[1]: libpod-conmon-11ea5a4053ab18860805a7f1acf3c9f187bcaf69764f22dfc12c3b33eb69af68.scope: Deactivated successfully.
Oct  1 09:45:55 np0005464214 podman[279278]: 2025-10-01 13:45:55.838069384 +0000 UTC m=+0.072619294 container create 5348d5d946949bd6f70f16c3bf88140f99fa25956da5cde94b34711db5d09646 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_merkle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Oct  1 09:45:55 np0005464214 systemd[1]: Started libpod-conmon-5348d5d946949bd6f70f16c3bf88140f99fa25956da5cde94b34711db5d09646.scope.
Oct  1 09:45:55 np0005464214 podman[279278]: 2025-10-01 13:45:55.807876491 +0000 UTC m=+0.042426471 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 09:45:55 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:45:55 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e0dc5403736ffe499903a5cf6d16dfb930e1c59dcb59855dabe0ab55f99ba1ef/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 09:45:55 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e0dc5403736ffe499903a5cf6d16dfb930e1c59dcb59855dabe0ab55f99ba1ef/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 09:45:55 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e0dc5403736ffe499903a5cf6d16dfb930e1c59dcb59855dabe0ab55f99ba1ef/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 09:45:55 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e0dc5403736ffe499903a5cf6d16dfb930e1c59dcb59855dabe0ab55f99ba1ef/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 09:45:55 np0005464214 podman[279278]: 2025-10-01 13:45:55.943459142 +0000 UTC m=+0.178009022 container init 5348d5d946949bd6f70f16c3bf88140f99fa25956da5cde94b34711db5d09646 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_merkle, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 09:45:55 np0005464214 podman[279278]: 2025-10-01 13:45:55.956661248 +0000 UTC m=+0.191211138 container start 5348d5d946949bd6f70f16c3bf88140f99fa25956da5cde94b34711db5d09646 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_merkle, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 09:45:55 np0005464214 podman[279278]: 2025-10-01 13:45:55.96080743 +0000 UTC m=+0.195357370 container attach 5348d5d946949bd6f70f16c3bf88140f99fa25956da5cde94b34711db5d09646 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_merkle, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 09:45:56 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1257: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:45:57 np0005464214 pedantic_merkle[279295]: {
Oct  1 09:45:57 np0005464214 pedantic_merkle[279295]:    "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982": {
Oct  1 09:45:57 np0005464214 pedantic_merkle[279295]:        "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 09:45:57 np0005464214 pedantic_merkle[279295]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  1 09:45:57 np0005464214 pedantic_merkle[279295]:        "osd_id": 0,
Oct  1 09:45:57 np0005464214 pedantic_merkle[279295]:        "osd_uuid": "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982",
Oct  1 09:45:57 np0005464214 pedantic_merkle[279295]:        "type": "bluestore"
Oct  1 09:45:57 np0005464214 pedantic_merkle[279295]:    },
Oct  1 09:45:57 np0005464214 pedantic_merkle[279295]:    "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9": {
Oct  1 09:45:57 np0005464214 pedantic_merkle[279295]:        "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 09:45:57 np0005464214 pedantic_merkle[279295]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  1 09:45:57 np0005464214 pedantic_merkle[279295]:        "osd_id": 2,
Oct  1 09:45:57 np0005464214 pedantic_merkle[279295]:        "osd_uuid": "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9",
Oct  1 09:45:57 np0005464214 pedantic_merkle[279295]:        "type": "bluestore"
Oct  1 09:45:57 np0005464214 pedantic_merkle[279295]:    },
Oct  1 09:45:57 np0005464214 pedantic_merkle[279295]:    "f5852bc7-e830-489a-b8a9-42dfbbe71426": {
Oct  1 09:45:57 np0005464214 pedantic_merkle[279295]:        "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 09:45:57 np0005464214 pedantic_merkle[279295]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  1 09:45:57 np0005464214 pedantic_merkle[279295]:        "osd_id": 1,
Oct  1 09:45:57 np0005464214 pedantic_merkle[279295]:        "osd_uuid": "f5852bc7-e830-489a-b8a9-42dfbbe71426",
Oct  1 09:45:57 np0005464214 pedantic_merkle[279295]:        "type": "bluestore"
Oct  1 09:45:57 np0005464214 pedantic_merkle[279295]:    }
Oct  1 09:45:57 np0005464214 pedantic_merkle[279295]: }
Oct  1 09:45:57 np0005464214 systemd[1]: libpod-5348d5d946949bd6f70f16c3bf88140f99fa25956da5cde94b34711db5d09646.scope: Deactivated successfully.
Oct  1 09:45:57 np0005464214 podman[279278]: 2025-10-01 13:45:57.046717894 +0000 UTC m=+1.281267794 container died 5348d5d946949bd6f70f16c3bf88140f99fa25956da5cde94b34711db5d09646 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_merkle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 09:45:57 np0005464214 systemd[1]: libpod-5348d5d946949bd6f70f16c3bf88140f99fa25956da5cde94b34711db5d09646.scope: Consumed 1.096s CPU time.
Oct  1 09:45:57 np0005464214 systemd[1]: var-lib-containers-storage-overlay-e0dc5403736ffe499903a5cf6d16dfb930e1c59dcb59855dabe0ab55f99ba1ef-merged.mount: Deactivated successfully.
Oct  1 09:45:57 np0005464214 podman[279278]: 2025-10-01 13:45:57.103489235 +0000 UTC m=+1.338039115 container remove 5348d5d946949bd6f70f16c3bf88140f99fa25956da5cde94b34711db5d09646 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_merkle, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 09:45:57 np0005464214 systemd[1]: libpod-conmon-5348d5d946949bd6f70f16c3bf88140f99fa25956da5cde94b34711db5d09646.scope: Deactivated successfully.
Oct  1 09:45:57 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  1 09:45:57 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:45:57 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  1 09:45:57 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:45:57 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev b453c2e3-1064-48e8-8d05-46ceff8f3127 does not exist
Oct  1 09:45:57 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev 9ae40c87-7e53-4852-aff3-653038d07db2 does not exist
Oct  1 09:45:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] _maybe_adjust
Oct  1 09:45:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:45:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  1 09:45:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:45:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 09:45:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:45:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 09:45:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:45:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 09:45:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:45:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Oct  1 09:45:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:45:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct  1 09:45:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:45:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 09:45:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:45:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  1 09:45:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:45:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  1 09:45:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:45:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 09:45:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:45:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  1 09:45:57 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 09:45:58 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:45:58 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:45:58 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1258: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:46:00 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1259: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:46:02 np0005464214 nova_compute[260022]: 2025-10-01 13:46:02.344 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 09:46:02 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 09:46:02 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1260: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:46:03 np0005464214 nova_compute[260022]: 2025-10-01 13:46:03.344 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 09:46:03 np0005464214 nova_compute[260022]: 2025-10-01 13:46:03.465 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 09:46:03 np0005464214 nova_compute[260022]: 2025-10-01 13:46:03.465 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 09:46:03 np0005464214 nova_compute[260022]: 2025-10-01 13:46:03.466 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 09:46:03 np0005464214 nova_compute[260022]: 2025-10-01 13:46:03.466 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  1 09:46:03 np0005464214 nova_compute[260022]: 2025-10-01 13:46:03.466 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 09:46:03 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  1 09:46:03 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1268064282' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  1 09:46:03 np0005464214 nova_compute[260022]: 2025-10-01 13:46:03.893 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.426s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 09:46:04 np0005464214 nova_compute[260022]: 2025-10-01 13:46:04.053 2 WARNING nova.virt.libvirt.driver [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  1 09:46:04 np0005464214 nova_compute[260022]: 2025-10-01 13:46:04.054 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5129MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  1 09:46:04 np0005464214 nova_compute[260022]: 2025-10-01 13:46:04.054 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 09:46:04 np0005464214 nova_compute[260022]: 2025-10-01 13:46:04.054 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 09:46:04 np0005464214 nova_compute[260022]: 2025-10-01 13:46:04.398 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  1 09:46:04 np0005464214 nova_compute[260022]: 2025-10-01 13:46:04.398 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  1 09:46:04 np0005464214 nova_compute[260022]: 2025-10-01 13:46:04.413 2 DEBUG nova.scheduler.client.report [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Refreshing inventories for resource provider c1b9017d-7e6f-44ea-9ee2-bc19313d736f _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Oct  1 09:46:04 np0005464214 nova_compute[260022]: 2025-10-01 13:46:04.478 2 DEBUG nova.scheduler.client.report [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Updating ProviderTree inventory for provider c1b9017d-7e6f-44ea-9ee2-bc19313d736f from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Oct  1 09:46:04 np0005464214 nova_compute[260022]: 2025-10-01 13:46:04.478 2 DEBUG nova.compute.provider_tree [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Updating inventory in ProviderTree for provider c1b9017d-7e6f-44ea-9ee2-bc19313d736f with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Oct  1 09:46:04 np0005464214 nova_compute[260022]: 2025-10-01 13:46:04.497 2 DEBUG nova.scheduler.client.report [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Refreshing aggregate associations for resource provider c1b9017d-7e6f-44ea-9ee2-bc19313d736f, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Oct  1 09:46:04 np0005464214 nova_compute[260022]: 2025-10-01 13:46:04.520 2 DEBUG nova.scheduler.client.report [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Refreshing trait associations for resource provider c1b9017d-7e6f-44ea-9ee2-bc19313d736f, traits: HW_CPU_X86_SSE42,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_BMI,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_SSE2,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_BMI2,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_RESCUE_BFV,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NODE,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_F16C,HW_CPU_X86_MMX,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_SSE41,COMPUTE_DEVICE_TAGGING,COMPUTE_VOLUME_EXTEND,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_AVX,HW_CPU_X86_ABM,HW_CPU_X86_CLMUL,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_SVM,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_SECURITY_TPM_2_0,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_STORAGE_BUS_FDC,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_AESNI,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_ACCELERATORS,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_AMD_SVM,HW_CPU_X86_AVX2,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_FMA3,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_SSE,HW_CPU_X86_SHA,HW_CPU_X86_SSSE3,HW_CPU_X86_SSE4A _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Oct  1 09:46:04 np0005464214 nova_compute[260022]: 2025-10-01 13:46:04.535 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 09:46:04 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1261: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:46:04 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  1 09:46:04 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2652188545' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  1 09:46:05 np0005464214 nova_compute[260022]: 2025-10-01 13:46:05.000 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.466s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 09:46:05 np0005464214 nova_compute[260022]: 2025-10-01 13:46:05.006 2 DEBUG nova.compute.provider_tree [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Inventory has not changed in ProviderTree for provider: c1b9017d-7e6f-44ea-9ee2-bc19313d736f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  1 09:46:05 np0005464214 nova_compute[260022]: 2025-10-01 13:46:05.178 2 DEBUG nova.scheduler.client.report [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Inventory has not changed for provider c1b9017d-7e6f-44ea-9ee2-bc19313d736f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  1 09:46:05 np0005464214 nova_compute[260022]: 2025-10-01 13:46:05.182 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  1 09:46:05 np0005464214 nova_compute[260022]: 2025-10-01 13:46:05.182 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.128s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 09:46:06 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1262: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:46:07 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 09:46:08 np0005464214 nova_compute[260022]: 2025-10-01 13:46:08.178 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 09:46:08 np0005464214 nova_compute[260022]: 2025-10-01 13:46:08.179 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 09:46:08 np0005464214 nova_compute[260022]: 2025-10-01 13:46:08.179 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  1 09:46:08 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1263: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:46:09 np0005464214 nova_compute[260022]: 2025-10-01 13:46:09.345 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 09:46:09 np0005464214 nova_compute[260022]: 2025-10-01 13:46:09.346 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  1 09:46:09 np0005464214 nova_compute[260022]: 2025-10-01 13:46:09.346 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  1 09:46:09 np0005464214 nova_compute[260022]: 2025-10-01 13:46:09.396 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  1 09:46:09 np0005464214 nova_compute[260022]: 2025-10-01 13:46:09.396 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 09:46:10 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1264: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:46:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:46:12.314 161890 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 09:46:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:46:12.315 161890 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 09:46:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:46:12.315 161890 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 09:46:12 np0005464214 nova_compute[260022]: 2025-10-01 13:46:12.345 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 09:46:12 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 09:46:12 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1265: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:46:13 np0005464214 nova_compute[260022]: 2025-10-01 13:46:13.345 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 09:46:14 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1266: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:46:15 np0005464214 nova_compute[260022]: 2025-10-01 13:46:15.345 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 09:46:16 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1267: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:46:17 np0005464214 nova_compute[260022]: 2025-10-01 13:46:17.341 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 09:46:17 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 09:46:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 09:46:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 09:46:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 09:46:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 09:46:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 09:46:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 09:46:18 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1268: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:46:20 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1269: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:46:22 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 09:46:22 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1270: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:46:24 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1271: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:46:25 np0005464214 podman[279438]: 2025-10-01 13:46:25.557074865 +0000 UTC m=+0.088647397 container health_status c13c85560319b246b9406aed1b6b3015b2c424d3ffff37d0301b6634f5976b0d (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20250923, config_id=iscsid, container_name=iscsid, io.buildah.version=1.41.3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 09:46:25 np0005464214 podman[279437]: 2025-10-01 13:46:25.557241661 +0000 UTC m=+0.092262253 container health_status a1dc94033b3cdaf0c1939938a446bc6911aa0e2bf0fa0c22c3e618390e8d96e1 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=36bccb96575468ec919301205d8daa2c, container_name=multipathd, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250923)
Oct  1 09:46:25 np0005464214 podman[279439]: 2025-10-01 13:46:25.596724575 +0000 UTC m=+0.118219217 container health_status dfa509645741d50db256c392f2c7e50ef8a1653f296eb853aefbe3d2ba4746c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20250923, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=36bccb96575468ec919301205d8daa2c, io.buildah.version=1.41.3)
Oct  1 09:46:25 np0005464214 podman[279436]: 2025-10-01 13:46:25.597047186 +0000 UTC m=+0.140001960 container health_status 583cde33fa111e3f81ff9a7adf51b7bee43cd123d54d60383b2d474b3b5066ad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20250923)
Oct  1 09:46:26 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1272: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:46:27 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 09:46:28 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1273: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:46:30 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1274: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:46:31 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:46:31.443 161890 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=7, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'd2:60:33', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '86:05:a5:2a:6f:f1'}, ipsec=False) old=SB_Global(nb_cfg=6) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  1 09:46:31 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:46:31.445 161890 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  1 09:46:32 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 09:46:32 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1275: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:46:34 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1276: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:46:36 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:46:36.447 161890 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=7280030e-2ba6-406c-9fae-f8284a927c47, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '7'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  1 09:46:36 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1277: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:46:37 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 09:46:38 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1278: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:46:40 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1279: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:46:42 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 09:46:42 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1280: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:46:44 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1281: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:46:46 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1282: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:46:47 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 09:46:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 09:46:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 09:46:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 09:46:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 09:46:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 09:46:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 09:46:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] Optimize plan auto_2025-10-01_13:46:47
Oct  1 09:46:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  1 09:46:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] do_upmap
Oct  1 09:46:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] pools ['.rgw.root', 'cephfs.cephfs.data', 'default.rgw.meta', 'images', 'default.rgw.control', 'volumes', 'backups', 'vms', '.mgr', 'default.rgw.log', 'cephfs.cephfs.meta']
Oct  1 09:46:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] prepared 0/10 changes
Oct  1 09:46:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  1 09:46:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  1 09:46:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  1 09:46:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  1 09:46:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  1 09:46:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  1 09:46:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  1 09:46:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  1 09:46:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  1 09:46:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  1 09:46:48 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1283: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:46:50 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1284: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:46:52 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 09:46:52 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1285: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:46:54 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1286: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:46:55 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  1 09:46:55 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3312553878' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  1 09:46:55 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  1 09:46:55 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3312553878' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  1 09:46:56 np0005464214 podman[279521]: 2025-10-01 13:46:56.543990749 +0000 UTC m=+0.088030328 container health_status a1dc94033b3cdaf0c1939938a446bc6911aa0e2bf0fa0c22c3e618390e8d96e1 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_id=multipathd, org.label-schema.build-date=20250923, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 09:46:56 np0005464214 podman[279523]: 2025-10-01 13:46:56.546328204 +0000 UTC m=+0.085486428 container health_status dfa509645741d50db256c392f2c7e50ef8a1653f296eb853aefbe3d2ba4746c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20250923, org.label-schema.schema-version=1.0, tcib_build_tag=36bccb96575468ec919301205d8daa2c)
Oct  1 09:46:56 np0005464214 podman[279522]: 2025-10-01 13:46:56.557907371 +0000 UTC m=+0.090338091 container health_status c13c85560319b246b9406aed1b6b3015b2c424d3ffff37d0301b6634f5976b0d (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=iscsid, org.label-schema.build-date=20250923, org.label-schema.license=GPLv2, container_name=iscsid, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3)
Oct  1 09:46:56 np0005464214 podman[279520]: 2025-10-01 13:46:56.573540458 +0000 UTC m=+0.115382197 container health_status 583cde33fa111e3f81ff9a7adf51b7bee43cd123d54d60383b2d474b3b5066ad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.build-date=20250923, tcib_build_tag=36bccb96575468ec919301205d8daa2c, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct  1 09:46:56 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1287: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:46:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] _maybe_adjust
Oct  1 09:46:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:46:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  1 09:46:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:46:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 09:46:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:46:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 09:46:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:46:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 09:46:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:46:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Oct  1 09:46:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:46:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct  1 09:46:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:46:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 09:46:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:46:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  1 09:46:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:46:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  1 09:46:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:46:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 09:46:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:46:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  1 09:46:57 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 09:46:58 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  1 09:46:58 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  1 09:46:58 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  1 09:46:58 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  1 09:46:58 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  1 09:46:58 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:46:58 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev 2245bf68-ea2b-4f19-9320-ae6348cf57fa does not exist
Oct  1 09:46:58 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev 3d3cf343-1157-4449-8212-e1fdd3a03718 does not exist
Oct  1 09:46:58 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev a61faa59-1e9f-4137-8a8a-2d9d4c052011 does not exist
Oct  1 09:46:58 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  1 09:46:58 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  1 09:46:58 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  1 09:46:58 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  1 09:46:58 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  1 09:46:58 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  1 09:46:58 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1288: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:46:58 np0005464214 podman[279877]: 2025-10-01 13:46:58.9238835 +0000 UTC m=+0.068850819 container create f6bf963a6766858b6d50c4521307cf31233d5da57da14a923c24c65afcdd6cca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_mahavira, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Oct  1 09:46:58 np0005464214 systemd[1]: Started libpod-conmon-f6bf963a6766858b6d50c4521307cf31233d5da57da14a923c24c65afcdd6cca.scope.
Oct  1 09:46:58 np0005464214 podman[279877]: 2025-10-01 13:46:58.888676921 +0000 UTC m=+0.033644330 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 09:46:59 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:46:59 np0005464214 podman[279877]: 2025-10-01 13:46:59.03121696 +0000 UTC m=+0.176184309 container init f6bf963a6766858b6d50c4521307cf31233d5da57da14a923c24c65afcdd6cca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_mahavira, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 09:46:59 np0005464214 podman[279877]: 2025-10-01 13:46:59.040706262 +0000 UTC m=+0.185673611 container start f6bf963a6766858b6d50c4521307cf31233d5da57da14a923c24c65afcdd6cca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_mahavira, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True)
Oct  1 09:46:59 np0005464214 podman[279877]: 2025-10-01 13:46:59.045538556 +0000 UTC m=+0.190505875 container attach f6bf963a6766858b6d50c4521307cf31233d5da57da14a923c24c65afcdd6cca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_mahavira, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct  1 09:46:59 np0005464214 funny_mahavira[279893]: 167 167
Oct  1 09:46:59 np0005464214 podman[279877]: 2025-10-01 13:46:59.048879332 +0000 UTC m=+0.193846681 container died f6bf963a6766858b6d50c4521307cf31233d5da57da14a923c24c65afcdd6cca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_mahavira, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 09:46:59 np0005464214 systemd[1]: libpod-f6bf963a6766858b6d50c4521307cf31233d5da57da14a923c24c65afcdd6cca.scope: Deactivated successfully.
Oct  1 09:46:59 np0005464214 systemd[1]: var-lib-containers-storage-overlay-c059f40f72b3a23614cd069d36d065f62517dd5ff9c5ed8d19d44ebb46b99280-merged.mount: Deactivated successfully.
Oct  1 09:46:59 np0005464214 podman[279877]: 2025-10-01 13:46:59.103493417 +0000 UTC m=+0.248460746 container remove f6bf963a6766858b6d50c4521307cf31233d5da57da14a923c24c65afcdd6cca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_mahavira, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 09:46:59 np0005464214 systemd[1]: libpod-conmon-f6bf963a6766858b6d50c4521307cf31233d5da57da14a923c24c65afcdd6cca.scope: Deactivated successfully.
Oct  1 09:46:59 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  1 09:46:59 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:46:59 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  1 09:46:59 np0005464214 podman[279917]: 2025-10-01 13:46:59.332191834 +0000 UTC m=+0.044286999 container create 04be99592cf2215fe97fca778409f7e2eb9ac9d7852b086f36dab7d6d3d92625 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_hopper, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 09:46:59 np0005464214 systemd[1]: Started libpod-conmon-04be99592cf2215fe97fca778409f7e2eb9ac9d7852b086f36dab7d6d3d92625.scope.
Oct  1 09:46:59 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:46:59 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66e73cd91cd946eba76520a23f13f8af19c24adb3f87cac98b4edb1f509f1d0b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 09:46:59 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66e73cd91cd946eba76520a23f13f8af19c24adb3f87cac98b4edb1f509f1d0b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 09:46:59 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66e73cd91cd946eba76520a23f13f8af19c24adb3f87cac98b4edb1f509f1d0b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 09:46:59 np0005464214 podman[279917]: 2025-10-01 13:46:59.314180392 +0000 UTC m=+0.026275587 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 09:46:59 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66e73cd91cd946eba76520a23f13f8af19c24adb3f87cac98b4edb1f509f1d0b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 09:46:59 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66e73cd91cd946eba76520a23f13f8af19c24adb3f87cac98b4edb1f509f1d0b/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  1 09:46:59 np0005464214 podman[279917]: 2025-10-01 13:46:59.427260465 +0000 UTC m=+0.139355660 container init 04be99592cf2215fe97fca778409f7e2eb9ac9d7852b086f36dab7d6d3d92625 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_hopper, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 09:46:59 np0005464214 podman[279917]: 2025-10-01 13:46:59.440845667 +0000 UTC m=+0.152940832 container start 04be99592cf2215fe97fca778409f7e2eb9ac9d7852b086f36dab7d6d3d92625 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_hopper, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct  1 09:46:59 np0005464214 podman[279917]: 2025-10-01 13:46:59.44505752 +0000 UTC m=+0.157152685 container attach 04be99592cf2215fe97fca778409f7e2eb9ac9d7852b086f36dab7d6d3d92625 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_hopper, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 09:47:00 np0005464214 sweet_hopper[279934]: --> passed data devices: 0 physical, 3 LVM
Oct  1 09:47:00 np0005464214 sweet_hopper[279934]: --> relative data size: 1.0
Oct  1 09:47:00 np0005464214 sweet_hopper[279934]: --> All data devices are unavailable
Oct  1 09:47:00 np0005464214 systemd[1]: libpod-04be99592cf2215fe97fca778409f7e2eb9ac9d7852b086f36dab7d6d3d92625.scope: Deactivated successfully.
Oct  1 09:47:00 np0005464214 systemd[1]: libpod-04be99592cf2215fe97fca778409f7e2eb9ac9d7852b086f36dab7d6d3d92625.scope: Consumed 1.229s CPU time.
Oct  1 09:47:00 np0005464214 podman[279917]: 2025-10-01 13:47:00.711851283 +0000 UTC m=+1.423946468 container died 04be99592cf2215fe97fca778409f7e2eb9ac9d7852b086f36dab7d6d3d92625 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_hopper, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 09:47:00 np0005464214 systemd[1]: var-lib-containers-storage-overlay-66e73cd91cd946eba76520a23f13f8af19c24adb3f87cac98b4edb1f509f1d0b-merged.mount: Deactivated successfully.
Oct  1 09:47:00 np0005464214 podman[279917]: 2025-10-01 13:47:00.787376473 +0000 UTC m=+1.499471668 container remove 04be99592cf2215fe97fca778409f7e2eb9ac9d7852b086f36dab7d6d3d92625 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_hopper, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 09:47:00 np0005464214 systemd[1]: libpod-conmon-04be99592cf2215fe97fca778409f7e2eb9ac9d7852b086f36dab7d6d3d92625.scope: Deactivated successfully.
Oct  1 09:47:00 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1289: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:47:01 np0005464214 podman[280118]: 2025-10-01 13:47:01.628597742 +0000 UTC m=+0.069509820 container create fa2f8a461088e7fe8484d6638941a197451cf3ac516d40ef4f7701aa0e5a4842 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_antonelli, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 09:47:01 np0005464214 systemd[1]: Started libpod-conmon-fa2f8a461088e7fe8484d6638941a197451cf3ac516d40ef4f7701aa0e5a4842.scope.
Oct  1 09:47:01 np0005464214 podman[280118]: 2025-10-01 13:47:01.602914126 +0000 UTC m=+0.043826304 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 09:47:01 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:47:01 np0005464214 podman[280118]: 2025-10-01 13:47:01.718921072 +0000 UTC m=+0.159833170 container init fa2f8a461088e7fe8484d6638941a197451cf3ac516d40ef4f7701aa0e5a4842 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_antonelli, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Oct  1 09:47:01 np0005464214 podman[280118]: 2025-10-01 13:47:01.727082361 +0000 UTC m=+0.167994439 container start fa2f8a461088e7fe8484d6638941a197451cf3ac516d40ef4f7701aa0e5a4842 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_antonelli, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 09:47:01 np0005464214 podman[280118]: 2025-10-01 13:47:01.730948445 +0000 UTC m=+0.171860543 container attach fa2f8a461088e7fe8484d6638941a197451cf3ac516d40ef4f7701aa0e5a4842 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_antonelli, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 09:47:01 np0005464214 lucid_antonelli[280135]: 167 167
Oct  1 09:47:01 np0005464214 systemd[1]: libpod-fa2f8a461088e7fe8484d6638941a197451cf3ac516d40ef4f7701aa0e5a4842.scope: Deactivated successfully.
Oct  1 09:47:01 np0005464214 podman[280118]: 2025-10-01 13:47:01.735951893 +0000 UTC m=+0.176863971 container died fa2f8a461088e7fe8484d6638941a197451cf3ac516d40ef4f7701aa0e5a4842 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_antonelli, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 09:47:01 np0005464214 systemd[1]: var-lib-containers-storage-overlay-39f3897de954edc3c5cff937ac9dbd13afe7e020150ce3042b4bbfda72cf1a35-merged.mount: Deactivated successfully.
Oct  1 09:47:01 np0005464214 podman[280118]: 2025-10-01 13:47:01.772755783 +0000 UTC m=+0.213667861 container remove fa2f8a461088e7fe8484d6638941a197451cf3ac516d40ef4f7701aa0e5a4842 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_antonelli, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 09:47:01 np0005464214 systemd[1]: libpod-conmon-fa2f8a461088e7fe8484d6638941a197451cf3ac516d40ef4f7701aa0e5a4842.scope: Deactivated successfully.
Oct  1 09:47:01 np0005464214 podman[280159]: 2025-10-01 13:47:01.956801081 +0000 UTC m=+0.060262006 container create f95141466ccc073a3db0215ac95c781520ea0886d0c701265637578ea102c0dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_blackburn, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Oct  1 09:47:01 np0005464214 systemd[1]: Started libpod-conmon-f95141466ccc073a3db0215ac95c781520ea0886d0c701265637578ea102c0dd.scope.
Oct  1 09:47:02 np0005464214 podman[280159]: 2025-10-01 13:47:01.930082802 +0000 UTC m=+0.033543817 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 09:47:02 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:47:02 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/325b9d651ed26a12d938cbbd9241eef6c4f536c4e5379c9675a7683a87bb7be8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 09:47:02 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/325b9d651ed26a12d938cbbd9241eef6c4f536c4e5379c9675a7683a87bb7be8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 09:47:02 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/325b9d651ed26a12d938cbbd9241eef6c4f536c4e5379c9675a7683a87bb7be8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 09:47:02 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/325b9d651ed26a12d938cbbd9241eef6c4f536c4e5379c9675a7683a87bb7be8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 09:47:02 np0005464214 podman[280159]: 2025-10-01 13:47:02.059767762 +0000 UTC m=+0.163228687 container init f95141466ccc073a3db0215ac95c781520ea0886d0c701265637578ea102c0dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_blackburn, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 09:47:02 np0005464214 podman[280159]: 2025-10-01 13:47:02.069505262 +0000 UTC m=+0.172966187 container start f95141466ccc073a3db0215ac95c781520ea0886d0c701265637578ea102c0dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_blackburn, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Oct  1 09:47:02 np0005464214 podman[280159]: 2025-10-01 13:47:02.073000293 +0000 UTC m=+0.176461218 container attach f95141466ccc073a3db0215ac95c781520ea0886d0c701265637578ea102c0dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_blackburn, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True)
Oct  1 09:47:02 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e152 do_prune osdmap full prune enabled
Oct  1 09:47:02 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e153 e153: 3 total, 3 up, 3 in
Oct  1 09:47:02 np0005464214 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e153: 3 total, 3 up, 3 in
Oct  1 09:47:02 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 09:47:02 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1291: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 1.6 KiB/s wr, 14 op/s
Oct  1 09:47:02 np0005464214 friendly_blackburn[280176]: {
Oct  1 09:47:02 np0005464214 friendly_blackburn[280176]:    "0": [
Oct  1 09:47:02 np0005464214 friendly_blackburn[280176]:        {
Oct  1 09:47:02 np0005464214 friendly_blackburn[280176]:            "devices": [
Oct  1 09:47:02 np0005464214 friendly_blackburn[280176]:                "/dev/loop3"
Oct  1 09:47:02 np0005464214 friendly_blackburn[280176]:            ],
Oct  1 09:47:02 np0005464214 friendly_blackburn[280176]:            "lv_name": "ceph_lv0",
Oct  1 09:47:02 np0005464214 friendly_blackburn[280176]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  1 09:47:02 np0005464214 friendly_blackburn[280176]:            "lv_size": "21470642176",
Oct  1 09:47:02 np0005464214 friendly_blackburn[280176]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 09:47:02 np0005464214 friendly_blackburn[280176]:            "lv_uuid": "wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS",
Oct  1 09:47:02 np0005464214 friendly_blackburn[280176]:            "name": "ceph_lv0",
Oct  1 09:47:02 np0005464214 friendly_blackburn[280176]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  1 09:47:02 np0005464214 friendly_blackburn[280176]:            "tags": {
Oct  1 09:47:02 np0005464214 friendly_blackburn[280176]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  1 09:47:02 np0005464214 friendly_blackburn[280176]:                "ceph.block_uuid": "wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS",
Oct  1 09:47:02 np0005464214 friendly_blackburn[280176]:                "ceph.cephx_lockbox_secret": "",
Oct  1 09:47:02 np0005464214 friendly_blackburn[280176]:                "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 09:47:02 np0005464214 friendly_blackburn[280176]:                "ceph.cluster_name": "ceph",
Oct  1 09:47:02 np0005464214 friendly_blackburn[280176]:                "ceph.crush_device_class": "",
Oct  1 09:47:02 np0005464214 friendly_blackburn[280176]:                "ceph.encrypted": "0",
Oct  1 09:47:02 np0005464214 friendly_blackburn[280176]:                "ceph.osd_fsid": "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982",
Oct  1 09:47:02 np0005464214 friendly_blackburn[280176]:                "ceph.osd_id": "0",
Oct  1 09:47:02 np0005464214 friendly_blackburn[280176]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 09:47:02 np0005464214 friendly_blackburn[280176]:                "ceph.type": "block",
Oct  1 09:47:02 np0005464214 friendly_blackburn[280176]:                "ceph.vdo": "0"
Oct  1 09:47:02 np0005464214 friendly_blackburn[280176]:            },
Oct  1 09:47:02 np0005464214 friendly_blackburn[280176]:            "type": "block",
Oct  1 09:47:02 np0005464214 friendly_blackburn[280176]:            "vg_name": "ceph_vg0"
Oct  1 09:47:02 np0005464214 friendly_blackburn[280176]:        }
Oct  1 09:47:02 np0005464214 friendly_blackburn[280176]:    ],
Oct  1 09:47:02 np0005464214 friendly_blackburn[280176]:    "1": [
Oct  1 09:47:02 np0005464214 friendly_blackburn[280176]:        {
Oct  1 09:47:02 np0005464214 friendly_blackburn[280176]:            "devices": [
Oct  1 09:47:02 np0005464214 friendly_blackburn[280176]:                "/dev/loop4"
Oct  1 09:47:02 np0005464214 friendly_blackburn[280176]:            ],
Oct  1 09:47:02 np0005464214 friendly_blackburn[280176]:            "lv_name": "ceph_lv1",
Oct  1 09:47:02 np0005464214 friendly_blackburn[280176]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  1 09:47:02 np0005464214 friendly_blackburn[280176]:            "lv_size": "21470642176",
Oct  1 09:47:02 np0005464214 friendly_blackburn[280176]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f5852bc7-e830-489a-b8a9-42dfbbe71426,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 09:47:02 np0005464214 friendly_blackburn[280176]:            "lv_uuid": "BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY",
Oct  1 09:47:02 np0005464214 friendly_blackburn[280176]:            "name": "ceph_lv1",
Oct  1 09:47:02 np0005464214 friendly_blackburn[280176]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  1 09:47:02 np0005464214 friendly_blackburn[280176]:            "tags": {
Oct  1 09:47:02 np0005464214 friendly_blackburn[280176]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  1 09:47:02 np0005464214 friendly_blackburn[280176]:                "ceph.block_uuid": "BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY",
Oct  1 09:47:02 np0005464214 friendly_blackburn[280176]:                "ceph.cephx_lockbox_secret": "",
Oct  1 09:47:02 np0005464214 friendly_blackburn[280176]:                "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 09:47:02 np0005464214 friendly_blackburn[280176]:                "ceph.cluster_name": "ceph",
Oct  1 09:47:02 np0005464214 friendly_blackburn[280176]:                "ceph.crush_device_class": "",
Oct  1 09:47:02 np0005464214 friendly_blackburn[280176]:                "ceph.encrypted": "0",
Oct  1 09:47:02 np0005464214 friendly_blackburn[280176]:                "ceph.osd_fsid": "f5852bc7-e830-489a-b8a9-42dfbbe71426",
Oct  1 09:47:02 np0005464214 friendly_blackburn[280176]:                "ceph.osd_id": "1",
Oct  1 09:47:02 np0005464214 friendly_blackburn[280176]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 09:47:02 np0005464214 friendly_blackburn[280176]:                "ceph.type": "block",
Oct  1 09:47:02 np0005464214 friendly_blackburn[280176]:                "ceph.vdo": "0"
Oct  1 09:47:02 np0005464214 friendly_blackburn[280176]:            },
Oct  1 09:47:02 np0005464214 friendly_blackburn[280176]:            "type": "block",
Oct  1 09:47:02 np0005464214 friendly_blackburn[280176]:            "vg_name": "ceph_vg1"
Oct  1 09:47:02 np0005464214 friendly_blackburn[280176]:        }
Oct  1 09:47:02 np0005464214 friendly_blackburn[280176]:    ],
Oct  1 09:47:02 np0005464214 friendly_blackburn[280176]:    "2": [
Oct  1 09:47:02 np0005464214 friendly_blackburn[280176]:        {
Oct  1 09:47:02 np0005464214 friendly_blackburn[280176]:            "devices": [
Oct  1 09:47:02 np0005464214 friendly_blackburn[280176]:                "/dev/loop5"
Oct  1 09:47:02 np0005464214 friendly_blackburn[280176]:            ],
Oct  1 09:47:02 np0005464214 friendly_blackburn[280176]:            "lv_name": "ceph_lv2",
Oct  1 09:47:02 np0005464214 friendly_blackburn[280176]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  1 09:47:02 np0005464214 friendly_blackburn[280176]:            "lv_size": "21470642176",
Oct  1 09:47:02 np0005464214 friendly_blackburn[280176]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c4c937e2-a8a8-47c3-af37-fdedb6fff1f9,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 09:47:02 np0005464214 friendly_blackburn[280176]:            "lv_uuid": "mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK",
Oct  1 09:47:02 np0005464214 friendly_blackburn[280176]:            "name": "ceph_lv2",
Oct  1 09:47:02 np0005464214 friendly_blackburn[280176]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  1 09:47:02 np0005464214 friendly_blackburn[280176]:            "tags": {
Oct  1 09:47:02 np0005464214 friendly_blackburn[280176]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  1 09:47:02 np0005464214 friendly_blackburn[280176]:                "ceph.block_uuid": "mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK",
Oct  1 09:47:02 np0005464214 friendly_blackburn[280176]:                "ceph.cephx_lockbox_secret": "",
Oct  1 09:47:02 np0005464214 friendly_blackburn[280176]:                "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 09:47:02 np0005464214 friendly_blackburn[280176]:                "ceph.cluster_name": "ceph",
Oct  1 09:47:02 np0005464214 friendly_blackburn[280176]:                "ceph.crush_device_class": "",
Oct  1 09:47:02 np0005464214 friendly_blackburn[280176]:                "ceph.encrypted": "0",
Oct  1 09:47:02 np0005464214 friendly_blackburn[280176]:                "ceph.osd_fsid": "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9",
Oct  1 09:47:02 np0005464214 friendly_blackburn[280176]:                "ceph.osd_id": "2",
Oct  1 09:47:02 np0005464214 friendly_blackburn[280176]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 09:47:02 np0005464214 friendly_blackburn[280176]:                "ceph.type": "block",
Oct  1 09:47:02 np0005464214 friendly_blackburn[280176]:                "ceph.vdo": "0"
Oct  1 09:47:02 np0005464214 friendly_blackburn[280176]:            },
Oct  1 09:47:02 np0005464214 friendly_blackburn[280176]:            "type": "block",
Oct  1 09:47:02 np0005464214 friendly_blackburn[280176]:            "vg_name": "ceph_vg2"
Oct  1 09:47:02 np0005464214 friendly_blackburn[280176]:        }
Oct  1 09:47:02 np0005464214 friendly_blackburn[280176]:    ]
Oct  1 09:47:02 np0005464214 friendly_blackburn[280176]: }
Oct  1 09:47:02 np0005464214 systemd[1]: libpod-f95141466ccc073a3db0215ac95c781520ea0886d0c701265637578ea102c0dd.scope: Deactivated successfully.
Oct  1 09:47:02 np0005464214 podman[280159]: 2025-10-01 13:47:02.946296622 +0000 UTC m=+1.049757557 container died f95141466ccc073a3db0215ac95c781520ea0886d0c701265637578ea102c0dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_blackburn, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Oct  1 09:47:02 np0005464214 systemd[1]: var-lib-containers-storage-overlay-325b9d651ed26a12d938cbbd9241eef6c4f536c4e5379c9675a7683a87bb7be8-merged.mount: Deactivated successfully.
Oct  1 09:47:03 np0005464214 podman[280159]: 2025-10-01 13:47:03.012869118 +0000 UTC m=+1.116330043 container remove f95141466ccc073a3db0215ac95c781520ea0886d0c701265637578ea102c0dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_blackburn, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 09:47:03 np0005464214 systemd[1]: libpod-conmon-f95141466ccc073a3db0215ac95c781520ea0886d0c701265637578ea102c0dd.scope: Deactivated successfully.
Oct  1 09:47:03 np0005464214 nova_compute[260022]: 2025-10-01 13:47:03.344 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 09:47:03 np0005464214 podman[280343]: 2025-10-01 13:47:03.727449444 +0000 UTC m=+0.039604010 container create 49a1664bd40af87e997666b73bbc84a5a3bfa2ea9efa22016b5368a90525efb3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_goldstine, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 09:47:03 np0005464214 systemd[1]: Started libpod-conmon-49a1664bd40af87e997666b73bbc84a5a3bfa2ea9efa22016b5368a90525efb3.scope.
Oct  1 09:47:03 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:47:03 np0005464214 podman[280343]: 2025-10-01 13:47:03.709450561 +0000 UTC m=+0.021605137 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 09:47:03 np0005464214 podman[280343]: 2025-10-01 13:47:03.809259062 +0000 UTC m=+0.121413628 container init 49a1664bd40af87e997666b73bbc84a5a3bfa2ea9efa22016b5368a90525efb3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_goldstine, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 09:47:03 np0005464214 podman[280343]: 2025-10-01 13:47:03.816650608 +0000 UTC m=+0.128805164 container start 49a1664bd40af87e997666b73bbc84a5a3bfa2ea9efa22016b5368a90525efb3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_goldstine, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct  1 09:47:03 np0005464214 podman[280343]: 2025-10-01 13:47:03.819568881 +0000 UTC m=+0.131723437 container attach 49a1664bd40af87e997666b73bbc84a5a3bfa2ea9efa22016b5368a90525efb3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_goldstine, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 09:47:03 np0005464214 compassionate_goldstine[280359]: 167 167
Oct  1 09:47:03 np0005464214 systemd[1]: libpod-49a1664bd40af87e997666b73bbc84a5a3bfa2ea9efa22016b5368a90525efb3.scope: Deactivated successfully.
Oct  1 09:47:03 np0005464214 podman[280343]: 2025-10-01 13:47:03.828136073 +0000 UTC m=+0.140290649 container died 49a1664bd40af87e997666b73bbc84a5a3bfa2ea9efa22016b5368a90525efb3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_goldstine, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 09:47:03 np0005464214 systemd[1]: var-lib-containers-storage-overlay-8db80256abd2848d755fa135a4bb8bdb74bc165263f7902a0048abbb44185303-merged.mount: Deactivated successfully.
Oct  1 09:47:03 np0005464214 podman[280343]: 2025-10-01 13:47:03.867524085 +0000 UTC m=+0.179678641 container remove 49a1664bd40af87e997666b73bbc84a5a3bfa2ea9efa22016b5368a90525efb3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_goldstine, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 09:47:03 np0005464214 systemd[1]: libpod-conmon-49a1664bd40af87e997666b73bbc84a5a3bfa2ea9efa22016b5368a90525efb3.scope: Deactivated successfully.
Oct  1 09:47:04 np0005464214 podman[280383]: 2025-10-01 13:47:04.069273154 +0000 UTC m=+0.056963040 container create 8e0f95360219c9d38cc87f59f76659a6f5e7246c233bfbf7b41df216e628afcf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_hoover, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True)
Oct  1 09:47:04 np0005464214 systemd[1]: Started libpod-conmon-8e0f95360219c9d38cc87f59f76659a6f5e7246c233bfbf7b41df216e628afcf.scope.
Oct  1 09:47:04 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:47:04 np0005464214 podman[280383]: 2025-10-01 13:47:04.050048213 +0000 UTC m=+0.037738119 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 09:47:04 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66cf05207816a2b750f4be31120794ff164b981f36c5e4c049e27145845e2b63/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 09:47:04 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66cf05207816a2b750f4be31120794ff164b981f36c5e4c049e27145845e2b63/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 09:47:04 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66cf05207816a2b750f4be31120794ff164b981f36c5e4c049e27145845e2b63/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 09:47:04 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66cf05207816a2b750f4be31120794ff164b981f36c5e4c049e27145845e2b63/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 09:47:04 np0005464214 podman[280383]: 2025-10-01 13:47:04.16199632 +0000 UTC m=+0.149686256 container init 8e0f95360219c9d38cc87f59f76659a6f5e7246c233bfbf7b41df216e628afcf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_hoover, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Oct  1 09:47:04 np0005464214 podman[280383]: 2025-10-01 13:47:04.168942682 +0000 UTC m=+0.156632588 container start 8e0f95360219c9d38cc87f59f76659a6f5e7246c233bfbf7b41df216e628afcf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_hoover, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 09:47:04 np0005464214 podman[280383]: 2025-10-01 13:47:04.172552597 +0000 UTC m=+0.160242483 container attach 8e0f95360219c9d38cc87f59f76659a6f5e7246c233bfbf7b41df216e628afcf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_hoover, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 09:47:04 np0005464214 nova_compute[260022]: 2025-10-01 13:47:04.344 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 09:47:04 np0005464214 nova_compute[260022]: 2025-10-01 13:47:04.371 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 09:47:04 np0005464214 nova_compute[260022]: 2025-10-01 13:47:04.371 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 09:47:04 np0005464214 nova_compute[260022]: 2025-10-01 13:47:04.372 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 09:47:04 np0005464214 nova_compute[260022]: 2025-10-01 13:47:04.372 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  1 09:47:04 np0005464214 nova_compute[260022]: 2025-10-01 13:47:04.372 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 09:47:04 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  1 09:47:04 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2271199026' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  1 09:47:04 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1292: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 1.6 KiB/s wr, 14 op/s
Oct  1 09:47:04 np0005464214 nova_compute[260022]: 2025-10-01 13:47:04.826 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 09:47:05 np0005464214 nova_compute[260022]: 2025-10-01 13:47:05.045 2 WARNING nova.virt.libvirt.driver [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  1 09:47:05 np0005464214 nova_compute[260022]: 2025-10-01 13:47:05.046 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5064MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  1 09:47:05 np0005464214 nova_compute[260022]: 2025-10-01 13:47:05.047 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 09:47:05 np0005464214 nova_compute[260022]: 2025-10-01 13:47:05.047 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 09:47:05 np0005464214 nova_compute[260022]: 2025-10-01 13:47:05.145 2 INFO nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Instance 3d91db2b-812b-47ea-a0f8-0384b4c68597 has allocations against this compute host but is not found in the database.#033[00m
Oct  1 09:47:05 np0005464214 nova_compute[260022]: 2025-10-01 13:47:05.146 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  1 09:47:05 np0005464214 nova_compute[260022]: 2025-10-01 13:47:05.146 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  1 09:47:05 np0005464214 festive_hoover[280399]: {
Oct  1 09:47:05 np0005464214 festive_hoover[280399]:    "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982": {
Oct  1 09:47:05 np0005464214 festive_hoover[280399]:        "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 09:47:05 np0005464214 festive_hoover[280399]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  1 09:47:05 np0005464214 festive_hoover[280399]:        "osd_id": 0,
Oct  1 09:47:05 np0005464214 festive_hoover[280399]:        "osd_uuid": "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982",
Oct  1 09:47:05 np0005464214 festive_hoover[280399]:        "type": "bluestore"
Oct  1 09:47:05 np0005464214 festive_hoover[280399]:    },
Oct  1 09:47:05 np0005464214 festive_hoover[280399]:    "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9": {
Oct  1 09:47:05 np0005464214 festive_hoover[280399]:        "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 09:47:05 np0005464214 festive_hoover[280399]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  1 09:47:05 np0005464214 festive_hoover[280399]:        "osd_id": 2,
Oct  1 09:47:05 np0005464214 festive_hoover[280399]:        "osd_uuid": "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9",
Oct  1 09:47:05 np0005464214 festive_hoover[280399]:        "type": "bluestore"
Oct  1 09:47:05 np0005464214 festive_hoover[280399]:    },
Oct  1 09:47:05 np0005464214 festive_hoover[280399]:    "f5852bc7-e830-489a-b8a9-42dfbbe71426": {
Oct  1 09:47:05 np0005464214 festive_hoover[280399]:        "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 09:47:05 np0005464214 festive_hoover[280399]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  1 09:47:05 np0005464214 festive_hoover[280399]:        "osd_id": 1,
Oct  1 09:47:05 np0005464214 festive_hoover[280399]:        "osd_uuid": "f5852bc7-e830-489a-b8a9-42dfbbe71426",
Oct  1 09:47:05 np0005464214 festive_hoover[280399]:        "type": "bluestore"
Oct  1 09:47:05 np0005464214 festive_hoover[280399]:    }
Oct  1 09:47:05 np0005464214 festive_hoover[280399]: }
Oct  1 09:47:05 np0005464214 nova_compute[260022]: 2025-10-01 13:47:05.184 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 09:47:05 np0005464214 systemd[1]: libpod-8e0f95360219c9d38cc87f59f76659a6f5e7246c233bfbf7b41df216e628afcf.scope: Deactivated successfully.
Oct  1 09:47:05 np0005464214 podman[280383]: 2025-10-01 13:47:05.202343248 +0000 UTC m=+1.190033134 container died 8e0f95360219c9d38cc87f59f76659a6f5e7246c233bfbf7b41df216e628afcf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_hoover, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True)
Oct  1 09:47:05 np0005464214 systemd[1]: libpod-8e0f95360219c9d38cc87f59f76659a6f5e7246c233bfbf7b41df216e628afcf.scope: Consumed 1.010s CPU time.
Oct  1 09:47:05 np0005464214 systemd[1]: var-lib-containers-storage-overlay-66cf05207816a2b750f4be31120794ff164b981f36c5e4c049e27145845e2b63-merged.mount: Deactivated successfully.
Oct  1 09:47:05 np0005464214 podman[280383]: 2025-10-01 13:47:05.257405167 +0000 UTC m=+1.245095053 container remove 8e0f95360219c9d38cc87f59f76659a6f5e7246c233bfbf7b41df216e628afcf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_hoover, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 09:47:05 np0005464214 systemd[1]: libpod-conmon-8e0f95360219c9d38cc87f59f76659a6f5e7246c233bfbf7b41df216e628afcf.scope: Deactivated successfully.
Oct  1 09:47:05 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  1 09:47:05 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:47:05 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  1 09:47:05 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:47:05 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev 354bc6d8-eac4-45e9-bad5-cedf753ad956 does not exist
Oct  1 09:47:05 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev 6d333d67-577c-44d6-9943-844da9a5b104 does not exist
Oct  1 09:47:05 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  1 09:47:05 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/478062052' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  1 09:47:05 np0005464214 nova_compute[260022]: 2025-10-01 13:47:05.651 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 09:47:05 np0005464214 nova_compute[260022]: 2025-10-01 13:47:05.657 2 DEBUG nova.compute.provider_tree [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Inventory has not changed in ProviderTree for provider: c1b9017d-7e6f-44ea-9ee2-bc19313d736f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  1 09:47:05 np0005464214 nova_compute[260022]: 2025-10-01 13:47:05.670 2 DEBUG nova.scheduler.client.report [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Inventory has not changed for provider c1b9017d-7e6f-44ea-9ee2-bc19313d736f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  1 09:47:05 np0005464214 nova_compute[260022]: 2025-10-01 13:47:05.672 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  1 09:47:05 np0005464214 nova_compute[260022]: 2025-10-01 13:47:05.672 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.625s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 09:47:05 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e153 do_prune osdmap full prune enabled
Oct  1 09:47:05 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e154 e154: 3 total, 3 up, 3 in
Oct  1 09:47:05 np0005464214 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e154: 3 total, 3 up, 3 in
Oct  1 09:47:05 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:47:05 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:47:06 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1294: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s rd, 2.0 KiB/s wr, 18 op/s
Oct  1 09:47:07 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 09:47:08 np0005464214 nova_compute[260022]: 2025-10-01 13:47:08.668 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 09:47:08 np0005464214 nova_compute[260022]: 2025-10-01 13:47:08.668 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 09:47:08 np0005464214 nova_compute[260022]: 2025-10-01 13:47:08.669 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  1 09:47:08 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1295: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 4.0 KiB/s wr, 37 op/s
Oct  1 09:47:08 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e154 do_prune osdmap full prune enabled
Oct  1 09:47:08 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e155 e155: 3 total, 3 up, 3 in
Oct  1 09:47:08 np0005464214 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e155: 3 total, 3 up, 3 in
Oct  1 09:47:10 np0005464214 nova_compute[260022]: 2025-10-01 13:47:10.345 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 09:47:10 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1297: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s rd, 2.0 KiB/s wr, 18 op/s
Oct  1 09:47:11 np0005464214 nova_compute[260022]: 2025-10-01 13:47:11.345 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  1 09:47:11 np0005464214 nova_compute[260022]: 2025-10-01 13:47:11.346 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct  1 09:47:11 np0005464214 nova_compute[260022]: 2025-10-01 13:47:11.346 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct  1 09:47:11 np0005464214 nova_compute[260022]: 2025-10-01 13:47:11.451 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct  1 09:47:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:47:12.315 161890 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  1 09:47:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:47:12.315 161890 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  1 09:47:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:47:12.315 161890 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  1 09:47:12 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e155 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 09:47:12 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1298: 305 pgs: 305 active+clean; 41 MiB data, 191 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 4.0 KiB/s wr, 37 op/s
Oct  1 09:47:13 np0005464214 nova_compute[260022]: 2025-10-01 13:47:13.344 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  1 09:47:14 np0005464214 nova_compute[260022]: 2025-10-01 13:47:14.344 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  1 09:47:14 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1299: 305 pgs: 305 active+clean; 41 MiB data, 191 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 3.6 KiB/s wr, 33 op/s
Oct  1 09:47:16 np0005464214 nova_compute[260022]: 2025-10-01 13:47:16.345 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  1 09:47:16 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1300: 305 pgs: 305 active+clean; 41 MiB data, 191 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 3.2 KiB/s wr, 30 op/s
Oct  1 09:47:16 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e155 do_prune osdmap full prune enabled
Oct  1 09:47:16 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e156 e156: 3 total, 3 up, 3 in
Oct  1 09:47:16 np0005464214 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e156: 3 total, 3 up, 3 in
Oct  1 09:47:17 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 09:47:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 09:47:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 09:47:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 09:47:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 09:47:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 09:47:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 09:47:17 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e156 do_prune osdmap full prune enabled
Oct  1 09:47:17 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e157 e157: 3 total, 3 up, 3 in
Oct  1 09:47:17 np0005464214 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e157: 3 total, 3 up, 3 in
Oct  1 09:47:18 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1303: 305 pgs: 305 active+clean; 41 MiB data, 191 MiB used, 60 GiB / 60 GiB avail; 34 KiB/s rd, 3.2 KiB/s wr, 46 op/s
Oct  1 09:47:18 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e157 do_prune osdmap full prune enabled
Oct  1 09:47:18 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e158 e158: 3 total, 3 up, 3 in
Oct  1 09:47:18 np0005464214 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e158: 3 total, 3 up, 3 in
Oct  1 09:47:20 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1305: 305 pgs: 305 active+clean; 41 MiB data, 191 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 1.7 KiB/s wr, 37 op/s
Oct  1 09:47:22 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 09:47:22 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e158 do_prune osdmap full prune enabled
Oct  1 09:47:22 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e159 e159: 3 total, 3 up, 3 in
Oct  1 09:47:22 np0005464214 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e159: 3 total, 3 up, 3 in
Oct  1 09:47:22 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1307: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail; 93 KiB/s rd, 7.2 KiB/s wr, 127 op/s
Oct  1 09:47:24 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1308: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail; 55 KiB/s rd, 4.7 KiB/s wr, 76 op/s
Oct  1 09:47:26 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1309: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail; 47 KiB/s rd, 4.0 KiB/s wr, 65 op/s
Oct  1 09:47:27 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 09:47:27 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e159 do_prune osdmap full prune enabled
Oct  1 09:47:27 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e160 e160: 3 total, 3 up, 3 in
Oct  1 09:47:27 np0005464214 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e160: 3 total, 3 up, 3 in
Oct  1 09:47:27 np0005464214 podman[280542]: 2025-10-01 13:47:27.541769407 +0000 UTC m=+0.083014858 container health_status c13c85560319b246b9406aed1b6b3015b2c424d3ffff37d0301b6634f5976b0d (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20250923, org.label-schema.vendor=CentOS, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_id=iscsid, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct  1 09:47:27 np0005464214 podman[280544]: 2025-10-01 13:47:27.541916772 +0000 UTC m=+0.074731315 container health_status dfa509645741d50db256c392f2c7e50ef8a1653f296eb853aefbe3d2ba4746c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20250923, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=36bccb96575468ec919301205d8daa2c)
Oct  1 09:47:27 np0005464214 podman[280541]: 2025-10-01 13:47:27.586379684 +0000 UTC m=+0.136543889 container health_status a1dc94033b3cdaf0c1939938a446bc6911aa0e2bf0fa0c22c3e618390e8d96e1 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20250923, tcib_build_tag=36bccb96575468ec919301205d8daa2c, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct  1 09:47:27 np0005464214 podman[280540]: 2025-10-01 13:47:27.60261033 +0000 UTC m=+0.153611492 container health_status 583cde33fa111e3f81ff9a7adf51b7bee43cd123d54d60383b2d474b3b5066ad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20250923, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=36bccb96575468ec919301205d8daa2c, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Oct  1 09:47:28 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1311: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail; 47 KiB/s rd, 4.0 KiB/s wr, 65 op/s
Oct  1 09:47:30 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1312: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 2.6 KiB/s wr, 37 op/s
Oct  1 09:47:32 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:47:32.414 161890 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=8, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'd2:60:33', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '86:05:a5:2a:6f:f1'}, ipsec=False) old=SB_Global(nb_cfg=7) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct  1 09:47:32 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:47:32.415 161890 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct  1 09:47:32 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 09:47:32 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1313: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:47:34 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1314: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:47:36 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1315: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:47:37 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 09:47:38 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1316: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:47:40 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1317: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:47:41 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:47:41.419 161890 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=7280030e-2ba6-406c-9fae-f8284a927c47, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '8'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct  1 09:47:42 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 09:47:42 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1318: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:47:44 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1319: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:47:46 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1320: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:47:47 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 09:47:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 09:47:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 09:47:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 09:47:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 09:47:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 09:47:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 09:47:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] Optimize plan auto_2025-10-01_13:47:47
Oct  1 09:47:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  1 09:47:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] do_upmap
Oct  1 09:47:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] pools ['default.rgw.control', 'vms', 'default.rgw.meta', 'backups', 'cephfs.cephfs.data', 'images', 'default.rgw.log', 'volumes', '.rgw.root', '.mgr', 'cephfs.cephfs.meta']
Oct  1 09:47:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] prepared 0/10 changes
Oct  1 09:47:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  1 09:47:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  1 09:47:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  1 09:47:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  1 09:47:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  1 09:47:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  1 09:47:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  1 09:47:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  1 09:47:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  1 09:47:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  1 09:47:48 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1321: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:47:50 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1322: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:47:52 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 09:47:52 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1323: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:47:54 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1324: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:47:55 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  1 09:47:55 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1882943040' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  1 09:47:55 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  1 09:47:55 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1882943040' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  1 09:47:56 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1325: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:47:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] _maybe_adjust
Oct  1 09:47:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:47:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  1 09:47:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:47:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 09:47:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:47:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 09:47:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:47:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 09:47:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:47:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Oct  1 09:47:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:47:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct  1 09:47:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:47:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 09:47:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:47:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  1 09:47:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:47:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  1 09:47:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:47:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 09:47:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:47:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  1 09:47:57 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 09:47:58 np0005464214 podman[280629]: 2025-10-01 13:47:58.528278343 +0000 UTC m=+0.060225596 container health_status dfa509645741d50db256c392f2c7e50ef8a1653f296eb853aefbe3d2ba4746c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250923, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2)
Oct  1 09:47:58 np0005464214 podman[280627]: 2025-10-01 13:47:58.538355102 +0000 UTC m=+0.078384551 container health_status a1dc94033b3cdaf0c1939938a446bc6911aa0e2bf0fa0c22c3e618390e8d96e1 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250923, config_id=multipathd, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=multipathd, tcib_build_tag=36bccb96575468ec919301205d8daa2c)
Oct  1 09:47:58 np0005464214 podman[280628]: 2025-10-01 13:47:58.540569223 +0000 UTC m=+0.070730928 container health_status c13c85560319b246b9406aed1b6b3015b2c424d3ffff37d0301b6634f5976b0d (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20250923, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, tcib_managed=true, container_name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 09:47:58 np0005464214 podman[280626]: 2025-10-01 13:47:58.565882877 +0000 UTC m=+0.110899515 container health_status 583cde33fa111e3f81ff9a7adf51b7bee43cd123d54d60383b2d474b3b5066ad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20250923, org.label-schema.license=GPLv2, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller)
Oct  1 09:47:58 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1326: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:48:00 np0005464214 ceph-mon[74802]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  1 09:48:00 np0005464214 ceph-mon[74802]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 2400.0 total, 600.0 interval#012Cumulative writes: 5963 writes, 26K keys, 5963 commit groups, 1.0 writes per commit group, ingest: 0.04 GB, 0.02 MB/s#012Cumulative WAL: 5963 writes, 5963 syncs, 1.00 writes per sync, written: 0.04 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1384 writes, 6249 keys, 1384 commit groups, 1.0 writes per commit group, ingest: 9.02 MB, 0.02 MB/s#012Interval WAL: 1384 writes, 1384 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     15.0      1.98              0.12        15    0.132       0      0       0.0       0.0#012  L6      1/0    7.40 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   3.4     37.6     30.9      3.30              0.38        14    0.235     64K   7722       0.0       0.0#012 Sum      1/0    7.40 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   4.4     23.5     24.9      5.28              0.50        29    0.182     64K   7722       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   4.8     98.4     99.0      0.40              0.16         8    0.050     21K   2560       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) 
Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   0.0     37.6     30.9      3.30              0.38        14    0.235     64K   7722       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     15.0      1.97              0.12        14    0.141       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      4.6      0.01              0.00         1    0.011       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 2400.0 total, 600.0 interval#012Flush(GB): cumulative 0.029, interval 0.008#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.13 GB write, 0.05 MB/s write, 0.12 GB read, 0.05 MB/s read, 5.3 seconds#012Interval compaction: 0.04 GB write, 0.07 MB/s write, 0.04 GB read, 0.07 MB/s read, 0.4 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55daa55431f0#2 capacity: 304.00 MB usage: 13.39 MB table_size: 0 occupancy: 18446744073709551615 collections: 5 last_copies: 0 last_secs: 0.000237 secs_since: 0#012Block cache entry stats(count,size,portion): 
DataBlock(856,12.88 MB,4.23537%) FilterBlock(30,185.05 KB,0.059444%) IndexBlock(30,341.62 KB,0.109743%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Oct  1 09:48:00 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1327: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:48:02 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 09:48:02 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1328: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:48:04 np0005464214 nova_compute[260022]: 2025-10-01 13:48:04.347 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 09:48:04 np0005464214 nova_compute[260022]: 2025-10-01 13:48:04.348 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 09:48:04 np0005464214 nova_compute[260022]: 2025-10-01 13:48:04.405 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 09:48:04 np0005464214 nova_compute[260022]: 2025-10-01 13:48:04.406 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 09:48:04 np0005464214 nova_compute[260022]: 2025-10-01 13:48:04.406 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 09:48:04 np0005464214 nova_compute[260022]: 2025-10-01 13:48:04.406 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  1 09:48:04 np0005464214 nova_compute[260022]: 2025-10-01 13:48:04.407 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 09:48:04 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  1 09:48:04 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3330084154' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  1 09:48:04 np0005464214 nova_compute[260022]: 2025-10-01 13:48:04.833 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.427s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 09:48:04 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1329: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:48:05 np0005464214 nova_compute[260022]: 2025-10-01 13:48:05.035 2 WARNING nova.virt.libvirt.driver [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  1 09:48:05 np0005464214 nova_compute[260022]: 2025-10-01 13:48:05.036 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5166MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  1 09:48:05 np0005464214 nova_compute[260022]: 2025-10-01 13:48:05.037 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 09:48:05 np0005464214 nova_compute[260022]: 2025-10-01 13:48:05.037 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 09:48:05 np0005464214 nova_compute[260022]: 2025-10-01 13:48:05.197 2 INFO nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Instance 3d91db2b-812b-47ea-a0f8-0384b4c68597 has allocations against this compute host but is not found in the database.#033[00m
Oct  1 09:48:05 np0005464214 nova_compute[260022]: 2025-10-01 13:48:05.198 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  1 09:48:05 np0005464214 nova_compute[260022]: 2025-10-01 13:48:05.198 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  1 09:48:05 np0005464214 nova_compute[260022]: 2025-10-01 13:48:05.241 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 09:48:05 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  1 09:48:05 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1788869825' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  1 09:48:05 np0005464214 nova_compute[260022]: 2025-10-01 13:48:05.678 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.437s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 09:48:05 np0005464214 nova_compute[260022]: 2025-10-01 13:48:05.684 2 DEBUG nova.compute.provider_tree [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Inventory has not changed in ProviderTree for provider: c1b9017d-7e6f-44ea-9ee2-bc19313d736f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  1 09:48:05 np0005464214 nova_compute[260022]: 2025-10-01 13:48:05.736 2 DEBUG nova.scheduler.client.report [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Inventory has not changed for provider c1b9017d-7e6f-44ea-9ee2-bc19313d736f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  1 09:48:05 np0005464214 nova_compute[260022]: 2025-10-01 13:48:05.737 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  1 09:48:05 np0005464214 nova_compute[260022]: 2025-10-01 13:48:05.738 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.701s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 09:48:06 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  1 09:48:06 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  1 09:48:06 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  1 09:48:06 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  1 09:48:06 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  1 09:48:06 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:48:06 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev 538c297f-918d-4018-a1fe-105fa1f0dea3 does not exist
Oct  1 09:48:06 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev 8bd8dcbf-17e8-4961-b308-143b5b67dc26 does not exist
Oct  1 09:48:06 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev 21917a58-3955-4263-ae13-03dcad7d966a does not exist
Oct  1 09:48:06 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  1 09:48:06 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  1 09:48:06 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  1 09:48:06 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  1 09:48:06 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  1 09:48:06 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  1 09:48:06 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1330: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:48:06 np0005464214 podman[281023]: 2025-10-01 13:48:06.91814909 +0000 UTC m=+0.049299497 container create 039739ad2e21c39cbe6793993b2f86b386aa455f071cd8c7a6165aeeaea99ec1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_mcnulty, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Oct  1 09:48:06 np0005464214 systemd[1]: Started libpod-conmon-039739ad2e21c39cbe6793993b2f86b386aa455f071cd8c7a6165aeeaea99ec1.scope.
Oct  1 09:48:06 np0005464214 podman[281023]: 2025-10-01 13:48:06.896104339 +0000 UTC m=+0.027254836 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 09:48:07 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:48:07 np0005464214 podman[281023]: 2025-10-01 13:48:07.035183998 +0000 UTC m=+0.166334495 container init 039739ad2e21c39cbe6793993b2f86b386aa455f071cd8c7a6165aeeaea99ec1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_mcnulty, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Oct  1 09:48:07 np0005464214 podman[281023]: 2025-10-01 13:48:07.047209361 +0000 UTC m=+0.178359808 container start 039739ad2e21c39cbe6793993b2f86b386aa455f071cd8c7a6165aeeaea99ec1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_mcnulty, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  1 09:48:07 np0005464214 podman[281023]: 2025-10-01 13:48:07.051455535 +0000 UTC m=+0.182605952 container attach 039739ad2e21c39cbe6793993b2f86b386aa455f071cd8c7a6165aeeaea99ec1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_mcnulty, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 09:48:07 np0005464214 upbeat_mcnulty[281040]: 167 167
Oct  1 09:48:07 np0005464214 systemd[1]: libpod-039739ad2e21c39cbe6793993b2f86b386aa455f071cd8c7a6165aeeaea99ec1.scope: Deactivated successfully.
Oct  1 09:48:07 np0005464214 conmon[281040]: conmon 039739ad2e21c39cbe67 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-039739ad2e21c39cbe6793993b2f86b386aa455f071cd8c7a6165aeeaea99ec1.scope/container/memory.events
Oct  1 09:48:07 np0005464214 podman[281023]: 2025-10-01 13:48:07.057773176 +0000 UTC m=+0.188923593 container died 039739ad2e21c39cbe6793993b2f86b386aa455f071cd8c7a6165aeeaea99ec1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_mcnulty, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 09:48:07 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  1 09:48:07 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:48:07 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  1 09:48:07 np0005464214 systemd[1]: var-lib-containers-storage-overlay-76664d89fa0670707b437d45ef5cc95128390940574cabdd27338878053d40e5-merged.mount: Deactivated successfully.
Oct  1 09:48:07 np0005464214 podman[281023]: 2025-10-01 13:48:07.11705218 +0000 UTC m=+0.248202627 container remove 039739ad2e21c39cbe6793993b2f86b386aa455f071cd8c7a6165aeeaea99ec1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_mcnulty, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2)
Oct  1 09:48:07 np0005464214 systemd[1]: libpod-conmon-039739ad2e21c39cbe6793993b2f86b386aa455f071cd8c7a6165aeeaea99ec1.scope: Deactivated successfully.
Oct  1 09:48:07 np0005464214 podman[281064]: 2025-10-01 13:48:07.354959699 +0000 UTC m=+0.074654932 container create 694a5613941080a5ef1abff8ebbe89d41dc48b240b0994cf1f8313ef30c18fe1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_clarke, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 09:48:07 np0005464214 systemd[1]: Started libpod-conmon-694a5613941080a5ef1abff8ebbe89d41dc48b240b0994cf1f8313ef30c18fe1.scope.
Oct  1 09:48:07 np0005464214 podman[281064]: 2025-10-01 13:48:07.323605393 +0000 UTC m=+0.043300726 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 09:48:07 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:48:07 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 09:48:07 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9b822c4c9a4eee0c7262ea5a312050075b5b7c698a10df5c1fb5e908a157d10/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 09:48:07 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9b822c4c9a4eee0c7262ea5a312050075b5b7c698a10df5c1fb5e908a157d10/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 09:48:07 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9b822c4c9a4eee0c7262ea5a312050075b5b7c698a10df5c1fb5e908a157d10/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 09:48:07 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9b822c4c9a4eee0c7262ea5a312050075b5b7c698a10df5c1fb5e908a157d10/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 09:48:07 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9b822c4c9a4eee0c7262ea5a312050075b5b7c698a10df5c1fb5e908a157d10/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  1 09:48:07 np0005464214 podman[281064]: 2025-10-01 13:48:07.471970787 +0000 UTC m=+0.191666040 container init 694a5613941080a5ef1abff8ebbe89d41dc48b240b0994cf1f8313ef30c18fe1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_clarke, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Oct  1 09:48:07 np0005464214 podman[281064]: 2025-10-01 13:48:07.486551031 +0000 UTC m=+0.206246304 container start 694a5613941080a5ef1abff8ebbe89d41dc48b240b0994cf1f8313ef30c18fe1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_clarke, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Oct  1 09:48:07 np0005464214 podman[281064]: 2025-10-01 13:48:07.491000632 +0000 UTC m=+0.210695915 container attach 694a5613941080a5ef1abff8ebbe89d41dc48b240b0994cf1f8313ef30c18fe1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_clarke, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Oct  1 09:48:08 np0005464214 objective_clarke[281081]: --> passed data devices: 0 physical, 3 LVM
Oct  1 09:48:08 np0005464214 objective_clarke[281081]: --> relative data size: 1.0
Oct  1 09:48:08 np0005464214 objective_clarke[281081]: --> All data devices are unavailable
Oct  1 09:48:08 np0005464214 systemd[1]: libpod-694a5613941080a5ef1abff8ebbe89d41dc48b240b0994cf1f8313ef30c18fe1.scope: Deactivated successfully.
Oct  1 09:48:08 np0005464214 systemd[1]: libpod-694a5613941080a5ef1abff8ebbe89d41dc48b240b0994cf1f8313ef30c18fe1.scope: Consumed 1.110s CPU time.
Oct  1 09:48:08 np0005464214 podman[281064]: 2025-10-01 13:48:08.649963708 +0000 UTC m=+1.369658951 container died 694a5613941080a5ef1abff8ebbe89d41dc48b240b0994cf1f8313ef30c18fe1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_clarke, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef)
Oct  1 09:48:08 np0005464214 systemd[1]: var-lib-containers-storage-overlay-c9b822c4c9a4eee0c7262ea5a312050075b5b7c698a10df5c1fb5e908a157d10-merged.mount: Deactivated successfully.
Oct  1 09:48:08 np0005464214 podman[281064]: 2025-10-01 13:48:08.706941088 +0000 UTC m=+1.426636341 container remove 694a5613941080a5ef1abff8ebbe89d41dc48b240b0994cf1f8313ef30c18fe1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_clarke, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 09:48:08 np0005464214 systemd[1]: libpod-conmon-694a5613941080a5ef1abff8ebbe89d41dc48b240b0994cf1f8313ef30c18fe1.scope: Deactivated successfully.
Oct  1 09:48:08 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1331: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:48:09 np0005464214 podman[281263]: 2025-10-01 13:48:09.538086878 +0000 UTC m=+0.068409154 container create f5e54a5a25554fed6db892617cdd77116d6c332c876e757a549188dc6bc13221 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_moore, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True)
Oct  1 09:48:09 np0005464214 systemd[1]: Started libpod-conmon-f5e54a5a25554fed6db892617cdd77116d6c332c876e757a549188dc6bc13221.scope.
Oct  1 09:48:09 np0005464214 podman[281263]: 2025-10-01 13:48:09.508372684 +0000 UTC m=+0.038695010 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 09:48:09 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:48:09 np0005464214 podman[281263]: 2025-10-01 13:48:09.641775013 +0000 UTC m=+0.172097319 container init f5e54a5a25554fed6db892617cdd77116d6c332c876e757a549188dc6bc13221 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_moore, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 09:48:09 np0005464214 podman[281263]: 2025-10-01 13:48:09.654879099 +0000 UTC m=+0.185201335 container start f5e54a5a25554fed6db892617cdd77116d6c332c876e757a549188dc6bc13221 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_moore, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct  1 09:48:09 np0005464214 podman[281263]: 2025-10-01 13:48:09.658405751 +0000 UTC m=+0.188728077 container attach f5e54a5a25554fed6db892617cdd77116d6c332c876e757a549188dc6bc13221 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_moore, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Oct  1 09:48:09 np0005464214 systemd[1]: libpod-f5e54a5a25554fed6db892617cdd77116d6c332c876e757a549188dc6bc13221.scope: Deactivated successfully.
Oct  1 09:48:09 np0005464214 brave_moore[281277]: 167 167
Oct  1 09:48:09 np0005464214 conmon[281277]: conmon f5e54a5a25554fed6db8 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-f5e54a5a25554fed6db892617cdd77116d6c332c876e757a549188dc6bc13221.scope/container/memory.events
Oct  1 09:48:09 np0005464214 podman[281263]: 2025-10-01 13:48:09.665232288 +0000 UTC m=+0.195554524 container died f5e54a5a25554fed6db892617cdd77116d6c332c876e757a549188dc6bc13221 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_moore, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct  1 09:48:09 np0005464214 systemd[1]: var-lib-containers-storage-overlay-cacd3208b32d25f56745411fd4412298766162860188cf1040112937c82e8a9f-merged.mount: Deactivated successfully.
Oct  1 09:48:09 np0005464214 podman[281263]: 2025-10-01 13:48:09.702038248 +0000 UTC m=+0.232360484 container remove f5e54a5a25554fed6db892617cdd77116d6c332c876e757a549188dc6bc13221 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_moore, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct  1 09:48:09 np0005464214 systemd[1]: libpod-conmon-f5e54a5a25554fed6db892617cdd77116d6c332c876e757a549188dc6bc13221.scope: Deactivated successfully.
Oct  1 09:48:09 np0005464214 nova_compute[260022]: 2025-10-01 13:48:09.732 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 09:48:09 np0005464214 nova_compute[260022]: 2025-10-01 13:48:09.735 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 09:48:09 np0005464214 nova_compute[260022]: 2025-10-01 13:48:09.736 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  1 09:48:09 np0005464214 podman[281305]: 2025-10-01 13:48:09.900787083 +0000 UTC m=+0.053829361 container create aa2f83019882c1a9a07f8923f926f189f548c732c04f0d4bdb6c738a4ed57053 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_torvalds, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 09:48:09 np0005464214 systemd[1]: Started libpod-conmon-aa2f83019882c1a9a07f8923f926f189f548c732c04f0d4bdb6c738a4ed57053.scope.
Oct  1 09:48:09 np0005464214 podman[281305]: 2025-10-01 13:48:09.874514278 +0000 UTC m=+0.027556596 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 09:48:09 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:48:09 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12fc9c13f38483578e1787b60be115611db11961bc79c6649929780bf813b41f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 09:48:09 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12fc9c13f38483578e1787b60be115611db11961bc79c6649929780bf813b41f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 09:48:09 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12fc9c13f38483578e1787b60be115611db11961bc79c6649929780bf813b41f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 09:48:09 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12fc9c13f38483578e1787b60be115611db11961bc79c6649929780bf813b41f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 09:48:10 np0005464214 podman[281305]: 2025-10-01 13:48:10.007386371 +0000 UTC m=+0.160428699 container init aa2f83019882c1a9a07f8923f926f189f548c732c04f0d4bdb6c738a4ed57053 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_torvalds, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 09:48:10 np0005464214 podman[281305]: 2025-10-01 13:48:10.018570626 +0000 UTC m=+0.171612904 container start aa2f83019882c1a9a07f8923f926f189f548c732c04f0d4bdb6c738a4ed57053 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_torvalds, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default)
Oct  1 09:48:10 np0005464214 podman[281305]: 2025-10-01 13:48:10.022787689 +0000 UTC m=+0.175829977 container attach aa2f83019882c1a9a07f8923f926f189f548c732c04f0d4bdb6c738a4ed57053 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_torvalds, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 09:48:10 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1332: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:48:10 np0005464214 youthful_torvalds[281320]: {
Oct  1 09:48:10 np0005464214 youthful_torvalds[281320]:    "0": [
Oct  1 09:48:10 np0005464214 youthful_torvalds[281320]:        {
Oct  1 09:48:10 np0005464214 youthful_torvalds[281320]:            "devices": [
Oct  1 09:48:10 np0005464214 youthful_torvalds[281320]:                "/dev/loop3"
Oct  1 09:48:10 np0005464214 youthful_torvalds[281320]:            ],
Oct  1 09:48:10 np0005464214 youthful_torvalds[281320]:            "lv_name": "ceph_lv0",
Oct  1 09:48:10 np0005464214 youthful_torvalds[281320]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  1 09:48:10 np0005464214 youthful_torvalds[281320]:            "lv_size": "21470642176",
Oct  1 09:48:10 np0005464214 youthful_torvalds[281320]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 09:48:10 np0005464214 youthful_torvalds[281320]:            "lv_uuid": "wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS",
Oct  1 09:48:10 np0005464214 youthful_torvalds[281320]:            "name": "ceph_lv0",
Oct  1 09:48:10 np0005464214 youthful_torvalds[281320]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  1 09:48:10 np0005464214 youthful_torvalds[281320]:            "tags": {
Oct  1 09:48:10 np0005464214 youthful_torvalds[281320]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  1 09:48:10 np0005464214 youthful_torvalds[281320]:                "ceph.block_uuid": "wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS",
Oct  1 09:48:10 np0005464214 youthful_torvalds[281320]:                "ceph.cephx_lockbox_secret": "",
Oct  1 09:48:10 np0005464214 youthful_torvalds[281320]:                "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 09:48:10 np0005464214 youthful_torvalds[281320]:                "ceph.cluster_name": "ceph",
Oct  1 09:48:10 np0005464214 youthful_torvalds[281320]:                "ceph.crush_device_class": "",
Oct  1 09:48:10 np0005464214 youthful_torvalds[281320]:                "ceph.encrypted": "0",
Oct  1 09:48:10 np0005464214 youthful_torvalds[281320]:                "ceph.osd_fsid": "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982",
Oct  1 09:48:10 np0005464214 youthful_torvalds[281320]:                "ceph.osd_id": "0",
Oct  1 09:48:10 np0005464214 youthful_torvalds[281320]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 09:48:10 np0005464214 youthful_torvalds[281320]:                "ceph.type": "block",
Oct  1 09:48:10 np0005464214 youthful_torvalds[281320]:                "ceph.vdo": "0"
Oct  1 09:48:10 np0005464214 youthful_torvalds[281320]:            },
Oct  1 09:48:10 np0005464214 youthful_torvalds[281320]:            "type": "block",
Oct  1 09:48:10 np0005464214 youthful_torvalds[281320]:            "vg_name": "ceph_vg0"
Oct  1 09:48:10 np0005464214 youthful_torvalds[281320]:        }
Oct  1 09:48:10 np0005464214 youthful_torvalds[281320]:    ],
Oct  1 09:48:10 np0005464214 youthful_torvalds[281320]:    "1": [
Oct  1 09:48:10 np0005464214 youthful_torvalds[281320]:        {
Oct  1 09:48:10 np0005464214 youthful_torvalds[281320]:            "devices": [
Oct  1 09:48:10 np0005464214 youthful_torvalds[281320]:                "/dev/loop4"
Oct  1 09:48:10 np0005464214 youthful_torvalds[281320]:            ],
Oct  1 09:48:10 np0005464214 youthful_torvalds[281320]:            "lv_name": "ceph_lv1",
Oct  1 09:48:10 np0005464214 youthful_torvalds[281320]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  1 09:48:10 np0005464214 youthful_torvalds[281320]:            "lv_size": "21470642176",
Oct  1 09:48:10 np0005464214 youthful_torvalds[281320]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f5852bc7-e830-489a-b8a9-42dfbbe71426,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 09:48:10 np0005464214 youthful_torvalds[281320]:            "lv_uuid": "BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY",
Oct  1 09:48:10 np0005464214 youthful_torvalds[281320]:            "name": "ceph_lv1",
Oct  1 09:48:10 np0005464214 youthful_torvalds[281320]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  1 09:48:10 np0005464214 youthful_torvalds[281320]:            "tags": {
Oct  1 09:48:10 np0005464214 youthful_torvalds[281320]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  1 09:48:10 np0005464214 youthful_torvalds[281320]:                "ceph.block_uuid": "BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY",
Oct  1 09:48:10 np0005464214 youthful_torvalds[281320]:                "ceph.cephx_lockbox_secret": "",
Oct  1 09:48:10 np0005464214 youthful_torvalds[281320]:                "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 09:48:10 np0005464214 youthful_torvalds[281320]:                "ceph.cluster_name": "ceph",
Oct  1 09:48:10 np0005464214 youthful_torvalds[281320]:                "ceph.crush_device_class": "",
Oct  1 09:48:10 np0005464214 youthful_torvalds[281320]:                "ceph.encrypted": "0",
Oct  1 09:48:10 np0005464214 youthful_torvalds[281320]:                "ceph.osd_fsid": "f5852bc7-e830-489a-b8a9-42dfbbe71426",
Oct  1 09:48:10 np0005464214 youthful_torvalds[281320]:                "ceph.osd_id": "1",
Oct  1 09:48:10 np0005464214 youthful_torvalds[281320]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 09:48:10 np0005464214 youthful_torvalds[281320]:                "ceph.type": "block",
Oct  1 09:48:10 np0005464214 youthful_torvalds[281320]:                "ceph.vdo": "0"
Oct  1 09:48:10 np0005464214 youthful_torvalds[281320]:            },
Oct  1 09:48:10 np0005464214 youthful_torvalds[281320]:            "type": "block",
Oct  1 09:48:10 np0005464214 youthful_torvalds[281320]:            "vg_name": "ceph_vg1"
Oct  1 09:48:10 np0005464214 youthful_torvalds[281320]:        }
Oct  1 09:48:10 np0005464214 youthful_torvalds[281320]:    ],
Oct  1 09:48:10 np0005464214 youthful_torvalds[281320]:    "2": [
Oct  1 09:48:10 np0005464214 youthful_torvalds[281320]:        {
Oct  1 09:48:10 np0005464214 youthful_torvalds[281320]:            "devices": [
Oct  1 09:48:10 np0005464214 youthful_torvalds[281320]:                "/dev/loop5"
Oct  1 09:48:10 np0005464214 youthful_torvalds[281320]:            ],
Oct  1 09:48:10 np0005464214 youthful_torvalds[281320]:            "lv_name": "ceph_lv2",
Oct  1 09:48:10 np0005464214 youthful_torvalds[281320]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  1 09:48:10 np0005464214 youthful_torvalds[281320]:            "lv_size": "21470642176",
Oct  1 09:48:10 np0005464214 youthful_torvalds[281320]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c4c937e2-a8a8-47c3-af37-fdedb6fff1f9,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 09:48:10 np0005464214 youthful_torvalds[281320]:            "lv_uuid": "mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK",
Oct  1 09:48:10 np0005464214 youthful_torvalds[281320]:            "name": "ceph_lv2",
Oct  1 09:48:10 np0005464214 youthful_torvalds[281320]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  1 09:48:10 np0005464214 youthful_torvalds[281320]:            "tags": {
Oct  1 09:48:10 np0005464214 youthful_torvalds[281320]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  1 09:48:10 np0005464214 youthful_torvalds[281320]:                "ceph.block_uuid": "mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK",
Oct  1 09:48:10 np0005464214 youthful_torvalds[281320]:                "ceph.cephx_lockbox_secret": "",
Oct  1 09:48:10 np0005464214 youthful_torvalds[281320]:                "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 09:48:10 np0005464214 youthful_torvalds[281320]:                "ceph.cluster_name": "ceph",
Oct  1 09:48:10 np0005464214 youthful_torvalds[281320]:                "ceph.crush_device_class": "",
Oct  1 09:48:10 np0005464214 youthful_torvalds[281320]:                "ceph.encrypted": "0",
Oct  1 09:48:10 np0005464214 youthful_torvalds[281320]:                "ceph.osd_fsid": "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9",
Oct  1 09:48:10 np0005464214 youthful_torvalds[281320]:                "ceph.osd_id": "2",
Oct  1 09:48:10 np0005464214 youthful_torvalds[281320]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 09:48:10 np0005464214 youthful_torvalds[281320]:                "ceph.type": "block",
Oct  1 09:48:10 np0005464214 youthful_torvalds[281320]:                "ceph.vdo": "0"
Oct  1 09:48:10 np0005464214 youthful_torvalds[281320]:            },
Oct  1 09:48:10 np0005464214 youthful_torvalds[281320]:            "type": "block",
Oct  1 09:48:10 np0005464214 youthful_torvalds[281320]:            "vg_name": "ceph_vg2"
Oct  1 09:48:10 np0005464214 youthful_torvalds[281320]:        }
Oct  1 09:48:10 np0005464214 youthful_torvalds[281320]:    ]
Oct  1 09:48:10 np0005464214 youthful_torvalds[281320]: }
Oct  1 09:48:10 np0005464214 systemd[1]: libpod-aa2f83019882c1a9a07f8923f926f189f548c732c04f0d4bdb6c738a4ed57053.scope: Deactivated successfully.
Oct  1 09:48:10 np0005464214 podman[281305]: 2025-10-01 13:48:10.898813055 +0000 UTC m=+1.051855333 container died aa2f83019882c1a9a07f8923f926f189f548c732c04f0d4bdb6c738a4ed57053 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_torvalds, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Oct  1 09:48:10 np0005464214 systemd[1]: var-lib-containers-storage-overlay-12fc9c13f38483578e1787b60be115611db11961bc79c6649929780bf813b41f-merged.mount: Deactivated successfully.
Oct  1 09:48:10 np0005464214 podman[281305]: 2025-10-01 13:48:10.973025294 +0000 UTC m=+1.126067542 container remove aa2f83019882c1a9a07f8923f926f189f548c732c04f0d4bdb6c738a4ed57053 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_torvalds, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Oct  1 09:48:10 np0005464214 systemd[1]: libpod-conmon-aa2f83019882c1a9a07f8923f926f189f548c732c04f0d4bdb6c738a4ed57053.scope: Deactivated successfully.
Oct  1 09:48:11 np0005464214 nova_compute[260022]: 2025-10-01 13:48:11.345 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 09:48:11 np0005464214 nova_compute[260022]: 2025-10-01 13:48:11.347 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  1 09:48:11 np0005464214 nova_compute[260022]: 2025-10-01 13:48:11.347 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  1 09:48:11 np0005464214 nova_compute[260022]: 2025-10-01 13:48:11.365 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  1 09:48:11 np0005464214 nova_compute[260022]: 2025-10-01 13:48:11.365 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 09:48:11 np0005464214 podman[281484]: 2025-10-01 13:48:11.755272609 +0000 UTC m=+0.044637259 container create d222ff1ee8543e4742041b44447c87de7e857296b3993a533452bac7addf311f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_hertz, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 09:48:11 np0005464214 systemd[1]: Started libpod-conmon-d222ff1ee8543e4742041b44447c87de7e857296b3993a533452bac7addf311f.scope.
Oct  1 09:48:11 np0005464214 podman[281484]: 2025-10-01 13:48:11.736635697 +0000 UTC m=+0.026000357 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 09:48:11 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:48:11 np0005464214 podman[281484]: 2025-10-01 13:48:11.85381419 +0000 UTC m=+0.143178870 container init d222ff1ee8543e4742041b44447c87de7e857296b3993a533452bac7addf311f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_hertz, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 09:48:11 np0005464214 podman[281484]: 2025-10-01 13:48:11.864723597 +0000 UTC m=+0.154088237 container start d222ff1ee8543e4742041b44447c87de7e857296b3993a533452bac7addf311f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_hertz, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default)
Oct  1 09:48:11 np0005464214 podman[281484]: 2025-10-01 13:48:11.868844958 +0000 UTC m=+0.158209618 container attach d222ff1ee8543e4742041b44447c87de7e857296b3993a533452bac7addf311f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_hertz, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct  1 09:48:11 np0005464214 kind_hertz[281500]: 167 167
Oct  1 09:48:11 np0005464214 systemd[1]: libpod-d222ff1ee8543e4742041b44447c87de7e857296b3993a533452bac7addf311f.scope: Deactivated successfully.
Oct  1 09:48:11 np0005464214 podman[281484]: 2025-10-01 13:48:11.873584149 +0000 UTC m=+0.162948789 container died d222ff1ee8543e4742041b44447c87de7e857296b3993a533452bac7addf311f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_hertz, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct  1 09:48:11 np0005464214 systemd[1]: var-lib-containers-storage-overlay-a3aa24f3b760d9da1e44bec6559f8f198bd2df160d24aa81fb0845d35dbe3aaa-merged.mount: Deactivated successfully.
Oct  1 09:48:11 np0005464214 podman[281484]: 2025-10-01 13:48:11.929695122 +0000 UTC m=+0.219059742 container remove d222ff1ee8543e4742041b44447c87de7e857296b3993a533452bac7addf311f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_hertz, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 09:48:11 np0005464214 systemd[1]: libpod-conmon-d222ff1ee8543e4742041b44447c87de7e857296b3993a533452bac7addf311f.scope: Deactivated successfully.
Oct  1 09:48:12 np0005464214 podman[281524]: 2025-10-01 13:48:12.16885102 +0000 UTC m=+0.048136470 container create e8f7f3c2626baa88921047cc968c75bd9dd91ca4f6c7fbb0109944967d169b29 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_swanson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct  1 09:48:12 np0005464214 systemd[1]: Started libpod-conmon-e8f7f3c2626baa88921047cc968c75bd9dd91ca4f6c7fbb0109944967d169b29.scope.
Oct  1 09:48:12 np0005464214 podman[281524]: 2025-10-01 13:48:12.147377798 +0000 UTC m=+0.026663228 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 09:48:12 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:48:12 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb4b5d29ada6c7e6f7cb665ce697a3f1a25e7333e36858b04144a3ced7a983e9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 09:48:12 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb4b5d29ada6c7e6f7cb665ce697a3f1a25e7333e36858b04144a3ced7a983e9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 09:48:12 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb4b5d29ada6c7e6f7cb665ce697a3f1a25e7333e36858b04144a3ced7a983e9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 09:48:12 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb4b5d29ada6c7e6f7cb665ce697a3f1a25e7333e36858b04144a3ced7a983e9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 09:48:12 np0005464214 podman[281524]: 2025-10-01 13:48:12.270766879 +0000 UTC m=+0.150052369 container init e8f7f3c2626baa88921047cc968c75bd9dd91ca4f6c7fbb0109944967d169b29 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_swanson, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 09:48:12 np0005464214 podman[281524]: 2025-10-01 13:48:12.283009058 +0000 UTC m=+0.162294458 container start e8f7f3c2626baa88921047cc968c75bd9dd91ca4f6c7fbb0109944967d169b29 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_swanson, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 09:48:12 np0005464214 podman[281524]: 2025-10-01 13:48:12.287960715 +0000 UTC m=+0.167246165 container attach e8f7f3c2626baa88921047cc968c75bd9dd91ca4f6c7fbb0109944967d169b29 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_swanson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 09:48:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:48:12.316 161890 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 09:48:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:48:12.318 161890 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 09:48:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:48:12.318 161890 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 09:48:12 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 09:48:12 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1333: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:48:13 np0005464214 recursing_swanson[281541]: {
Oct  1 09:48:13 np0005464214 recursing_swanson[281541]:    "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982": {
Oct  1 09:48:13 np0005464214 recursing_swanson[281541]:        "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 09:48:13 np0005464214 recursing_swanson[281541]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  1 09:48:13 np0005464214 recursing_swanson[281541]:        "osd_id": 0,
Oct  1 09:48:13 np0005464214 recursing_swanson[281541]:        "osd_uuid": "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982",
Oct  1 09:48:13 np0005464214 recursing_swanson[281541]:        "type": "bluestore"
Oct  1 09:48:13 np0005464214 recursing_swanson[281541]:    },
Oct  1 09:48:13 np0005464214 recursing_swanson[281541]:    "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9": {
Oct  1 09:48:13 np0005464214 recursing_swanson[281541]:        "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 09:48:13 np0005464214 recursing_swanson[281541]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  1 09:48:13 np0005464214 recursing_swanson[281541]:        "osd_id": 2,
Oct  1 09:48:13 np0005464214 recursing_swanson[281541]:        "osd_uuid": "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9",
Oct  1 09:48:13 np0005464214 recursing_swanson[281541]:        "type": "bluestore"
Oct  1 09:48:13 np0005464214 recursing_swanson[281541]:    },
Oct  1 09:48:13 np0005464214 recursing_swanson[281541]:    "f5852bc7-e830-489a-b8a9-42dfbbe71426": {
Oct  1 09:48:13 np0005464214 recursing_swanson[281541]:        "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 09:48:13 np0005464214 recursing_swanson[281541]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  1 09:48:13 np0005464214 recursing_swanson[281541]:        "osd_id": 1,
Oct  1 09:48:13 np0005464214 recursing_swanson[281541]:        "osd_uuid": "f5852bc7-e830-489a-b8a9-42dfbbe71426",
Oct  1 09:48:13 np0005464214 recursing_swanson[281541]:        "type": "bluestore"
Oct  1 09:48:13 np0005464214 recursing_swanson[281541]:    }
Oct  1 09:48:13 np0005464214 recursing_swanson[281541]: }
Oct  1 09:48:13 np0005464214 systemd[1]: libpod-e8f7f3c2626baa88921047cc968c75bd9dd91ca4f6c7fbb0109944967d169b29.scope: Deactivated successfully.
Oct  1 09:48:13 np0005464214 podman[281524]: 2025-10-01 13:48:13.413814919 +0000 UTC m=+1.293100339 container died e8f7f3c2626baa88921047cc968c75bd9dd91ca4f6c7fbb0109944967d169b29 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_swanson, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 09:48:13 np0005464214 systemd[1]: libpod-e8f7f3c2626baa88921047cc968c75bd9dd91ca4f6c7fbb0109944967d169b29.scope: Consumed 1.134s CPU time.
Oct  1 09:48:13 np0005464214 systemd[1]: var-lib-containers-storage-overlay-fb4b5d29ada6c7e6f7cb665ce697a3f1a25e7333e36858b04144a3ced7a983e9-merged.mount: Deactivated successfully.
Oct  1 09:48:13 np0005464214 podman[281524]: 2025-10-01 13:48:13.48401165 +0000 UTC m=+1.363297070 container remove e8f7f3c2626baa88921047cc968c75bd9dd91ca4f6c7fbb0109944967d169b29 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_swanson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 09:48:13 np0005464214 systemd[1]: libpod-conmon-e8f7f3c2626baa88921047cc968c75bd9dd91ca4f6c7fbb0109944967d169b29.scope: Deactivated successfully.
Oct  1 09:48:13 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  1 09:48:13 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:48:13 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  1 09:48:13 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:48:13 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev e55e401f-fd6c-4af4-acd6-5f6d6d2cae2d does not exist
Oct  1 09:48:13 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev c4735f57-6298-47a9-878f-1eacf6fe1d8c does not exist
Oct  1 09:48:14 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:48:14 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:48:14 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1334: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:48:15 np0005464214 nova_compute[260022]: 2025-10-01 13:48:15.345 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 09:48:15 np0005464214 nova_compute[260022]: 2025-10-01 13:48:15.345 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 09:48:16 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1335: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:48:17 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 09:48:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 09:48:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 09:48:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 09:48:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 09:48:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 09:48:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 09:48:18 np0005464214 nova_compute[260022]: 2025-10-01 13:48:18.346 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 09:48:18 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1336: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:48:20 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1337: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:48:22 np0005464214 nova_compute[260022]: 2025-10-01 13:48:22.341 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 09:48:22 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 09:48:22 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1338: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:48:24 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1339: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:48:26 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1340: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:48:27 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 09:48:28 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1341: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:48:29 np0005464214 podman[281640]: 2025-10-01 13:48:29.557108608 +0000 UTC m=+0.089960409 container health_status c13c85560319b246b9406aed1b6b3015b2c424d3ffff37d0301b6634f5976b0d (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20250923, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=iscsid, managed_by=edpm_ansible, tcib_build_tag=36bccb96575468ec919301205d8daa2c, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team)
Oct  1 09:48:29 np0005464214 podman[281639]: 2025-10-01 13:48:29.558999408 +0000 UTC m=+0.098591154 container health_status a1dc94033b3cdaf0c1939938a446bc6911aa0e2bf0fa0c22c3e618390e8d96e1 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true, container_name=multipathd, org.label-schema.vendor=CentOS, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20250923, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 09:48:29 np0005464214 podman[281641]: 2025-10-01 13:48:29.589791967 +0000 UTC m=+0.116677219 container health_status dfa509645741d50db256c392f2c7e50ef8a1653f296eb853aefbe3d2ba4746c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.build-date=20250923, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct  1 09:48:29 np0005464214 podman[281638]: 2025-10-01 13:48:29.613003444 +0000 UTC m=+0.154651545 container health_status 583cde33fa111e3f81ff9a7adf51b7bee43cd123d54d60383b2d474b3b5066ad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250923, org.label-schema.schema-version=1.0, tcib_build_tag=36bccb96575468ec919301205d8daa2c, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct  1 09:48:30 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1342: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:48:32 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 09:48:32 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1343: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:48:32 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:48:32.887 161890 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=9, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'd2:60:33', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '86:05:a5:2a:6f:f1'}, ipsec=False) old=SB_Global(nb_cfg=8) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  1 09:48:32 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:48:32.888 161890 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  1 09:48:34 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1344: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:48:36 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1345: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:48:37 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 09:48:38 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1346: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:48:40 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1347: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:48:42 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 09:48:42 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1348: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:48:42 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:48:42.890 161890 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=7280030e-2ba6-406c-9fae-f8284a927c47, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '9'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  1 09:48:44 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1349: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:48:46 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1350: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:48:47 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 09:48:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 09:48:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 09:48:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 09:48:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 09:48:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 09:48:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 09:48:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] Optimize plan auto_2025-10-01_13:48:47
Oct  1 09:48:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  1 09:48:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] do_upmap
Oct  1 09:48:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] pools ['default.rgw.control', 'vms', '.mgr', 'cephfs.cephfs.data', 'backups', 'default.rgw.meta', 'images', 'volumes', 'default.rgw.log', '.rgw.root', 'cephfs.cephfs.meta']
Oct  1 09:48:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] prepared 0/10 changes
Oct  1 09:48:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  1 09:48:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  1 09:48:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  1 09:48:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  1 09:48:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  1 09:48:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  1 09:48:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  1 09:48:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  1 09:48:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  1 09:48:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  1 09:48:48 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1351: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:48:50 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1352: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:48:52 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 09:48:52 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1353: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:48:54 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1354: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:48:55 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  1 09:48:55 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3922701101' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  1 09:48:55 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  1 09:48:55 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3922701101' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  1 09:48:56 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1355: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:48:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] _maybe_adjust
Oct  1 09:48:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:48:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  1 09:48:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:48:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 09:48:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:48:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 09:48:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:48:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 09:48:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:48:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Oct  1 09:48:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:48:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct  1 09:48:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:48:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 09:48:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:48:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  1 09:48:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:48:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  1 09:48:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:48:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 09:48:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:48:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  1 09:48:57 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 09:48:58 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1356: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:49:00 np0005464214 podman[281731]: 2025-10-01 13:49:00.533557125 +0000 UTC m=+0.059201672 container health_status dfa509645741d50db256c392f2c7e50ef8a1653f296eb853aefbe3d2ba4746c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20250923, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true)
Oct  1 09:49:00 np0005464214 podman[281719]: 2025-10-01 13:49:00.539654059 +0000 UTC m=+0.085135967 container health_status a1dc94033b3cdaf0c1939938a446bc6911aa0e2bf0fa0c22c3e618390e8d96e1 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=36bccb96575468ec919301205d8daa2c, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250923, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct  1 09:49:00 np0005464214 podman[281725]: 2025-10-01 13:49:00.540225006 +0000 UTC m=+0.075091487 container health_status c13c85560319b246b9406aed1b6b3015b2c424d3ffff37d0301b6634f5976b0d (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20250923, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, container_name=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c)
Oct  1 09:49:00 np0005464214 podman[281718]: 2025-10-01 13:49:00.591587459 +0000 UTC m=+0.143565734 container health_status 583cde33fa111e3f81ff9a7adf51b7bee43cd123d54d60383b2d474b3b5066ad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250923, org.label-schema.vendor=CentOS)
Oct  1 09:49:00 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1357: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:49:02 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 09:49:02 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1358: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:49:04 np0005464214 nova_compute[260022]: 2025-10-01 13:49:04.345 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 09:49:04 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1359: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:49:06 np0005464214 nova_compute[260022]: 2025-10-01 13:49:06.345 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 09:49:06 np0005464214 nova_compute[260022]: 2025-10-01 13:49:06.377 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 09:49:06 np0005464214 nova_compute[260022]: 2025-10-01 13:49:06.378 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 09:49:06 np0005464214 nova_compute[260022]: 2025-10-01 13:49:06.378 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 09:49:06 np0005464214 nova_compute[260022]: 2025-10-01 13:49:06.378 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  1 09:49:06 np0005464214 nova_compute[260022]: 2025-10-01 13:49:06.379 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 09:49:06 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  1 09:49:06 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2231492092' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  1 09:49:06 np0005464214 nova_compute[260022]: 2025-10-01 13:49:06.832 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.453s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 09:49:06 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1360: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:49:07 np0005464214 nova_compute[260022]: 2025-10-01 13:49:07.023 2 WARNING nova.virt.libvirt.driver [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  1 09:49:07 np0005464214 nova_compute[260022]: 2025-10-01 13:49:07.024 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5141MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  1 09:49:07 np0005464214 nova_compute[260022]: 2025-10-01 13:49:07.024 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 09:49:07 np0005464214 nova_compute[260022]: 2025-10-01 13:49:07.024 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 09:49:07 np0005464214 nova_compute[260022]: 2025-10-01 13:49:07.200 2 INFO nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Instance 3d91db2b-812b-47ea-a0f8-0384b4c68597 has allocations against this compute host but is not found in the database.#033[00m
Oct  1 09:49:07 np0005464214 nova_compute[260022]: 2025-10-01 13:49:07.201 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  1 09:49:07 np0005464214 nova_compute[260022]: 2025-10-01 13:49:07.201 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  1 09:49:07 np0005464214 nova_compute[260022]: 2025-10-01 13:49:07.253 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 09:49:07 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 09:49:07 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  1 09:49:07 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2426256805' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  1 09:49:07 np0005464214 nova_compute[260022]: 2025-10-01 13:49:07.740 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.486s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 09:49:07 np0005464214 nova_compute[260022]: 2025-10-01 13:49:07.748 2 DEBUG nova.compute.provider_tree [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Inventory has not changed in ProviderTree for provider: c1b9017d-7e6f-44ea-9ee2-bc19313d736f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  1 09:49:07 np0005464214 nova_compute[260022]: 2025-10-01 13:49:07.829 2 DEBUG nova.scheduler.client.report [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Inventory has not changed for provider c1b9017d-7e6f-44ea-9ee2-bc19313d736f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  1 09:49:07 np0005464214 nova_compute[260022]: 2025-10-01 13:49:07.831 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  1 09:49:07 np0005464214 nova_compute[260022]: 2025-10-01 13:49:07.831 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.807s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 09:49:08 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1361: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:49:10 np0005464214 nova_compute[260022]: 2025-10-01 13:49:10.827 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 09:49:10 np0005464214 nova_compute[260022]: 2025-10-01 13:49:10.827 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 09:49:10 np0005464214 nova_compute[260022]: 2025-10-01 13:49:10.828 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  1 09:49:10 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1362: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:49:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:49:12.317 161890 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 09:49:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:49:12.317 161890 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 09:49:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:49:12.317 161890 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 09:49:12 np0005464214 nova_compute[260022]: 2025-10-01 13:49:12.345 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 09:49:12 np0005464214 nova_compute[260022]: 2025-10-01 13:49:12.346 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  1 09:49:12 np0005464214 nova_compute[260022]: 2025-10-01 13:49:12.346 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  1 09:49:12 np0005464214 nova_compute[260022]: 2025-10-01 13:49:12.363 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  1 09:49:12 np0005464214 nova_compute[260022]: 2025-10-01 13:49:12.364 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 09:49:12 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 09:49:12 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1363: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:49:14 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  1 09:49:14 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  1 09:49:14 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  1 09:49:14 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  1 09:49:14 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  1 09:49:14 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:49:14 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev d41a8666-32d5-4677-ad76-7c691637dd21 does not exist
Oct  1 09:49:14 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev 66a27645-a620-49f2-a271-05994af9096e does not exist
Oct  1 09:49:14 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev ae33c732-8835-4d24-8ce0-b60b40facfba does not exist
Oct  1 09:49:14 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  1 09:49:14 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  1 09:49:14 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  1 09:49:14 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  1 09:49:14 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  1 09:49:14 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  1 09:49:14 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1364: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:49:15 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  1 09:49:15 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:49:15 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  1 09:49:15 np0005464214 nova_compute[260022]: 2025-10-01 13:49:15.345 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 09:49:15 np0005464214 podman[282110]: 2025-10-01 13:49:15.418336657 +0000 UTC m=+0.023366724 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 09:49:15 np0005464214 podman[282110]: 2025-10-01 13:49:15.546320704 +0000 UTC m=+0.151350751 container create 1d88e586997392dbde2019ca1445246d4cf73ff93fd566db60f62e4aca9ce873 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_bose, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 09:49:15 np0005464214 systemd[1]: Started libpod-conmon-1d88e586997392dbde2019ca1445246d4cf73ff93fd566db60f62e4aca9ce873.scope.
Oct  1 09:49:15 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:49:15 np0005464214 podman[282110]: 2025-10-01 13:49:15.775079192 +0000 UTC m=+0.380109249 container init 1d88e586997392dbde2019ca1445246d4cf73ff93fd566db60f62e4aca9ce873 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_bose, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 09:49:15 np0005464214 podman[282110]: 2025-10-01 13:49:15.790573835 +0000 UTC m=+0.395603892 container start 1d88e586997392dbde2019ca1445246d4cf73ff93fd566db60f62e4aca9ce873 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_bose, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 09:49:15 np0005464214 bold_bose[282126]: 167 167
Oct  1 09:49:15 np0005464214 systemd[1]: libpod-1d88e586997392dbde2019ca1445246d4cf73ff93fd566db60f62e4aca9ce873.scope: Deactivated successfully.
Oct  1 09:49:15 np0005464214 podman[282110]: 2025-10-01 13:49:15.911352943 +0000 UTC m=+0.516383050 container attach 1d88e586997392dbde2019ca1445246d4cf73ff93fd566db60f62e4aca9ce873 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_bose, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Oct  1 09:49:15 np0005464214 podman[282110]: 2025-10-01 13:49:15.913687677 +0000 UTC m=+0.518717734 container died 1d88e586997392dbde2019ca1445246d4cf73ff93fd566db60f62e4aca9ce873 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_bose, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 09:49:16 np0005464214 systemd[1]: var-lib-containers-storage-overlay-04bb906769aaeb94c5ff42988d1286618a846bceb1e627db4a2551ee38aa59d9-merged.mount: Deactivated successfully.
Oct  1 09:49:16 np0005464214 podman[282110]: 2025-10-01 13:49:16.330112339 +0000 UTC m=+0.935142376 container remove 1d88e586997392dbde2019ca1445246d4cf73ff93fd566db60f62e4aca9ce873 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_bose, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct  1 09:49:16 np0005464214 nova_compute[260022]: 2025-10-01 13:49:16.344 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 09:49:16 np0005464214 systemd[1]: libpod-conmon-1d88e586997392dbde2019ca1445246d4cf73ff93fd566db60f62e4aca9ce873.scope: Deactivated successfully.
Oct  1 09:49:16 np0005464214 podman[282149]: 2025-10-01 13:49:16.561842922 +0000 UTC m=+0.031911505 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 09:49:16 np0005464214 podman[282149]: 2025-10-01 13:49:16.684481389 +0000 UTC m=+0.154549922 container create 6ad6b693025646469263f1dc687fab979f5e5f2d1e3662bde1d34914944985d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_beaver, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 09:49:16 np0005464214 systemd[1]: Started libpod-conmon-6ad6b693025646469263f1dc687fab979f5e5f2d1e3662bde1d34914944985d8.scope.
Oct  1 09:49:16 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:49:16 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e0f15632898112061c7e6a7fff253c152def49e5b521b0d4fb21a1f6c71a827/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 09:49:16 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e0f15632898112061c7e6a7fff253c152def49e5b521b0d4fb21a1f6c71a827/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 09:49:16 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e0f15632898112061c7e6a7fff253c152def49e5b521b0d4fb21a1f6c71a827/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 09:49:16 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e0f15632898112061c7e6a7fff253c152def49e5b521b0d4fb21a1f6c71a827/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 09:49:16 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e0f15632898112061c7e6a7fff253c152def49e5b521b0d4fb21a1f6c71a827/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  1 09:49:16 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1365: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:49:16 np0005464214 podman[282149]: 2025-10-01 13:49:16.946899727 +0000 UTC m=+0.416968280 container init 6ad6b693025646469263f1dc687fab979f5e5f2d1e3662bde1d34914944985d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_beaver, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 09:49:16 np0005464214 podman[282149]: 2025-10-01 13:49:16.954684855 +0000 UTC m=+0.424753358 container start 6ad6b693025646469263f1dc687fab979f5e5f2d1e3662bde1d34914944985d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_beaver, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct  1 09:49:17 np0005464214 podman[282149]: 2025-10-01 13:49:17.079024276 +0000 UTC m=+0.549092779 container attach 6ad6b693025646469263f1dc687fab979f5e5f2d1e3662bde1d34914944985d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_beaver, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 09:49:17 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 09:49:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 09:49:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 09:49:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 09:49:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 09:49:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 09:49:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 09:49:18 np0005464214 jolly_beaver[282166]: --> passed data devices: 0 physical, 3 LVM
Oct  1 09:49:18 np0005464214 jolly_beaver[282166]: --> relative data size: 1.0
Oct  1 09:49:18 np0005464214 jolly_beaver[282166]: --> All data devices are unavailable
Oct  1 09:49:18 np0005464214 systemd[1]: libpod-6ad6b693025646469263f1dc687fab979f5e5f2d1e3662bde1d34914944985d8.scope: Deactivated successfully.
Oct  1 09:49:18 np0005464214 systemd[1]: libpod-6ad6b693025646469263f1dc687fab979f5e5f2d1e3662bde1d34914944985d8.scope: Consumed 1.159s CPU time.
Oct  1 09:49:18 np0005464214 podman[282149]: 2025-10-01 13:49:18.161477951 +0000 UTC m=+1.631546554 container died 6ad6b693025646469263f1dc687fab979f5e5f2d1e3662bde1d34914944985d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_beaver, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2)
Oct  1 09:49:18 np0005464214 systemd[1]: var-lib-containers-storage-overlay-7e0f15632898112061c7e6a7fff253c152def49e5b521b0d4fb21a1f6c71a827-merged.mount: Deactivated successfully.
Oct  1 09:49:18 np0005464214 podman[282149]: 2025-10-01 13:49:18.23983532 +0000 UTC m=+1.709903843 container remove 6ad6b693025646469263f1dc687fab979f5e5f2d1e3662bde1d34914944985d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_beaver, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  1 09:49:18 np0005464214 systemd[1]: libpod-conmon-6ad6b693025646469263f1dc687fab979f5e5f2d1e3662bde1d34914944985d8.scope: Deactivated successfully.
Oct  1 09:49:18 np0005464214 nova_compute[260022]: 2025-10-01 13:49:18.345 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 09:49:18 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1366: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:49:19 np0005464214 podman[282348]: 2025-10-01 13:49:19.106834269 +0000 UTC m=+0.075329915 container create 96974b0884675a50192a0881fe0523e2a13854ccf9d69fd00a0484b170bceafd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_lumiere, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 09:49:19 np0005464214 systemd[1]: Started libpod-conmon-96974b0884675a50192a0881fe0523e2a13854ccf9d69fd00a0484b170bceafd.scope.
Oct  1 09:49:19 np0005464214 podman[282348]: 2025-10-01 13:49:19.07977945 +0000 UTC m=+0.048275206 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 09:49:19 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:49:19 np0005464214 podman[282348]: 2025-10-01 13:49:19.211109272 +0000 UTC m=+0.179604938 container init 96974b0884675a50192a0881fe0523e2a13854ccf9d69fd00a0484b170bceafd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_lumiere, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Oct  1 09:49:19 np0005464214 podman[282348]: 2025-10-01 13:49:19.223450045 +0000 UTC m=+0.191945691 container start 96974b0884675a50192a0881fe0523e2a13854ccf9d69fd00a0484b170bceafd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_lumiere, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 09:49:19 np0005464214 podman[282348]: 2025-10-01 13:49:19.227590796 +0000 UTC m=+0.196086462 container attach 96974b0884675a50192a0881fe0523e2a13854ccf9d69fd00a0484b170bceafd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_lumiere, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct  1 09:49:19 np0005464214 compassionate_lumiere[282365]: 167 167
Oct  1 09:49:19 np0005464214 systemd[1]: libpod-96974b0884675a50192a0881fe0523e2a13854ccf9d69fd00a0484b170bceafd.scope: Deactivated successfully.
Oct  1 09:49:19 np0005464214 podman[282348]: 2025-10-01 13:49:19.231657236 +0000 UTC m=+0.200152882 container died 96974b0884675a50192a0881fe0523e2a13854ccf9d69fd00a0484b170bceafd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_lumiere, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 09:49:19 np0005464214 systemd[1]: var-lib-containers-storage-overlay-85718ed1a2e80edc2795dfa59dc557f36fc3c38c152fdc5544d6f051fb91e22b-merged.mount: Deactivated successfully.
Oct  1 09:49:19 np0005464214 podman[282348]: 2025-10-01 13:49:19.277982037 +0000 UTC m=+0.246477683 container remove 96974b0884675a50192a0881fe0523e2a13854ccf9d69fd00a0484b170bceafd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_lumiere, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct  1 09:49:19 np0005464214 systemd[1]: libpod-conmon-96974b0884675a50192a0881fe0523e2a13854ccf9d69fd00a0484b170bceafd.scope: Deactivated successfully.
Oct  1 09:49:19 np0005464214 podman[282388]: 2025-10-01 13:49:19.518686435 +0000 UTC m=+0.056152115 container create 44b61346083039c0bec7d85aea0c9fb04d8b3ab84247dd324e8d084d5342650c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_ride, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Oct  1 09:49:19 np0005464214 systemd[1]: Started libpod-conmon-44b61346083039c0bec7d85aea0c9fb04d8b3ab84247dd324e8d084d5342650c.scope.
Oct  1 09:49:19 np0005464214 podman[282388]: 2025-10-01 13:49:19.487897227 +0000 UTC m=+0.025362987 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 09:49:19 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:49:19 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/13f3e37eba6bce987fad764923a9cd8f36c52613efff872ace8972a1244bbe1e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 09:49:19 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/13f3e37eba6bce987fad764923a9cd8f36c52613efff872ace8972a1244bbe1e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 09:49:19 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/13f3e37eba6bce987fad764923a9cd8f36c52613efff872ace8972a1244bbe1e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 09:49:19 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/13f3e37eba6bce987fad764923a9cd8f36c52613efff872ace8972a1244bbe1e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 09:49:19 np0005464214 podman[282388]: 2025-10-01 13:49:19.627622107 +0000 UTC m=+0.165087827 container init 44b61346083039c0bec7d85aea0c9fb04d8b3ab84247dd324e8d084d5342650c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_ride, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True)
Oct  1 09:49:19 np0005464214 podman[282388]: 2025-10-01 13:49:19.637291335 +0000 UTC m=+0.174757015 container start 44b61346083039c0bec7d85aea0c9fb04d8b3ab84247dd324e8d084d5342650c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_ride, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct  1 09:49:19 np0005464214 podman[282388]: 2025-10-01 13:49:19.640845578 +0000 UTC m=+0.178311288 container attach 44b61346083039c0bec7d85aea0c9fb04d8b3ab84247dd324e8d084d5342650c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_ride, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Oct  1 09:49:20 np0005464214 fervent_ride[282404]: {
Oct  1 09:49:20 np0005464214 fervent_ride[282404]:    "0": [
Oct  1 09:49:20 np0005464214 fervent_ride[282404]:        {
Oct  1 09:49:20 np0005464214 fervent_ride[282404]:            "devices": [
Oct  1 09:49:20 np0005464214 fervent_ride[282404]:                "/dev/loop3"
Oct  1 09:49:20 np0005464214 fervent_ride[282404]:            ],
Oct  1 09:49:20 np0005464214 fervent_ride[282404]:            "lv_name": "ceph_lv0",
Oct  1 09:49:20 np0005464214 fervent_ride[282404]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  1 09:49:20 np0005464214 fervent_ride[282404]:            "lv_size": "21470642176",
Oct  1 09:49:20 np0005464214 fervent_ride[282404]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 09:49:20 np0005464214 fervent_ride[282404]:            "lv_uuid": "wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS",
Oct  1 09:49:20 np0005464214 fervent_ride[282404]:            "name": "ceph_lv0",
Oct  1 09:49:20 np0005464214 fervent_ride[282404]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  1 09:49:20 np0005464214 fervent_ride[282404]:            "tags": {
Oct  1 09:49:20 np0005464214 fervent_ride[282404]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  1 09:49:20 np0005464214 fervent_ride[282404]:                "ceph.block_uuid": "wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS",
Oct  1 09:49:20 np0005464214 fervent_ride[282404]:                "ceph.cephx_lockbox_secret": "",
Oct  1 09:49:20 np0005464214 fervent_ride[282404]:                "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 09:49:20 np0005464214 fervent_ride[282404]:                "ceph.cluster_name": "ceph",
Oct  1 09:49:20 np0005464214 fervent_ride[282404]:                "ceph.crush_device_class": "",
Oct  1 09:49:20 np0005464214 fervent_ride[282404]:                "ceph.encrypted": "0",
Oct  1 09:49:20 np0005464214 fervent_ride[282404]:                "ceph.osd_fsid": "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982",
Oct  1 09:49:20 np0005464214 fervent_ride[282404]:                "ceph.osd_id": "0",
Oct  1 09:49:20 np0005464214 fervent_ride[282404]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 09:49:20 np0005464214 fervent_ride[282404]:                "ceph.type": "block",
Oct  1 09:49:20 np0005464214 fervent_ride[282404]:                "ceph.vdo": "0"
Oct  1 09:49:20 np0005464214 fervent_ride[282404]:            },
Oct  1 09:49:20 np0005464214 fervent_ride[282404]:            "type": "block",
Oct  1 09:49:20 np0005464214 fervent_ride[282404]:            "vg_name": "ceph_vg0"
Oct  1 09:49:20 np0005464214 fervent_ride[282404]:        }
Oct  1 09:49:20 np0005464214 fervent_ride[282404]:    ],
Oct  1 09:49:20 np0005464214 fervent_ride[282404]:    "1": [
Oct  1 09:49:20 np0005464214 fervent_ride[282404]:        {
Oct  1 09:49:20 np0005464214 fervent_ride[282404]:            "devices": [
Oct  1 09:49:20 np0005464214 fervent_ride[282404]:                "/dev/loop4"
Oct  1 09:49:20 np0005464214 fervent_ride[282404]:            ],
Oct  1 09:49:20 np0005464214 fervent_ride[282404]:            "lv_name": "ceph_lv1",
Oct  1 09:49:20 np0005464214 fervent_ride[282404]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  1 09:49:20 np0005464214 fervent_ride[282404]:            "lv_size": "21470642176",
Oct  1 09:49:20 np0005464214 fervent_ride[282404]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f5852bc7-e830-489a-b8a9-42dfbbe71426,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 09:49:20 np0005464214 fervent_ride[282404]:            "lv_uuid": "BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY",
Oct  1 09:49:20 np0005464214 fervent_ride[282404]:            "name": "ceph_lv1",
Oct  1 09:49:20 np0005464214 fervent_ride[282404]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  1 09:49:20 np0005464214 fervent_ride[282404]:            "tags": {
Oct  1 09:49:20 np0005464214 fervent_ride[282404]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  1 09:49:20 np0005464214 fervent_ride[282404]:                "ceph.block_uuid": "BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY",
Oct  1 09:49:20 np0005464214 fervent_ride[282404]:                "ceph.cephx_lockbox_secret": "",
Oct  1 09:49:20 np0005464214 fervent_ride[282404]:                "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 09:49:20 np0005464214 fervent_ride[282404]:                "ceph.cluster_name": "ceph",
Oct  1 09:49:20 np0005464214 fervent_ride[282404]:                "ceph.crush_device_class": "",
Oct  1 09:49:20 np0005464214 fervent_ride[282404]:                "ceph.encrypted": "0",
Oct  1 09:49:20 np0005464214 fervent_ride[282404]:                "ceph.osd_fsid": "f5852bc7-e830-489a-b8a9-42dfbbe71426",
Oct  1 09:49:20 np0005464214 fervent_ride[282404]:                "ceph.osd_id": "1",
Oct  1 09:49:20 np0005464214 fervent_ride[282404]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 09:49:20 np0005464214 fervent_ride[282404]:                "ceph.type": "block",
Oct  1 09:49:20 np0005464214 fervent_ride[282404]:                "ceph.vdo": "0"
Oct  1 09:49:20 np0005464214 fervent_ride[282404]:            },
Oct  1 09:49:20 np0005464214 fervent_ride[282404]:            "type": "block",
Oct  1 09:49:20 np0005464214 fervent_ride[282404]:            "vg_name": "ceph_vg1"
Oct  1 09:49:20 np0005464214 fervent_ride[282404]:        }
Oct  1 09:49:20 np0005464214 fervent_ride[282404]:    ],
Oct  1 09:49:20 np0005464214 fervent_ride[282404]:    "2": [
Oct  1 09:49:20 np0005464214 fervent_ride[282404]:        {
Oct  1 09:49:20 np0005464214 fervent_ride[282404]:            "devices": [
Oct  1 09:49:20 np0005464214 fervent_ride[282404]:                "/dev/loop5"
Oct  1 09:49:20 np0005464214 fervent_ride[282404]:            ],
Oct  1 09:49:20 np0005464214 fervent_ride[282404]:            "lv_name": "ceph_lv2",
Oct  1 09:49:20 np0005464214 fervent_ride[282404]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  1 09:49:20 np0005464214 fervent_ride[282404]:            "lv_size": "21470642176",
Oct  1 09:49:20 np0005464214 fervent_ride[282404]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c4c937e2-a8a8-47c3-af37-fdedb6fff1f9,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 09:49:20 np0005464214 fervent_ride[282404]:            "lv_uuid": "mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK",
Oct  1 09:49:20 np0005464214 fervent_ride[282404]:            "name": "ceph_lv2",
Oct  1 09:49:20 np0005464214 fervent_ride[282404]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  1 09:49:20 np0005464214 fervent_ride[282404]:            "tags": {
Oct  1 09:49:20 np0005464214 fervent_ride[282404]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  1 09:49:20 np0005464214 fervent_ride[282404]:                "ceph.block_uuid": "mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK",
Oct  1 09:49:20 np0005464214 fervent_ride[282404]:                "ceph.cephx_lockbox_secret": "",
Oct  1 09:49:20 np0005464214 fervent_ride[282404]:                "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 09:49:20 np0005464214 fervent_ride[282404]:                "ceph.cluster_name": "ceph",
Oct  1 09:49:20 np0005464214 fervent_ride[282404]:                "ceph.crush_device_class": "",
Oct  1 09:49:20 np0005464214 fervent_ride[282404]:                "ceph.encrypted": "0",
Oct  1 09:49:20 np0005464214 fervent_ride[282404]:                "ceph.osd_fsid": "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9",
Oct  1 09:49:20 np0005464214 fervent_ride[282404]:                "ceph.osd_id": "2",
Oct  1 09:49:20 np0005464214 fervent_ride[282404]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 09:49:20 np0005464214 fervent_ride[282404]:                "ceph.type": "block",
Oct  1 09:49:20 np0005464214 fervent_ride[282404]:                "ceph.vdo": "0"
Oct  1 09:49:20 np0005464214 fervent_ride[282404]:            },
Oct  1 09:49:20 np0005464214 fervent_ride[282404]:            "type": "block",
Oct  1 09:49:20 np0005464214 fervent_ride[282404]:            "vg_name": "ceph_vg2"
Oct  1 09:49:20 np0005464214 fervent_ride[282404]:        }
Oct  1 09:49:20 np0005464214 fervent_ride[282404]:    ]
Oct  1 09:49:20 np0005464214 fervent_ride[282404]: }
Oct  1 09:49:20 np0005464214 systemd[1]: libpod-44b61346083039c0bec7d85aea0c9fb04d8b3ab84247dd324e8d084d5342650c.scope: Deactivated successfully.
Oct  1 09:49:20 np0005464214 podman[282413]: 2025-10-01 13:49:20.489401691 +0000 UTC m=+0.033275920 container died 44b61346083039c0bec7d85aea0c9fb04d8b3ab84247dd324e8d084d5342650c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_ride, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Oct  1 09:49:20 np0005464214 systemd[1]: var-lib-containers-storage-overlay-13f3e37eba6bce987fad764923a9cd8f36c52613efff872ace8972a1244bbe1e-merged.mount: Deactivated successfully.
Oct  1 09:49:20 np0005464214 podman[282413]: 2025-10-01 13:49:20.566477889 +0000 UTC m=+0.110352088 container remove 44b61346083039c0bec7d85aea0c9fb04d8b3ab84247dd324e8d084d5342650c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_ride, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2)
Oct  1 09:49:20 np0005464214 systemd[1]: libpod-conmon-44b61346083039c0bec7d85aea0c9fb04d8b3ab84247dd324e8d084d5342650c.scope: Deactivated successfully.
Oct  1 09:49:20 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1367: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:49:21 np0005464214 podman[282568]: 2025-10-01 13:49:21.349014904 +0000 UTC m=+0.056767424 container create 4f8875d804f08f1836f8bf866157c7f41c25c8f52deb91cef20d48926e73f115 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_meitner, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 09:49:21 np0005464214 systemd[1]: Started libpod-conmon-4f8875d804f08f1836f8bf866157c7f41c25c8f52deb91cef20d48926e73f115.scope.
Oct  1 09:49:21 np0005464214 podman[282568]: 2025-10-01 13:49:21.322110349 +0000 UTC m=+0.029862869 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 09:49:21 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:49:21 np0005464214 podman[282568]: 2025-10-01 13:49:21.549405602 +0000 UTC m=+0.257158162 container init 4f8875d804f08f1836f8bf866157c7f41c25c8f52deb91cef20d48926e73f115 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_meitner, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Oct  1 09:49:21 np0005464214 podman[282568]: 2025-10-01 13:49:21.562332102 +0000 UTC m=+0.270084612 container start 4f8875d804f08f1836f8bf866157c7f41c25c8f52deb91cef20d48926e73f115 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_meitner, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 09:49:21 np0005464214 crazy_meitner[282584]: 167 167
Oct  1 09:49:21 np0005464214 systemd[1]: libpod-4f8875d804f08f1836f8bf866157c7f41c25c8f52deb91cef20d48926e73f115.scope: Deactivated successfully.
Oct  1 09:49:21 np0005464214 podman[282568]: 2025-10-01 13:49:21.589874127 +0000 UTC m=+0.297626697 container attach 4f8875d804f08f1836f8bf866157c7f41c25c8f52deb91cef20d48926e73f115 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_meitner, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Oct  1 09:49:21 np0005464214 podman[282568]: 2025-10-01 13:49:21.590784716 +0000 UTC m=+0.298537236 container died 4f8875d804f08f1836f8bf866157c7f41c25c8f52deb91cef20d48926e73f115 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_meitner, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct  1 09:49:21 np0005464214 systemd[1]: var-lib-containers-storage-overlay-b82b89fc0274aea64c00217c93d69af7cc451d5eed8d204932129a3ea95aa3e8-merged.mount: Deactivated successfully.
Oct  1 09:49:21 np0005464214 podman[282568]: 2025-10-01 13:49:21.752180004 +0000 UTC m=+0.459932524 container remove 4f8875d804f08f1836f8bf866157c7f41c25c8f52deb91cef20d48926e73f115 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_meitner, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct  1 09:49:21 np0005464214 systemd[1]: libpod-conmon-4f8875d804f08f1836f8bf866157c7f41c25c8f52deb91cef20d48926e73f115.scope: Deactivated successfully.
Oct  1 09:49:21 np0005464214 podman[282608]: 2025-10-01 13:49:21.97705277 +0000 UTC m=+0.057359473 container create 25f4f1a9a0b04c99f0ea50f7782f7b8305ae1d28180c6085b8594a2112413e25 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_colden, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct  1 09:49:22 np0005464214 systemd[1]: Started libpod-conmon-25f4f1a9a0b04c99f0ea50f7782f7b8305ae1d28180c6085b8594a2112413e25.scope.
Oct  1 09:49:22 np0005464214 podman[282608]: 2025-10-01 13:49:21.958029956 +0000 UTC m=+0.038336649 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 09:49:22 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:49:22 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4381f337f8b95a770d28cbbb04d948e90ecb92d9ccd7fc8dd59ff91288ef41b8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 09:49:22 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4381f337f8b95a770d28cbbb04d948e90ecb92d9ccd7fc8dd59ff91288ef41b8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 09:49:22 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4381f337f8b95a770d28cbbb04d948e90ecb92d9ccd7fc8dd59ff91288ef41b8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 09:49:22 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4381f337f8b95a770d28cbbb04d948e90ecb92d9ccd7fc8dd59ff91288ef41b8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 09:49:22 np0005464214 podman[282608]: 2025-10-01 13:49:22.083820833 +0000 UTC m=+0.164127516 container init 25f4f1a9a0b04c99f0ea50f7782f7b8305ae1d28180c6085b8594a2112413e25 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_colden, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct  1 09:49:22 np0005464214 podman[282608]: 2025-10-01 13:49:22.092641823 +0000 UTC m=+0.172948546 container start 25f4f1a9a0b04c99f0ea50f7782f7b8305ae1d28180c6085b8594a2112413e25 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_colden, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct  1 09:49:22 np0005464214 podman[282608]: 2025-10-01 13:49:22.097251109 +0000 UTC m=+0.177557832 container attach 25f4f1a9a0b04c99f0ea50f7782f7b8305ae1d28180c6085b8594a2112413e25 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_colden, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 09:49:22 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 09:49:22 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1368: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:49:23 np0005464214 confident_colden[282624]: {
Oct  1 09:49:23 np0005464214 confident_colden[282624]:    "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982": {
Oct  1 09:49:23 np0005464214 confident_colden[282624]:        "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 09:49:23 np0005464214 confident_colden[282624]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  1 09:49:23 np0005464214 confident_colden[282624]:        "osd_id": 0,
Oct  1 09:49:23 np0005464214 confident_colden[282624]:        "osd_uuid": "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982",
Oct  1 09:49:23 np0005464214 confident_colden[282624]:        "type": "bluestore"
Oct  1 09:49:23 np0005464214 confident_colden[282624]:    },
Oct  1 09:49:23 np0005464214 confident_colden[282624]:    "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9": {
Oct  1 09:49:23 np0005464214 confident_colden[282624]:        "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 09:49:23 np0005464214 confident_colden[282624]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  1 09:49:23 np0005464214 confident_colden[282624]:        "osd_id": 2,
Oct  1 09:49:23 np0005464214 confident_colden[282624]:        "osd_uuid": "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9",
Oct  1 09:49:23 np0005464214 confident_colden[282624]:        "type": "bluestore"
Oct  1 09:49:23 np0005464214 confident_colden[282624]:    },
Oct  1 09:49:23 np0005464214 confident_colden[282624]:    "f5852bc7-e830-489a-b8a9-42dfbbe71426": {
Oct  1 09:49:23 np0005464214 confident_colden[282624]:        "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 09:49:23 np0005464214 confident_colden[282624]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  1 09:49:23 np0005464214 confident_colden[282624]:        "osd_id": 1,
Oct  1 09:49:23 np0005464214 confident_colden[282624]:        "osd_uuid": "f5852bc7-e830-489a-b8a9-42dfbbe71426",
Oct  1 09:49:23 np0005464214 confident_colden[282624]:        "type": "bluestore"
Oct  1 09:49:23 np0005464214 confident_colden[282624]:    }
Oct  1 09:49:23 np0005464214 confident_colden[282624]: }
Oct  1 09:49:23 np0005464214 systemd[1]: libpod-25f4f1a9a0b04c99f0ea50f7782f7b8305ae1d28180c6085b8594a2112413e25.scope: Deactivated successfully.
Oct  1 09:49:23 np0005464214 podman[282608]: 2025-10-01 13:49:23.257430584 +0000 UTC m=+1.337737307 container died 25f4f1a9a0b04c99f0ea50f7782f7b8305ae1d28180c6085b8594a2112413e25 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_colden, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Oct  1 09:49:23 np0005464214 systemd[1]: libpod-25f4f1a9a0b04c99f0ea50f7782f7b8305ae1d28180c6085b8594a2112413e25.scope: Consumed 1.172s CPU time.
Oct  1 09:49:23 np0005464214 systemd[1]: var-lib-containers-storage-overlay-4381f337f8b95a770d28cbbb04d948e90ecb92d9ccd7fc8dd59ff91288ef41b8-merged.mount: Deactivated successfully.
Oct  1 09:49:23 np0005464214 podman[282608]: 2025-10-01 13:49:23.336158936 +0000 UTC m=+1.416465629 container remove 25f4f1a9a0b04c99f0ea50f7782f7b8305ae1d28180c6085b8594a2112413e25 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_colden, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 09:49:23 np0005464214 systemd[1]: libpod-conmon-25f4f1a9a0b04c99f0ea50f7782f7b8305ae1d28180c6085b8594a2112413e25.scope: Deactivated successfully.
Oct  1 09:49:23 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  1 09:49:23 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:49:23 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  1 09:49:23 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:49:23 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev 9ef1ddb8-05a5-462b-bdd6-669bdba31fbc does not exist
Oct  1 09:49:23 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev dc187cc4-47ac-408e-be61-2ebc9f9906eb does not exist
Oct  1 09:49:23 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:49:23 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:49:24 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1369: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:49:26 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1370: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:49:27 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 09:49:28 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1371: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:49:30 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1372: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:49:31 np0005464214 podman[282721]: 2025-10-01 13:49:31.541605253 +0000 UTC m=+0.094632507 container health_status a1dc94033b3cdaf0c1939938a446bc6911aa0e2bf0fa0c22c3e618390e8d96e1 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=multipathd, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20250923, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=36bccb96575468ec919301205d8daa2c)
Oct  1 09:49:31 np0005464214 podman[282723]: 2025-10-01 13:49:31.545492506 +0000 UTC m=+0.078545927 container health_status dfa509645741d50db256c392f2c7e50ef8a1653f296eb853aefbe3d2ba4746c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20250923, tcib_build_tag=36bccb96575468ec919301205d8daa2c, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  1 09:49:31 np0005464214 podman[282722]: 2025-10-01 13:49:31.554138541 +0000 UTC m=+0.102232239 container health_status c13c85560319b246b9406aed1b6b3015b2c424d3ffff37d0301b6634f5976b0d (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250923, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3)
Oct  1 09:49:31 np0005464214 podman[282720]: 2025-10-01 13:49:31.582688768 +0000 UTC m=+0.137118768 container health_status 583cde33fa111e3f81ff9a7adf51b7bee43cd123d54d60383b2d474b3b5066ad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20250923, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=36bccb96575468ec919301205d8daa2c, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Oct  1 09:49:32 np0005464214 ceph-mon[74802]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #60. Immutable memtables: 0.
Oct  1 09:49:32 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:49:32.103411) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  1 09:49:32 np0005464214 ceph-mon[74802]: rocksdb: [db/flush_job.cc:856] [default] [JOB 31] Flushing memtable with next log file: 60
Oct  1 09:49:32 np0005464214 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759326572103525, "job": 31, "event": "flush_started", "num_memtables": 1, "num_entries": 2096, "num_deletes": 254, "total_data_size": 3460344, "memory_usage": 3530416, "flush_reason": "Manual Compaction"}
Oct  1 09:49:32 np0005464214 ceph-mon[74802]: rocksdb: [db/flush_job.cc:885] [default] [JOB 31] Level-0 flush table #61: started
Oct  1 09:49:32 np0005464214 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759326572128256, "cf_name": "default", "job": 31, "event": "table_file_creation", "file_number": 61, "file_size": 3392926, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 25731, "largest_seqno": 27826, "table_properties": {"data_size": 3383248, "index_size": 6172, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2437, "raw_key_size": 19400, "raw_average_key_size": 20, "raw_value_size": 3364019, "raw_average_value_size": 3526, "num_data_blocks": 273, "num_entries": 954, "num_filter_entries": 954, "num_deletions": 254, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759326350, "oldest_key_time": 1759326350, "file_creation_time": 1759326572, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e1fb0bd7-c6bf-40f2-8bfb-ffd039289f42", "db_session_id": "NJZTWL88H5HSB4Q4NEC9", "orig_file_number": 61, "seqno_to_time_mapping": "N/A"}}
Oct  1 09:49:32 np0005464214 ceph-mon[74802]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 31] Flush lasted 24911 microseconds, and 14358 cpu microseconds.
Oct  1 09:49:32 np0005464214 ceph-mon[74802]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  1 09:49:32 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:49:32.128336) [db/flush_job.cc:967] [default] [JOB 31] Level-0 flush table #61: 3392926 bytes OK
Oct  1 09:49:32 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:49:32.128369) [db/memtable_list.cc:519] [default] Level-0 commit table #61 started
Oct  1 09:49:32 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:49:32.130419) [db/memtable_list.cc:722] [default] Level-0 commit table #61: memtable #1 done
Oct  1 09:49:32 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:49:32.130447) EVENT_LOG_v1 {"time_micros": 1759326572130436, "job": 31, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  1 09:49:32 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:49:32.130479) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  1 09:49:32 np0005464214 ceph-mon[74802]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 31] Try to delete WAL files size 3451538, prev total WAL file size 3451538, number of live WAL files 2.
Oct  1 09:49:32 np0005464214 ceph-mon[74802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000057.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  1 09:49:32 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:49:32.132556) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032323539' seq:72057594037927935, type:22 .. '7061786F730032353131' seq:0, type:0; will stop at (end)
Oct  1 09:49:32 np0005464214 ceph-mon[74802]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 32] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  1 09:49:32 np0005464214 ceph-mon[74802]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 31 Base level 0, inputs: [61(3313KB)], [59(7576KB)]
Oct  1 09:49:32 np0005464214 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759326572132655, "job": 32, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [61], "files_L6": [59], "score": -1, "input_data_size": 11151033, "oldest_snapshot_seqno": -1}
Oct  1 09:49:32 np0005464214 ceph-mon[74802]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 32] Generated table #62: 5130 keys, 9375956 bytes, temperature: kUnknown
Oct  1 09:49:32 np0005464214 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759326572194354, "cf_name": "default", "job": 32, "event": "table_file_creation", "file_number": 62, "file_size": 9375956, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9339072, "index_size": 22950, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12869, "raw_key_size": 127375, "raw_average_key_size": 24, "raw_value_size": 9243830, "raw_average_value_size": 1801, "num_data_blocks": 948, "num_entries": 5130, "num_filter_entries": 5130, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759324077, "oldest_key_time": 0, "file_creation_time": 1759326572, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e1fb0bd7-c6bf-40f2-8bfb-ffd039289f42", "db_session_id": "NJZTWL88H5HSB4Q4NEC9", "orig_file_number": 62, "seqno_to_time_mapping": "N/A"}}
Oct  1 09:49:32 np0005464214 ceph-mon[74802]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  1 09:49:32 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:49:32.194761) [db/compaction/compaction_job.cc:1663] [default] [JOB 32] Compacted 1@0 + 1@6 files to L6 => 9375956 bytes
Oct  1 09:49:32 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:49:32.196685) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 180.4 rd, 151.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.2, 7.4 +0.0 blob) out(8.9 +0.0 blob), read-write-amplify(6.0) write-amplify(2.8) OK, records in: 5652, records dropped: 522 output_compression: NoCompression
Oct  1 09:49:32 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:49:32.196717) EVENT_LOG_v1 {"time_micros": 1759326572196700, "job": 32, "event": "compaction_finished", "compaction_time_micros": 61802, "compaction_time_cpu_micros": 26835, "output_level": 6, "num_output_files": 1, "total_output_size": 9375956, "num_input_records": 5652, "num_output_records": 5130, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  1 09:49:32 np0005464214 ceph-mon[74802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000061.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  1 09:49:32 np0005464214 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759326572198019, "job": 32, "event": "table_file_deletion", "file_number": 61}
Oct  1 09:49:32 np0005464214 ceph-mon[74802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000059.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  1 09:49:32 np0005464214 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759326572201142, "job": 32, "event": "table_file_deletion", "file_number": 59}
Oct  1 09:49:32 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:49:32.132395) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 09:49:32 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:49:32.201277) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 09:49:32 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:49:32.201288) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 09:49:32 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:49:32.201292) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 09:49:32 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:49:32.201299) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 09:49:32 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:49:32.201303) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 09:49:32 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 09:49:32 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1373: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:49:34 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:49:34.410 161890 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=10, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'd2:60:33', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '86:05:a5:2a:6f:f1'}, ipsec=False) old=SB_Global(nb_cfg=9) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  1 09:49:34 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:49:34.412 161890 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  1 09:49:34 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1374: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:49:36 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1375: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:49:37 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:49:37.413 161890 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=7280030e-2ba6-406c-9fae-f8284a927c47, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '10'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  1 09:49:37 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 09:49:38 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1376: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:49:40 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1377: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:49:42 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 09:49:42 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1378: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:49:44 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1379: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:49:46 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1380: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:49:47 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 09:49:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 09:49:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 09:49:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 09:49:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 09:49:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 09:49:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 09:49:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] Optimize plan auto_2025-10-01_13:49:47
Oct  1 09:49:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  1 09:49:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] do_upmap
Oct  1 09:49:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'vms', 'images', '.rgw.root', '.mgr', 'default.rgw.log', 'default.rgw.meta', 'backups', 'default.rgw.control', 'volumes', 'cephfs.cephfs.data']
Oct  1 09:49:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] prepared 0/10 changes
Oct  1 09:49:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  1 09:49:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  1 09:49:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  1 09:49:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  1 09:49:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  1 09:49:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  1 09:49:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  1 09:49:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  1 09:49:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  1 09:49:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  1 09:49:48 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1381: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:49:50 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1382: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:49:52 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 09:49:52 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1383: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:49:54 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1384: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:49:55 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  1 09:49:55 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3171431646' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  1 09:49:55 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  1 09:49:55 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3171431646' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  1 09:49:55 np0005464214 ceph-osd[88455]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  1 09:49:55 np0005464214 ceph-osd[88455]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 2400.0 total, 600.0 interval#012Cumulative writes: 6754 writes, 26K keys, 6754 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s#012Cumulative WAL: 6754 writes, 1414 syncs, 4.78 writes per sync, written: 0.02 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 707 writes, 1753 keys, 707 commit groups, 1.0 writes per commit group, ingest: 0.96 MB, 0.00 MB/s#012Interval WAL: 707 writes, 319 syncs, 2.22 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct  1 09:49:56 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1385: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:49:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] _maybe_adjust
Oct  1 09:49:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:49:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  1 09:49:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:49:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 09:49:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:49:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 09:49:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:49:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 09:49:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:49:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Oct  1 09:49:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:49:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct  1 09:49:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:49:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 09:49:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:49:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  1 09:49:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:49:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  1 09:49:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:49:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 09:49:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:49:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  1 09:49:57 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 09:49:58 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1386: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:50:00 np0005464214 ceph-osd[89484]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  1 09:50:00 np0005464214 ceph-osd[89484]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 2400.1 total, 600.0 interval#012Cumulative writes: 7951 writes, 30K keys, 7951 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s#012Cumulative WAL: 7951 writes, 1749 syncs, 4.55 writes per sync, written: 0.02 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 740 writes, 1899 keys, 740 commit groups, 1.0 writes per commit group, ingest: 1.08 MB, 0.00 MB/s#012Interval WAL: 740 writes, 319 syncs, 2.32 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct  1 09:50:00 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1387: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:50:02 np0005464214 podman[282802]: 2025-10-01 13:50:02.549160088 +0000 UTC m=+0.078241237 container health_status dfa509645741d50db256c392f2c7e50ef8a1653f296eb853aefbe3d2ba4746c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20250923, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent)
Oct  1 09:50:02 np0005464214 podman[282801]: 2025-10-01 13:50:02.549645514 +0000 UTC m=+0.079119705 container health_status c13c85560319b246b9406aed1b6b3015b2c424d3ffff37d0301b6634f5976b0d (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=36bccb96575468ec919301205d8daa2c, container_name=iscsid, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_managed=true, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.build-date=20250923, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Oct  1 09:50:02 np0005464214 podman[282800]: 2025-10-01 13:50:02.559071583 +0000 UTC m=+0.091797998 container health_status a1dc94033b3cdaf0c1939938a446bc6911aa0e2bf0fa0c22c3e618390e8d96e1 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20250923, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_id=multipathd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  1 09:50:02 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 09:50:02 np0005464214 podman[282799]: 2025-10-01 13:50:02.571514029 +0000 UTC m=+0.116720750 container health_status 583cde33fa111e3f81ff9a7adf51b7bee43cd123d54d60383b2d474b3b5066ad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20250923, org.label-schema.license=GPLv2)
Oct  1 09:50:02 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1388: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:50:04 np0005464214 nova_compute[260022]: 2025-10-01 13:50:04.346 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 09:50:04 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1389: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:50:05 np0005464214 ceph-osd[90500]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  1 09:50:05 np0005464214 ceph-osd[90500]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 2400.1 total, 600.0 interval#012Cumulative writes: 6875 writes, 27K keys, 6875 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s#012Cumulative WAL: 6875 writes, 1441 syncs, 4.77 writes per sync, written: 0.02 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 826 writes, 1986 keys, 826 commit groups, 1.0 writes per commit group, ingest: 1.10 MB, 0.00 MB/s#012Interval WAL: 826 writes, 369 syncs, 2.24 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct  1 09:50:06 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1390: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:50:07 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 09:50:07 np0005464214 ceph-mgr[75103]: [devicehealth INFO root] Check health
Oct  1 09:50:08 np0005464214 nova_compute[260022]: 2025-10-01 13:50:08.345 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 09:50:08 np0005464214 nova_compute[260022]: 2025-10-01 13:50:08.376 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 09:50:08 np0005464214 nova_compute[260022]: 2025-10-01 13:50:08.377 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 09:50:08 np0005464214 nova_compute[260022]: 2025-10-01 13:50:08.377 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 09:50:08 np0005464214 nova_compute[260022]: 2025-10-01 13:50:08.377 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  1 09:50:08 np0005464214 nova_compute[260022]: 2025-10-01 13:50:08.378 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 09:50:08 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  1 09:50:08 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3838049116' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  1 09:50:08 np0005464214 nova_compute[260022]: 2025-10-01 13:50:08.780 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.403s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 09:50:08 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1391: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:50:08 np0005464214 nova_compute[260022]: 2025-10-01 13:50:08.951 2 WARNING nova.virt.libvirt.driver [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  1 09:50:08 np0005464214 nova_compute[260022]: 2025-10-01 13:50:08.953 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5136MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  1 09:50:08 np0005464214 nova_compute[260022]: 2025-10-01 13:50:08.953 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 09:50:08 np0005464214 nova_compute[260022]: 2025-10-01 13:50:08.953 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 09:50:09 np0005464214 nova_compute[260022]: 2025-10-01 13:50:09.059 2 INFO nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Instance 3d91db2b-812b-47ea-a0f8-0384b4c68597 has allocations against this compute host but is not found in the database.#033[00m
Oct  1 09:50:09 np0005464214 nova_compute[260022]: 2025-10-01 13:50:09.060 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  1 09:50:09 np0005464214 nova_compute[260022]: 2025-10-01 13:50:09.060 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  1 09:50:09 np0005464214 nova_compute[260022]: 2025-10-01 13:50:09.184 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 09:50:09 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  1 09:50:09 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1808567131' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  1 09:50:09 np0005464214 nova_compute[260022]: 2025-10-01 13:50:09.623 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.439s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 09:50:09 np0005464214 nova_compute[260022]: 2025-10-01 13:50:09.632 2 DEBUG nova.compute.provider_tree [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Inventory has not changed in ProviderTree for provider: c1b9017d-7e6f-44ea-9ee2-bc19313d736f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  1 09:50:09 np0005464214 nova_compute[260022]: 2025-10-01 13:50:09.649 2 DEBUG nova.scheduler.client.report [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Inventory has not changed for provider c1b9017d-7e6f-44ea-9ee2-bc19313d736f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  1 09:50:09 np0005464214 nova_compute[260022]: 2025-10-01 13:50:09.652 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  1 09:50:09 np0005464214 nova_compute[260022]: 2025-10-01 13:50:09.652 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.699s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 09:50:09 np0005464214 nova_compute[260022]: 2025-10-01 13:50:09.653 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 09:50:10 np0005464214 nova_compute[260022]: 2025-10-01 13:50:10.356 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 09:50:10 np0005464214 nova_compute[260022]: 2025-10-01 13:50:10.356 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Oct  1 09:50:10 np0005464214 nova_compute[260022]: 2025-10-01 13:50:10.383 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Oct  1 09:50:10 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1392: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:50:11 np0005464214 nova_compute[260022]: 2025-10-01 13:50:11.367 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 09:50:11 np0005464214 nova_compute[260022]: 2025-10-01 13:50:11.367 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 09:50:11 np0005464214 nova_compute[260022]: 2025-10-01 13:50:11.368 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  1 09:50:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:50:12.318 161890 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 09:50:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:50:12.318 161890 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 09:50:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:50:12.319 161890 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 09:50:12 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 09:50:12 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1393: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:50:13 np0005464214 nova_compute[260022]: 2025-10-01 13:50:13.345 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 09:50:14 np0005464214 nova_compute[260022]: 2025-10-01 13:50:14.345 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 09:50:14 np0005464214 nova_compute[260022]: 2025-10-01 13:50:14.346 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  1 09:50:14 np0005464214 nova_compute[260022]: 2025-10-01 13:50:14.346 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  1 09:50:14 np0005464214 nova_compute[260022]: 2025-10-01 13:50:14.371 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  1 09:50:14 np0005464214 nova_compute[260022]: 2025-10-01 13:50:14.372 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 09:50:14 np0005464214 nova_compute[260022]: 2025-10-01 13:50:14.372 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Oct  1 09:50:14 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1394: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:50:15 np0005464214 nova_compute[260022]: 2025-10-01 13:50:15.357 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 09:50:16 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1395: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:50:17 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 09:50:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 09:50:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 09:50:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 09:50:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 09:50:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 09:50:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 09:50:18 np0005464214 nova_compute[260022]: 2025-10-01 13:50:18.345 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 09:50:18 np0005464214 nova_compute[260022]: 2025-10-01 13:50:18.345 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 09:50:18 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1396: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:50:20 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1397: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:50:22 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 09:50:22 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1398: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:50:23 np0005464214 nova_compute[260022]: 2025-10-01 13:50:23.341 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 09:50:24 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  1 09:50:24 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  1 09:50:24 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  1 09:50:24 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  1 09:50:24 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  1 09:50:24 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:50:24 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev f162a899-d0ee-470b-b465-1e41c6120560 does not exist
Oct  1 09:50:24 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev 258f11c3-8488-4a8c-bdf0-8a921576cdc6 does not exist
Oct  1 09:50:24 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev 8552a23d-82c5-4160-b083-4f87e3e61b9b does not exist
Oct  1 09:50:24 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  1 09:50:24 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  1 09:50:24 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  1 09:50:24 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  1 09:50:24 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  1 09:50:24 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  1 09:50:24 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1399: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:50:25 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  1 09:50:25 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:50:25 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  1 09:50:25 np0005464214 podman[283199]: 2025-10-01 13:50:25.285621558 +0000 UTC m=+0.056945420 container create c18ebfa3a8970ee07b862a66146bff9d6cf338b1cb6574cb0a1da207e8e3c822 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_haibt, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef)
Oct  1 09:50:25 np0005464214 systemd[1]: Started libpod-conmon-c18ebfa3a8970ee07b862a66146bff9d6cf338b1cb6574cb0a1da207e8e3c822.scope.
Oct  1 09:50:25 np0005464214 podman[283199]: 2025-10-01 13:50:25.261027127 +0000 UTC m=+0.032350999 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 09:50:25 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:50:25 np0005464214 podman[283199]: 2025-10-01 13:50:25.410187387 +0000 UTC m=+0.181511329 container init c18ebfa3a8970ee07b862a66146bff9d6cf338b1cb6574cb0a1da207e8e3c822 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_haibt, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3)
Oct  1 09:50:25 np0005464214 podman[283199]: 2025-10-01 13:50:25.424125509 +0000 UTC m=+0.195449361 container start c18ebfa3a8970ee07b862a66146bff9d6cf338b1cb6574cb0a1da207e8e3c822 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_haibt, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Oct  1 09:50:25 np0005464214 podman[283199]: 2025-10-01 13:50:25.428656973 +0000 UTC m=+0.199980915 container attach c18ebfa3a8970ee07b862a66146bff9d6cf338b1cb6574cb0a1da207e8e3c822 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_haibt, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Oct  1 09:50:25 np0005464214 adoring_haibt[283215]: 167 167
Oct  1 09:50:25 np0005464214 systemd[1]: libpod-c18ebfa3a8970ee07b862a66146bff9d6cf338b1cb6574cb0a1da207e8e3c822.scope: Deactivated successfully.
Oct  1 09:50:25 np0005464214 conmon[283215]: conmon c18ebfa3a8970ee07b86 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c18ebfa3a8970ee07b862a66146bff9d6cf338b1cb6574cb0a1da207e8e3c822.scope/container/memory.events
Oct  1 09:50:25 np0005464214 podman[283199]: 2025-10-01 13:50:25.434456347 +0000 UTC m=+0.205780229 container died c18ebfa3a8970ee07b862a66146bff9d6cf338b1cb6574cb0a1da207e8e3c822 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_haibt, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 09:50:25 np0005464214 systemd[1]: var-lib-containers-storage-overlay-003656663cfda4701cb3d3accb726dc5d7ce18a363fe149aa05aa0701aa697c9-merged.mount: Deactivated successfully.
Oct  1 09:50:25 np0005464214 podman[283199]: 2025-10-01 13:50:25.49308826 +0000 UTC m=+0.264412112 container remove c18ebfa3a8970ee07b862a66146bff9d6cf338b1cb6574cb0a1da207e8e3c822 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_haibt, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Oct  1 09:50:25 np0005464214 systemd[1]: libpod-conmon-c18ebfa3a8970ee07b862a66146bff9d6cf338b1cb6574cb0a1da207e8e3c822.scope: Deactivated successfully.
Oct  1 09:50:25 np0005464214 podman[283241]: 2025-10-01 13:50:25.707586777 +0000 UTC m=+0.061732413 container create 69a08724595b3bd6d0aa17d2ea96208d3d4c1aabe91717d42eff63c5441cc9a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_antonelli, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct  1 09:50:25 np0005464214 systemd[1]: Started libpod-conmon-69a08724595b3bd6d0aa17d2ea96208d3d4c1aabe91717d42eff63c5441cc9a5.scope.
Oct  1 09:50:25 np0005464214 podman[283241]: 2025-10-01 13:50:25.67433617 +0000 UTC m=+0.028481786 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 09:50:25 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:50:25 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4dd9aa17fabe6f3d793f6cee4a9f68b04676ba393ea1678dc5f651aea487e899/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 09:50:25 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4dd9aa17fabe6f3d793f6cee4a9f68b04676ba393ea1678dc5f651aea487e899/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 09:50:25 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4dd9aa17fabe6f3d793f6cee4a9f68b04676ba393ea1678dc5f651aea487e899/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 09:50:25 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4dd9aa17fabe6f3d793f6cee4a9f68b04676ba393ea1678dc5f651aea487e899/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 09:50:25 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4dd9aa17fabe6f3d793f6cee4a9f68b04676ba393ea1678dc5f651aea487e899/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  1 09:50:25 np0005464214 podman[283241]: 2025-10-01 13:50:25.810208857 +0000 UTC m=+0.164354543 container init 69a08724595b3bd6d0aa17d2ea96208d3d4c1aabe91717d42eff63c5441cc9a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_antonelli, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct  1 09:50:25 np0005464214 podman[283241]: 2025-10-01 13:50:25.821401593 +0000 UTC m=+0.175547229 container start 69a08724595b3bd6d0aa17d2ea96208d3d4c1aabe91717d42eff63c5441cc9a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_antonelli, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 09:50:25 np0005464214 podman[283241]: 2025-10-01 13:50:25.825536405 +0000 UTC m=+0.179682011 container attach 69a08724595b3bd6d0aa17d2ea96208d3d4c1aabe91717d42eff63c5441cc9a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_antonelli, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 09:50:26 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1400: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:50:27 np0005464214 interesting_antonelli[283257]: --> passed data devices: 0 physical, 3 LVM
Oct  1 09:50:27 np0005464214 interesting_antonelli[283257]: --> relative data size: 1.0
Oct  1 09:50:27 np0005464214 interesting_antonelli[283257]: --> All data devices are unavailable
Oct  1 09:50:27 np0005464214 systemd[1]: libpod-69a08724595b3bd6d0aa17d2ea96208d3d4c1aabe91717d42eff63c5441cc9a5.scope: Deactivated successfully.
Oct  1 09:50:27 np0005464214 podman[283241]: 2025-10-01 13:50:27.134095083 +0000 UTC m=+1.488240689 container died 69a08724595b3bd6d0aa17d2ea96208d3d4c1aabe91717d42eff63c5441cc9a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_antonelli, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 09:50:27 np0005464214 systemd[1]: libpod-69a08724595b3bd6d0aa17d2ea96208d3d4c1aabe91717d42eff63c5441cc9a5.scope: Consumed 1.261s CPU time.
Oct  1 09:50:27 np0005464214 systemd[1]: var-lib-containers-storage-overlay-4dd9aa17fabe6f3d793f6cee4a9f68b04676ba393ea1678dc5f651aea487e899-merged.mount: Deactivated successfully.
Oct  1 09:50:27 np0005464214 podman[283241]: 2025-10-01 13:50:27.199058248 +0000 UTC m=+1.553203844 container remove 69a08724595b3bd6d0aa17d2ea96208d3d4c1aabe91717d42eff63c5441cc9a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_antonelli, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 09:50:27 np0005464214 systemd[1]: libpod-conmon-69a08724595b3bd6d0aa17d2ea96208d3d4c1aabe91717d42eff63c5441cc9a5.scope: Deactivated successfully.
Oct  1 09:50:27 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 09:50:28 np0005464214 podman[283440]: 2025-10-01 13:50:28.067990488 +0000 UTC m=+0.075468999 container create f0ad3eff0e888e542e028a7f07bdf7883a8722065700af20193338ab33b1cb3b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_ritchie, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef)
Oct  1 09:50:28 np0005464214 systemd[1]: Started libpod-conmon-f0ad3eff0e888e542e028a7f07bdf7883a8722065700af20193338ab33b1cb3b.scope.
Oct  1 09:50:28 np0005464214 podman[283440]: 2025-10-01 13:50:28.036516938 +0000 UTC m=+0.043995519 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 09:50:28 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:50:28 np0005464214 podman[283440]: 2025-10-01 13:50:28.179567034 +0000 UTC m=+0.187045605 container init f0ad3eff0e888e542e028a7f07bdf7883a8722065700af20193338ab33b1cb3b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_ritchie, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 09:50:28 np0005464214 podman[283440]: 2025-10-01 13:50:28.19238456 +0000 UTC m=+0.199863051 container start f0ad3eff0e888e542e028a7f07bdf7883a8722065700af20193338ab33b1cb3b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_ritchie, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 09:50:28 np0005464214 podman[283440]: 2025-10-01 13:50:28.197288416 +0000 UTC m=+0.204766987 container attach f0ad3eff0e888e542e028a7f07bdf7883a8722065700af20193338ab33b1cb3b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_ritchie, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 09:50:28 np0005464214 sleepy_ritchie[283456]: 167 167
Oct  1 09:50:28 np0005464214 systemd[1]: libpod-f0ad3eff0e888e542e028a7f07bdf7883a8722065700af20193338ab33b1cb3b.scope: Deactivated successfully.
Oct  1 09:50:28 np0005464214 podman[283440]: 2025-10-01 13:50:28.19958894 +0000 UTC m=+0.207067451 container died f0ad3eff0e888e542e028a7f07bdf7883a8722065700af20193338ab33b1cb3b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_ritchie, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 09:50:28 np0005464214 systemd[1]: var-lib-containers-storage-overlay-2a6f332a001e5d5268215c68572b358cf2b1153550ac5dbe0cb31bdf29b06a74-merged.mount: Deactivated successfully.
Oct  1 09:50:28 np0005464214 podman[283440]: 2025-10-01 13:50:28.253586425 +0000 UTC m=+0.261064916 container remove f0ad3eff0e888e542e028a7f07bdf7883a8722065700af20193338ab33b1cb3b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_ritchie, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 09:50:28 np0005464214 systemd[1]: libpod-conmon-f0ad3eff0e888e542e028a7f07bdf7883a8722065700af20193338ab33b1cb3b.scope: Deactivated successfully.
Oct  1 09:50:28 np0005464214 podman[283480]: 2025-10-01 13:50:28.453915381 +0000 UTC m=+0.060121692 container create 35b6bdd646ca9895898b60a837b6bcda3e0dfda9fcfdaadb355a92d43d076ddb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_wilson, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Oct  1 09:50:28 np0005464214 systemd[1]: Started libpod-conmon-35b6bdd646ca9895898b60a837b6bcda3e0dfda9fcfdaadb355a92d43d076ddb.scope.
Oct  1 09:50:28 np0005464214 podman[283480]: 2025-10-01 13:50:28.431183359 +0000 UTC m=+0.037389710 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 09:50:28 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:50:28 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/41a4445a15bfbb600f1e5c28788bb62696b4e814a12b3fda8352e3f6c650dfbd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 09:50:28 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/41a4445a15bfbb600f1e5c28788bb62696b4e814a12b3fda8352e3f6c650dfbd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 09:50:28 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/41a4445a15bfbb600f1e5c28788bb62696b4e814a12b3fda8352e3f6c650dfbd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 09:50:28 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/41a4445a15bfbb600f1e5c28788bb62696b4e814a12b3fda8352e3f6c650dfbd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 09:50:28 np0005464214 podman[283480]: 2025-10-01 13:50:28.578576122 +0000 UTC m=+0.184782473 container init 35b6bdd646ca9895898b60a837b6bcda3e0dfda9fcfdaadb355a92d43d076ddb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_wilson, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Oct  1 09:50:28 np0005464214 podman[283480]: 2025-10-01 13:50:28.595671915 +0000 UTC m=+0.201878216 container start 35b6bdd646ca9895898b60a837b6bcda3e0dfda9fcfdaadb355a92d43d076ddb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_wilson, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 09:50:28 np0005464214 podman[283480]: 2025-10-01 13:50:28.599763885 +0000 UTC m=+0.205970196 container attach 35b6bdd646ca9895898b60a837b6bcda3e0dfda9fcfdaadb355a92d43d076ddb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_wilson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Oct  1 09:50:28 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1401: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:50:29 np0005464214 dazzling_wilson[283496]: {
Oct  1 09:50:29 np0005464214 dazzling_wilson[283496]:    "0": [
Oct  1 09:50:29 np0005464214 dazzling_wilson[283496]:        {
Oct  1 09:50:29 np0005464214 dazzling_wilson[283496]:            "devices": [
Oct  1 09:50:29 np0005464214 dazzling_wilson[283496]:                "/dev/loop3"
Oct  1 09:50:29 np0005464214 dazzling_wilson[283496]:            ],
Oct  1 09:50:29 np0005464214 dazzling_wilson[283496]:            "lv_name": "ceph_lv0",
Oct  1 09:50:29 np0005464214 dazzling_wilson[283496]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  1 09:50:29 np0005464214 dazzling_wilson[283496]:            "lv_size": "21470642176",
Oct  1 09:50:29 np0005464214 dazzling_wilson[283496]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 09:50:29 np0005464214 dazzling_wilson[283496]:            "lv_uuid": "wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS",
Oct  1 09:50:29 np0005464214 dazzling_wilson[283496]:            "name": "ceph_lv0",
Oct  1 09:50:29 np0005464214 dazzling_wilson[283496]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  1 09:50:29 np0005464214 dazzling_wilson[283496]:            "tags": {
Oct  1 09:50:29 np0005464214 dazzling_wilson[283496]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  1 09:50:29 np0005464214 dazzling_wilson[283496]:                "ceph.block_uuid": "wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS",
Oct  1 09:50:29 np0005464214 dazzling_wilson[283496]:                "ceph.cephx_lockbox_secret": "",
Oct  1 09:50:29 np0005464214 dazzling_wilson[283496]:                "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 09:50:29 np0005464214 dazzling_wilson[283496]:                "ceph.cluster_name": "ceph",
Oct  1 09:50:29 np0005464214 dazzling_wilson[283496]:                "ceph.crush_device_class": "",
Oct  1 09:50:29 np0005464214 dazzling_wilson[283496]:                "ceph.encrypted": "0",
Oct  1 09:50:29 np0005464214 dazzling_wilson[283496]:                "ceph.osd_fsid": "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982",
Oct  1 09:50:29 np0005464214 dazzling_wilson[283496]:                "ceph.osd_id": "0",
Oct  1 09:50:29 np0005464214 dazzling_wilson[283496]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 09:50:29 np0005464214 dazzling_wilson[283496]:                "ceph.type": "block",
Oct  1 09:50:29 np0005464214 dazzling_wilson[283496]:                "ceph.vdo": "0"
Oct  1 09:50:29 np0005464214 dazzling_wilson[283496]:            },
Oct  1 09:50:29 np0005464214 dazzling_wilson[283496]:            "type": "block",
Oct  1 09:50:29 np0005464214 dazzling_wilson[283496]:            "vg_name": "ceph_vg0"
Oct  1 09:50:29 np0005464214 dazzling_wilson[283496]:        }
Oct  1 09:50:29 np0005464214 dazzling_wilson[283496]:    ],
Oct  1 09:50:29 np0005464214 dazzling_wilson[283496]:    "1": [
Oct  1 09:50:29 np0005464214 dazzling_wilson[283496]:        {
Oct  1 09:50:29 np0005464214 dazzling_wilson[283496]:            "devices": [
Oct  1 09:50:29 np0005464214 dazzling_wilson[283496]:                "/dev/loop4"
Oct  1 09:50:29 np0005464214 dazzling_wilson[283496]:            ],
Oct  1 09:50:29 np0005464214 dazzling_wilson[283496]:            "lv_name": "ceph_lv1",
Oct  1 09:50:29 np0005464214 dazzling_wilson[283496]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  1 09:50:29 np0005464214 dazzling_wilson[283496]:            "lv_size": "21470642176",
Oct  1 09:50:29 np0005464214 dazzling_wilson[283496]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f5852bc7-e830-489a-b8a9-42dfbbe71426,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 09:50:29 np0005464214 dazzling_wilson[283496]:            "lv_uuid": "BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY",
Oct  1 09:50:29 np0005464214 dazzling_wilson[283496]:            "name": "ceph_lv1",
Oct  1 09:50:29 np0005464214 dazzling_wilson[283496]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  1 09:50:29 np0005464214 dazzling_wilson[283496]:            "tags": {
Oct  1 09:50:29 np0005464214 dazzling_wilson[283496]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  1 09:50:29 np0005464214 dazzling_wilson[283496]:                "ceph.block_uuid": "BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY",
Oct  1 09:50:29 np0005464214 dazzling_wilson[283496]:                "ceph.cephx_lockbox_secret": "",
Oct  1 09:50:29 np0005464214 dazzling_wilson[283496]:                "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 09:50:29 np0005464214 dazzling_wilson[283496]:                "ceph.cluster_name": "ceph",
Oct  1 09:50:29 np0005464214 dazzling_wilson[283496]:                "ceph.crush_device_class": "",
Oct  1 09:50:29 np0005464214 dazzling_wilson[283496]:                "ceph.encrypted": "0",
Oct  1 09:50:29 np0005464214 dazzling_wilson[283496]:                "ceph.osd_fsid": "f5852bc7-e830-489a-b8a9-42dfbbe71426",
Oct  1 09:50:29 np0005464214 dazzling_wilson[283496]:                "ceph.osd_id": "1",
Oct  1 09:50:29 np0005464214 dazzling_wilson[283496]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 09:50:29 np0005464214 dazzling_wilson[283496]:                "ceph.type": "block",
Oct  1 09:50:29 np0005464214 dazzling_wilson[283496]:                "ceph.vdo": "0"
Oct  1 09:50:29 np0005464214 dazzling_wilson[283496]:            },
Oct  1 09:50:29 np0005464214 dazzling_wilson[283496]:            "type": "block",
Oct  1 09:50:29 np0005464214 dazzling_wilson[283496]:            "vg_name": "ceph_vg1"
Oct  1 09:50:29 np0005464214 dazzling_wilson[283496]:        }
Oct  1 09:50:29 np0005464214 dazzling_wilson[283496]:    ],
Oct  1 09:50:29 np0005464214 dazzling_wilson[283496]:    "2": [
Oct  1 09:50:29 np0005464214 dazzling_wilson[283496]:        {
Oct  1 09:50:29 np0005464214 dazzling_wilson[283496]:            "devices": [
Oct  1 09:50:29 np0005464214 dazzling_wilson[283496]:                "/dev/loop5"
Oct  1 09:50:29 np0005464214 dazzling_wilson[283496]:            ],
Oct  1 09:50:29 np0005464214 dazzling_wilson[283496]:            "lv_name": "ceph_lv2",
Oct  1 09:50:29 np0005464214 dazzling_wilson[283496]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  1 09:50:29 np0005464214 dazzling_wilson[283496]:            "lv_size": "21470642176",
Oct  1 09:50:29 np0005464214 dazzling_wilson[283496]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c4c937e2-a8a8-47c3-af37-fdedb6fff1f9,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 09:50:29 np0005464214 dazzling_wilson[283496]:            "lv_uuid": "mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK",
Oct  1 09:50:29 np0005464214 dazzling_wilson[283496]:            "name": "ceph_lv2",
Oct  1 09:50:29 np0005464214 dazzling_wilson[283496]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  1 09:50:29 np0005464214 dazzling_wilson[283496]:            "tags": {
Oct  1 09:50:29 np0005464214 dazzling_wilson[283496]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  1 09:50:29 np0005464214 dazzling_wilson[283496]:                "ceph.block_uuid": "mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK",
Oct  1 09:50:29 np0005464214 dazzling_wilson[283496]:                "ceph.cephx_lockbox_secret": "",
Oct  1 09:50:29 np0005464214 dazzling_wilson[283496]:                "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 09:50:29 np0005464214 dazzling_wilson[283496]:                "ceph.cluster_name": "ceph",
Oct  1 09:50:29 np0005464214 dazzling_wilson[283496]:                "ceph.crush_device_class": "",
Oct  1 09:50:29 np0005464214 dazzling_wilson[283496]:                "ceph.encrypted": "0",
Oct  1 09:50:29 np0005464214 dazzling_wilson[283496]:                "ceph.osd_fsid": "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9",
Oct  1 09:50:29 np0005464214 dazzling_wilson[283496]:                "ceph.osd_id": "2",
Oct  1 09:50:29 np0005464214 dazzling_wilson[283496]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 09:50:29 np0005464214 dazzling_wilson[283496]:                "ceph.type": "block",
Oct  1 09:50:29 np0005464214 dazzling_wilson[283496]:                "ceph.vdo": "0"
Oct  1 09:50:29 np0005464214 dazzling_wilson[283496]:            },
Oct  1 09:50:29 np0005464214 dazzling_wilson[283496]:            "type": "block",
Oct  1 09:50:29 np0005464214 dazzling_wilson[283496]:            "vg_name": "ceph_vg2"
Oct  1 09:50:29 np0005464214 dazzling_wilson[283496]:        }
Oct  1 09:50:29 np0005464214 dazzling_wilson[283496]:    ]
Oct  1 09:50:29 np0005464214 dazzling_wilson[283496]: }
Oct  1 09:50:29 np0005464214 systemd[1]: libpod-35b6bdd646ca9895898b60a837b6bcda3e0dfda9fcfdaadb355a92d43d076ddb.scope: Deactivated successfully.
Oct  1 09:50:29 np0005464214 podman[283480]: 2025-10-01 13:50:29.430013866 +0000 UTC m=+1.036220147 container died 35b6bdd646ca9895898b60a837b6bcda3e0dfda9fcfdaadb355a92d43d076ddb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_wilson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  1 09:50:29 np0005464214 systemd[1]: var-lib-containers-storage-overlay-41a4445a15bfbb600f1e5c28788bb62696b4e814a12b3fda8352e3f6c650dfbd-merged.mount: Deactivated successfully.
Oct  1 09:50:29 np0005464214 podman[283480]: 2025-10-01 13:50:29.503055917 +0000 UTC m=+1.109262228 container remove 35b6bdd646ca9895898b60a837b6bcda3e0dfda9fcfdaadb355a92d43d076ddb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_wilson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct  1 09:50:29 np0005464214 systemd[1]: libpod-conmon-35b6bdd646ca9895898b60a837b6bcda3e0dfda9fcfdaadb355a92d43d076ddb.scope: Deactivated successfully.
Oct  1 09:50:30 np0005464214 podman[283658]: 2025-10-01 13:50:30.391798537 +0000 UTC m=+0.046285912 container create eeb3d723f9ccec3a0ac10b7a3471c03568c05586fdd455dfacbf5b6a0a2ab968 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_wescoff, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Oct  1 09:50:30 np0005464214 systemd[1]: Started libpod-conmon-eeb3d723f9ccec3a0ac10b7a3471c03568c05586fdd455dfacbf5b6a0a2ab968.scope.
Oct  1 09:50:30 np0005464214 podman[283658]: 2025-10-01 13:50:30.369842969 +0000 UTC m=+0.024330354 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 09:50:30 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:50:30 np0005464214 podman[283658]: 2025-10-01 13:50:30.486680792 +0000 UTC m=+0.141168237 container init eeb3d723f9ccec3a0ac10b7a3471c03568c05586fdd455dfacbf5b6a0a2ab968 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_wescoff, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 09:50:30 np0005464214 podman[283658]: 2025-10-01 13:50:30.497592499 +0000 UTC m=+0.152079884 container start eeb3d723f9ccec3a0ac10b7a3471c03568c05586fdd455dfacbf5b6a0a2ab968 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_wescoff, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct  1 09:50:30 np0005464214 podman[283658]: 2025-10-01 13:50:30.501375988 +0000 UTC m=+0.155863433 container attach eeb3d723f9ccec3a0ac10b7a3471c03568c05586fdd455dfacbf5b6a0a2ab968 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_wescoff, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 09:50:30 np0005464214 relaxed_wescoff[283674]: 167 167
Oct  1 09:50:30 np0005464214 systemd[1]: libpod-eeb3d723f9ccec3a0ac10b7a3471c03568c05586fdd455dfacbf5b6a0a2ab968.scope: Deactivated successfully.
Oct  1 09:50:30 np0005464214 podman[283658]: 2025-10-01 13:50:30.505064056 +0000 UTC m=+0.159551401 container died eeb3d723f9ccec3a0ac10b7a3471c03568c05586fdd455dfacbf5b6a0a2ab968 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_wescoff, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef)
Oct  1 09:50:30 np0005464214 systemd[1]: var-lib-containers-storage-overlay-1a652e50afd4554dae37605972719d1a159d635f93235325a50a6c305d6f3b26-merged.mount: Deactivated successfully.
Oct  1 09:50:30 np0005464214 podman[283658]: 2025-10-01 13:50:30.545891634 +0000 UTC m=+0.200378979 container remove eeb3d723f9ccec3a0ac10b7a3471c03568c05586fdd455dfacbf5b6a0a2ab968 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_wescoff, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS)
Oct  1 09:50:30 np0005464214 systemd[1]: libpod-conmon-eeb3d723f9ccec3a0ac10b7a3471c03568c05586fdd455dfacbf5b6a0a2ab968.scope: Deactivated successfully.
Oct  1 09:50:30 np0005464214 podman[283698]: 2025-10-01 13:50:30.733454173 +0000 UTC m=+0.048250734 container create a31737cab3085998f9f6eda470a60f13abbb6be8573bb80f83efb90b45ab838c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_mahavira, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef)
Oct  1 09:50:30 np0005464214 systemd[1]: Started libpod-conmon-a31737cab3085998f9f6eda470a60f13abbb6be8573bb80f83efb90b45ab838c.scope.
Oct  1 09:50:30 np0005464214 podman[283698]: 2025-10-01 13:50:30.711611899 +0000 UTC m=+0.026408450 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 09:50:30 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:50:30 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b0f87bff77e94c62c43757feccdf3ee1e80c1cc65c74eea38b356831f3acccc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 09:50:30 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b0f87bff77e94c62c43757feccdf3ee1e80c1cc65c74eea38b356831f3acccc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 09:50:30 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b0f87bff77e94c62c43757feccdf3ee1e80c1cc65c74eea38b356831f3acccc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 09:50:30 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b0f87bff77e94c62c43757feccdf3ee1e80c1cc65c74eea38b356831f3acccc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 09:50:30 np0005464214 podman[283698]: 2025-10-01 13:50:30.846250037 +0000 UTC m=+0.161046598 container init a31737cab3085998f9f6eda470a60f13abbb6be8573bb80f83efb90b45ab838c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_mahavira, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 09:50:30 np0005464214 podman[283698]: 2025-10-01 13:50:30.860959875 +0000 UTC m=+0.175756406 container start a31737cab3085998f9f6eda470a60f13abbb6be8573bb80f83efb90b45ab838c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_mahavira, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True)
Oct  1 09:50:30 np0005464214 podman[283698]: 2025-10-01 13:50:30.864238089 +0000 UTC m=+0.179034620 container attach a31737cab3085998f9f6eda470a60f13abbb6be8573bb80f83efb90b45ab838c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_mahavira, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct  1 09:50:30 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1402: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:50:32 np0005464214 xenodochial_mahavira[283714]: {
Oct  1 09:50:32 np0005464214 xenodochial_mahavira[283714]:    "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982": {
Oct  1 09:50:32 np0005464214 xenodochial_mahavira[283714]:        "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 09:50:32 np0005464214 xenodochial_mahavira[283714]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  1 09:50:32 np0005464214 xenodochial_mahavira[283714]:        "osd_id": 0,
Oct  1 09:50:32 np0005464214 xenodochial_mahavira[283714]:        "osd_uuid": "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982",
Oct  1 09:50:32 np0005464214 xenodochial_mahavira[283714]:        "type": "bluestore"
Oct  1 09:50:32 np0005464214 xenodochial_mahavira[283714]:    },
Oct  1 09:50:32 np0005464214 xenodochial_mahavira[283714]:    "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9": {
Oct  1 09:50:32 np0005464214 xenodochial_mahavira[283714]:        "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 09:50:32 np0005464214 xenodochial_mahavira[283714]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  1 09:50:32 np0005464214 xenodochial_mahavira[283714]:        "osd_id": 2,
Oct  1 09:50:32 np0005464214 xenodochial_mahavira[283714]:        "osd_uuid": "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9",
Oct  1 09:50:32 np0005464214 xenodochial_mahavira[283714]:        "type": "bluestore"
Oct  1 09:50:32 np0005464214 xenodochial_mahavira[283714]:    },
Oct  1 09:50:32 np0005464214 xenodochial_mahavira[283714]:    "f5852bc7-e830-489a-b8a9-42dfbbe71426": {
Oct  1 09:50:32 np0005464214 xenodochial_mahavira[283714]:        "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 09:50:32 np0005464214 xenodochial_mahavira[283714]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  1 09:50:32 np0005464214 xenodochial_mahavira[283714]:        "osd_id": 1,
Oct  1 09:50:32 np0005464214 xenodochial_mahavira[283714]:        "osd_uuid": "f5852bc7-e830-489a-b8a9-42dfbbe71426",
Oct  1 09:50:32 np0005464214 xenodochial_mahavira[283714]:        "type": "bluestore"
Oct  1 09:50:32 np0005464214 xenodochial_mahavira[283714]:    }
Oct  1 09:50:32 np0005464214 xenodochial_mahavira[283714]: }
Oct  1 09:50:32 np0005464214 systemd[1]: libpod-a31737cab3085998f9f6eda470a60f13abbb6be8573bb80f83efb90b45ab838c.scope: Deactivated successfully.
Oct  1 09:50:32 np0005464214 podman[283698]: 2025-10-01 13:50:32.026842601 +0000 UTC m=+1.341639152 container died a31737cab3085998f9f6eda470a60f13abbb6be8573bb80f83efb90b45ab838c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_mahavira, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct  1 09:50:32 np0005464214 systemd[1]: libpod-a31737cab3085998f9f6eda470a60f13abbb6be8573bb80f83efb90b45ab838c.scope: Consumed 1.176s CPU time.
Oct  1 09:50:32 np0005464214 systemd[1]: var-lib-containers-storage-overlay-3b0f87bff77e94c62c43757feccdf3ee1e80c1cc65c74eea38b356831f3acccc-merged.mount: Deactivated successfully.
Oct  1 09:50:32 np0005464214 podman[283698]: 2025-10-01 13:50:32.105985145 +0000 UTC m=+1.420781706 container remove a31737cab3085998f9f6eda470a60f13abbb6be8573bb80f83efb90b45ab838c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_mahavira, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct  1 09:50:32 np0005464214 systemd[1]: libpod-conmon-a31737cab3085998f9f6eda470a60f13abbb6be8573bb80f83efb90b45ab838c.scope: Deactivated successfully.
Oct  1 09:50:32 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  1 09:50:32 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:50:32 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  1 09:50:32 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:50:32 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev dcbdc4dd-a9fd-4d68-a026-0cbb44f02b4b does not exist
Oct  1 09:50:32 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev b61d6677-ebc6-406d-acee-7256aee51707 does not exist
Oct  1 09:50:32 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 09:50:32 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1403: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:50:33 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:50:33 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:50:33 np0005464214 podman[283814]: 2025-10-01 13:50:33.522314718 +0000 UTC m=+0.074438306 container health_status c13c85560319b246b9406aed1b6b3015b2c424d3ffff37d0301b6634f5976b0d (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20250923, org.label-schema.license=GPLv2, container_name=iscsid, managed_by=edpm_ansible, tcib_managed=true, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Oct  1 09:50:33 np0005464214 podman[283813]: 2025-10-01 13:50:33.522173873 +0000 UTC m=+0.074086704 container health_status a1dc94033b3cdaf0c1939938a446bc6911aa0e2bf0fa0c22c3e618390e8d96e1 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_id=multipathd, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20250923, org.label-schema.license=GPLv2)
Oct  1 09:50:33 np0005464214 podman[283815]: 2025-10-01 13:50:33.537889713 +0000 UTC m=+0.083225916 container health_status dfa509645741d50db256c392f2c7e50ef8a1653f296eb853aefbe3d2ba4746c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20250923, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Oct  1 09:50:33 np0005464214 podman[283812]: 2025-10-01 13:50:33.619482496 +0000 UTC m=+0.172054498 container health_status 583cde33fa111e3f81ff9a7adf51b7bee43cd123d54d60383b2d474b3b5066ad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20250923, org.label-schema.license=GPLv2, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct  1 09:50:34 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1404: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:50:36 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1405: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:50:37 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 09:50:38 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1406: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:50:39 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:50:39.839 161890 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=11, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'd2:60:33', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '86:05:a5:2a:6f:f1'}, ipsec=False) old=SB_Global(nb_cfg=10) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  1 09:50:39 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:50:39.841 161890 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  1 09:50:40 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1407: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:50:42 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 09:50:42 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1408: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:50:44 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1409: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:50:45 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:50:45.843 161890 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=7280030e-2ba6-406c-9fae-f8284a927c47, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '11'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  1 09:50:46 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1410: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:50:47 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 09:50:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 09:50:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 09:50:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 09:50:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 09:50:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 09:50:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 09:50:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] Optimize plan auto_2025-10-01_13:50:47
Oct  1 09:50:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  1 09:50:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] do_upmap
Oct  1 09:50:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] pools ['default.rgw.meta', '.rgw.root', 'images', 'backups', 'vms', '.mgr', 'volumes', 'default.rgw.control', 'cephfs.cephfs.data', 'default.rgw.log', 'cephfs.cephfs.meta']
Oct  1 09:50:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] prepared 0/10 changes
Oct  1 09:50:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  1 09:50:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  1 09:50:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  1 09:50:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  1 09:50:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  1 09:50:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  1 09:50:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  1 09:50:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  1 09:50:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  1 09:50:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  1 09:50:48 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1411: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:50:50 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1412: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:50:52 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 09:50:52 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1413: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:50:54 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1414: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:50:55 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  1 09:50:55 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1191837026' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  1 09:50:55 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  1 09:50:55 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1191837026' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  1 09:50:56 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1415: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:50:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] _maybe_adjust
Oct  1 09:50:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:50:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  1 09:50:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:50:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 09:50:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:50:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 09:50:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:50:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 09:50:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:50:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Oct  1 09:50:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:50:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct  1 09:50:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:50:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 09:50:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:50:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  1 09:50:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:50:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  1 09:50:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:50:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 09:50:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:50:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  1 09:50:57 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 09:50:58 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1416: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:51:00 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1417: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:51:02 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 09:51:02 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1418: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:51:04 np0005464214 nova_compute[260022]: 2025-10-01 13:51:04.345 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 09:51:04 np0005464214 podman[283907]: 2025-10-01 13:51:04.536040256 +0000 UTC m=+0.061814605 container health_status dfa509645741d50db256c392f2c7e50ef8a1653f296eb853aefbe3d2ba4746c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250923, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct  1 09:51:04 np0005464214 podman[283895]: 2025-10-01 13:51:04.53613763 +0000 UTC m=+0.082780431 container health_status a1dc94033b3cdaf0c1939938a446bc6911aa0e2bf0fa0c22c3e618390e8d96e1 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20250923, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=36bccb96575468ec919301205d8daa2c, container_name=multipathd)
Oct  1 09:51:04 np0005464214 podman[283894]: 2025-10-01 13:51:04.553180071 +0000 UTC m=+0.102924671 container health_status 583cde33fa111e3f81ff9a7adf51b7bee43cd123d54d60383b2d474b3b5066ad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20250923, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct  1 09:51:04 np0005464214 podman[283896]: 2025-10-01 13:51:04.579825158 +0000 UTC m=+0.111453363 container health_status c13c85560319b246b9406aed1b6b3015b2c424d3ffff37d0301b6634f5976b0d (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20250923, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=iscsid, container_name=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c)
Oct  1 09:51:04 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1419: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:51:06 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1420: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:51:07 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 09:51:08 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1421: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail; 33 KiB/s rd, 0 B/s wr, 55 op/s
Oct  1 09:51:09 np0005464214 nova_compute[260022]: 2025-10-01 13:51:09.344 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 09:51:09 np0005464214 nova_compute[260022]: 2025-10-01 13:51:09.402 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 09:51:09 np0005464214 nova_compute[260022]: 2025-10-01 13:51:09.403 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 09:51:09 np0005464214 nova_compute[260022]: 2025-10-01 13:51:09.403 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 09:51:09 np0005464214 nova_compute[260022]: 2025-10-01 13:51:09.403 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  1 09:51:09 np0005464214 nova_compute[260022]: 2025-10-01 13:51:09.404 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 09:51:09 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  1 09:51:09 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2685868125' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  1 09:51:09 np0005464214 nova_compute[260022]: 2025-10-01 13:51:09.833 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.429s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 09:51:10 np0005464214 nova_compute[260022]: 2025-10-01 13:51:10.013 2 WARNING nova.virt.libvirt.driver [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  1 09:51:10 np0005464214 nova_compute[260022]: 2025-10-01 13:51:10.015 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5125MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  1 09:51:10 np0005464214 nova_compute[260022]: 2025-10-01 13:51:10.015 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 09:51:10 np0005464214 nova_compute[260022]: 2025-10-01 13:51:10.016 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 09:51:10 np0005464214 nova_compute[260022]: 2025-10-01 13:51:10.347 2 INFO nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Instance 3d91db2b-812b-47ea-a0f8-0384b4c68597 has allocations against this compute host but is not found in the database.#033[00m
Oct  1 09:51:10 np0005464214 nova_compute[260022]: 2025-10-01 13:51:10.347 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  1 09:51:10 np0005464214 nova_compute[260022]: 2025-10-01 13:51:10.348 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  1 09:51:10 np0005464214 nova_compute[260022]: 2025-10-01 13:51:10.368 2 DEBUG nova.scheduler.client.report [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Refreshing inventories for resource provider c1b9017d-7e6f-44ea-9ee2-bc19313d736f _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Oct  1 09:51:10 np0005464214 nova_compute[260022]: 2025-10-01 13:51:10.482 2 DEBUG nova.scheduler.client.report [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Updating ProviderTree inventory for provider c1b9017d-7e6f-44ea-9ee2-bc19313d736f from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Oct  1 09:51:10 np0005464214 nova_compute[260022]: 2025-10-01 13:51:10.482 2 DEBUG nova.compute.provider_tree [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Updating inventory in ProviderTree for provider c1b9017d-7e6f-44ea-9ee2-bc19313d736f with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Oct  1 09:51:10 np0005464214 nova_compute[260022]: 2025-10-01 13:51:10.517 2 DEBUG nova.scheduler.client.report [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Refreshing aggregate associations for resource provider c1b9017d-7e6f-44ea-9ee2-bc19313d736f, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Oct  1 09:51:10 np0005464214 nova_compute[260022]: 2025-10-01 13:51:10.535 2 DEBUG nova.scheduler.client.report [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Refreshing trait associations for resource provider c1b9017d-7e6f-44ea-9ee2-bc19313d736f, traits: HW_CPU_X86_SSE42,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_BMI,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_SSE2,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_BMI2,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_RESCUE_BFV,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NODE,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_F16C,HW_CPU_X86_MMX,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_SSE41,COMPUTE_DEVICE_TAGGING,COMPUTE_VOLUME_EXTEND,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_AVX,HW_CPU_X86_ABM,HW_CPU_X86_CLMUL,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_SVM,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_SECURITY_TPM_2_0,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_STORAGE_BUS_FDC,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_AESNI,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_ACCELERATORS,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_AMD_SVM,HW_CPU_X86_AVX2,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_FMA3,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_SSE,HW_CPU_X86_SHA,HW_CPU_X86_SSSE3,HW_CPU_X86_SSE4A _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Oct  1 09:51:10 np0005464214 nova_compute[260022]: 2025-10-01 13:51:10.565 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 09:51:10 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1422: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail; 33 KiB/s rd, 0 B/s wr, 55 op/s
Oct  1 09:51:10 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  1 09:51:10 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3134761460' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  1 09:51:11 np0005464214 nova_compute[260022]: 2025-10-01 13:51:11.011 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.445s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 09:51:11 np0005464214 nova_compute[260022]: 2025-10-01 13:51:11.016 2 DEBUG nova.compute.provider_tree [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Inventory has not changed in ProviderTree for provider: c1b9017d-7e6f-44ea-9ee2-bc19313d736f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  1 09:51:11 np0005464214 nova_compute[260022]: 2025-10-01 13:51:11.129 2 DEBUG nova.scheduler.client.report [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Inventory has not changed for provider c1b9017d-7e6f-44ea-9ee2-bc19313d736f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  1 09:51:11 np0005464214 nova_compute[260022]: 2025-10-01 13:51:11.132 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  1 09:51:11 np0005464214 nova_compute[260022]: 2025-10-01 13:51:11.132 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.117s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 09:51:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:51:12.318 161890 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 09:51:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:51:12.319 161890 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 09:51:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:51:12.319 161890 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 09:51:12 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 09:51:12 np0005464214 ceph-mon[74802]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #63. Immutable memtables: 0.
Oct  1 09:51:12 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:51:12.582631) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  1 09:51:12 np0005464214 ceph-mon[74802]: rocksdb: [db/flush_job.cc:856] [default] [JOB 33] Flushing memtable with next log file: 63
Oct  1 09:51:12 np0005464214 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759326672582674, "job": 33, "event": "flush_started", "num_memtables": 1, "num_entries": 1024, "num_deletes": 250, "total_data_size": 1466643, "memory_usage": 1495640, "flush_reason": "Manual Compaction"}
Oct  1 09:51:12 np0005464214 ceph-mon[74802]: rocksdb: [db/flush_job.cc:885] [default] [JOB 33] Level-0 flush table #64: started
Oct  1 09:51:12 np0005464214 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759326672735325, "cf_name": "default", "job": 33, "event": "table_file_creation", "file_number": 64, "file_size": 874397, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 27827, "largest_seqno": 28850, "table_properties": {"data_size": 870507, "index_size": 1542, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1285, "raw_key_size": 10356, "raw_average_key_size": 20, "raw_value_size": 862091, "raw_average_value_size": 1717, "num_data_blocks": 70, "num_entries": 502, "num_filter_entries": 502, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759326573, "oldest_key_time": 1759326573, "file_creation_time": 1759326672, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e1fb0bd7-c6bf-40f2-8bfb-ffd039289f42", "db_session_id": "NJZTWL88H5HSB4Q4NEC9", "orig_file_number": 64, "seqno_to_time_mapping": "N/A"}}
Oct  1 09:51:12 np0005464214 ceph-mon[74802]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 33] Flush lasted 152760 microseconds, and 3466 cpu microseconds.
Oct  1 09:51:12 np0005464214 ceph-mon[74802]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  1 09:51:12 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:51:12.735396) [db/flush_job.cc:967] [default] [JOB 33] Level-0 flush table #64: 874397 bytes OK
Oct  1 09:51:12 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:51:12.735425) [db/memtable_list.cc:519] [default] Level-0 commit table #64 started
Oct  1 09:51:12 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:51:12.748367) [db/memtable_list.cc:722] [default] Level-0 commit table #64: memtable #1 done
Oct  1 09:51:12 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:51:12.748395) EVENT_LOG_v1 {"time_micros": 1759326672748386, "job": 33, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  1 09:51:12 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:51:12.748422) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  1 09:51:12 np0005464214 ceph-mon[74802]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 33] Try to delete WAL files size 1461841, prev total WAL file size 1461841, number of live WAL files 2.
Oct  1 09:51:12 np0005464214 ceph-mon[74802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000060.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  1 09:51:12 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:51:12.749615) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740031303032' seq:72057594037927935, type:22 .. '6D6772737461740031323533' seq:0, type:0; will stop at (end)
Oct  1 09:51:12 np0005464214 ceph-mon[74802]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 34] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  1 09:51:12 np0005464214 ceph-mon[74802]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 33 Base level 0, inputs: [64(853KB)], [62(9156KB)]
Oct  1 09:51:12 np0005464214 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759326672749666, "job": 34, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [64], "files_L6": [62], "score": -1, "input_data_size": 10250353, "oldest_snapshot_seqno": -1}
Oct  1 09:51:12 np0005464214 ceph-mon[74802]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 34] Generated table #65: 5164 keys, 7529128 bytes, temperature: kUnknown
Oct  1 09:51:12 np0005464214 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759326672863985, "cf_name": "default", "job": 34, "event": "table_file_creation", "file_number": 65, "file_size": 7529128, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7495574, "index_size": 19556, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12933, "raw_key_size": 128218, "raw_average_key_size": 24, "raw_value_size": 7403193, "raw_average_value_size": 1433, "num_data_blocks": 810, "num_entries": 5164, "num_filter_entries": 5164, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759324077, "oldest_key_time": 0, "file_creation_time": 1759326672, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e1fb0bd7-c6bf-40f2-8bfb-ffd039289f42", "db_session_id": "NJZTWL88H5HSB4Q4NEC9", "orig_file_number": 65, "seqno_to_time_mapping": "N/A"}}
Oct  1 09:51:12 np0005464214 ceph-mon[74802]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  1 09:51:12 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:51:12.864345) [db/compaction/compaction_job.cc:1663] [default] [JOB 34] Compacted 1@0 + 1@6 files to L6 => 7529128 bytes
Oct  1 09:51:12 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:51:12.865890) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 89.5 rd, 65.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.8, 8.9 +0.0 blob) out(7.2 +0.0 blob), read-write-amplify(20.3) write-amplify(8.6) OK, records in: 5632, records dropped: 468 output_compression: NoCompression
Oct  1 09:51:12 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:51:12.865913) EVENT_LOG_v1 {"time_micros": 1759326672865902, "job": 34, "event": "compaction_finished", "compaction_time_micros": 114473, "compaction_time_cpu_micros": 34474, "output_level": 6, "num_output_files": 1, "total_output_size": 7529128, "num_input_records": 5632, "num_output_records": 5164, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  1 09:51:12 np0005464214 ceph-mon[74802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000064.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  1 09:51:12 np0005464214 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759326672866232, "job": 34, "event": "table_file_deletion", "file_number": 64}
Oct  1 09:51:12 np0005464214 ceph-mon[74802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000062.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  1 09:51:12 np0005464214 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759326672868467, "job": 34, "event": "table_file_deletion", "file_number": 62}
Oct  1 09:51:12 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:51:12.749463) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 09:51:12 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:51:12.868625) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 09:51:12 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:51:12.868635) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 09:51:12 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:51:12.868638) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 09:51:12 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:51:12.868641) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 09:51:12 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:51:12.868645) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 09:51:12 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1423: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Oct  1 09:51:13 np0005464214 nova_compute[260022]: 2025-10-01 13:51:13.134 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 09:51:13 np0005464214 nova_compute[260022]: 2025-10-01 13:51:13.135 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  1 09:51:13 np0005464214 nova_compute[260022]: 2025-10-01 13:51:13.341 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 09:51:13 np0005464214 nova_compute[260022]: 2025-10-01 13:51:13.732 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 09:51:14 np0005464214 nova_compute[260022]: 2025-10-01 13:51:14.364 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 09:51:14 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1424: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Oct  1 09:51:15 np0005464214 nova_compute[260022]: 2025-10-01 13:51:15.344 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 09:51:16 np0005464214 nova_compute[260022]: 2025-10-01 13:51:16.345 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 09:51:16 np0005464214 nova_compute[260022]: 2025-10-01 13:51:16.345 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  1 09:51:16 np0005464214 nova_compute[260022]: 2025-10-01 13:51:16.345 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  1 09:51:16 np0005464214 nova_compute[260022]: 2025-10-01 13:51:16.374 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  1 09:51:16 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1425: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Oct  1 09:51:17 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 09:51:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 09:51:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 09:51:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 09:51:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 09:51:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 09:51:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 09:51:18 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1426: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Oct  1 09:51:20 np0005464214 nova_compute[260022]: 2025-10-01 13:51:20.344 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 09:51:20 np0005464214 nova_compute[260022]: 2025-10-01 13:51:20.345 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 09:51:20 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1427: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 0 B/s wr, 3 op/s
Oct  1 09:51:22 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 09:51:22 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1428: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 0 B/s wr, 3 op/s
Oct  1 09:51:24 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1429: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:51:26 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1430: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:51:27 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 09:51:28 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1431: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:51:30 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1432: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:51:32 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 09:51:32 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1433: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:51:33 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  1 09:51:33 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  1 09:51:33 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  1 09:51:33 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  1 09:51:33 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  1 09:51:33 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:51:33 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev 00ed4a34-4ead-4fef-9705-6c6c18ee4b13 does not exist
Oct  1 09:51:33 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev 2e934731-a173-4b6f-be5b-e08f3cd48b72 does not exist
Oct  1 09:51:33 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev 1682542a-6cd6-4a8e-becf-afe2f83c2db9 does not exist
Oct  1 09:51:33 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  1 09:51:33 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  1 09:51:33 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  1 09:51:33 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  1 09:51:33 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  1 09:51:33 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  1 09:51:34 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  1 09:51:34 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:51:34 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  1 09:51:34 np0005464214 podman[284287]: 2025-10-01 13:51:34.179171805 +0000 UTC m=+0.047965355 container create 40c70cd9599215b085d5eac57ddd5132a9b61c235262bf6fa2949a9d53c40f28 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_dirac, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 09:51:34 np0005464214 systemd[1]: Started libpod-conmon-40c70cd9599215b085d5eac57ddd5132a9b61c235262bf6fa2949a9d53c40f28.scope.
Oct  1 09:51:34 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:51:34 np0005464214 podman[284287]: 2025-10-01 13:51:34.157263369 +0000 UTC m=+0.026056949 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 09:51:34 np0005464214 podman[284287]: 2025-10-01 13:51:34.274497044 +0000 UTC m=+0.143290674 container init 40c70cd9599215b085d5eac57ddd5132a9b61c235262bf6fa2949a9d53c40f28 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_dirac, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct  1 09:51:34 np0005464214 podman[284287]: 2025-10-01 13:51:34.286555787 +0000 UTC m=+0.155349347 container start 40c70cd9599215b085d5eac57ddd5132a9b61c235262bf6fa2949a9d53c40f28 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_dirac, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct  1 09:51:34 np0005464214 podman[284287]: 2025-10-01 13:51:34.290866694 +0000 UTC m=+0.159660274 container attach 40c70cd9599215b085d5eac57ddd5132a9b61c235262bf6fa2949a9d53c40f28 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_dirac, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Oct  1 09:51:34 np0005464214 gracious_dirac[284304]: 167 167
Oct  1 09:51:34 np0005464214 systemd[1]: libpod-40c70cd9599215b085d5eac57ddd5132a9b61c235262bf6fa2949a9d53c40f28.scope: Deactivated successfully.
Oct  1 09:51:34 np0005464214 conmon[284304]: conmon 40c70cd9599215b085d5 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-40c70cd9599215b085d5eac57ddd5132a9b61c235262bf6fa2949a9d53c40f28.scope/container/memory.events
Oct  1 09:51:34 np0005464214 podman[284287]: 2025-10-01 13:51:34.296898946 +0000 UTC m=+0.165692536 container died 40c70cd9599215b085d5eac57ddd5132a9b61c235262bf6fa2949a9d53c40f28 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_dirac, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Oct  1 09:51:34 np0005464214 systemd[1]: var-lib-containers-storage-overlay-47e7934eb05872890f02f5aa9663130eadfff80c5388599286459a81bc6df5ab-merged.mount: Deactivated successfully.
Oct  1 09:51:34 np0005464214 podman[284287]: 2025-10-01 13:51:34.359513815 +0000 UTC m=+0.228307405 container remove 40c70cd9599215b085d5eac57ddd5132a9b61c235262bf6fa2949a9d53c40f28 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_dirac, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507)
Oct  1 09:51:34 np0005464214 systemd[1]: libpod-conmon-40c70cd9599215b085d5eac57ddd5132a9b61c235262bf6fa2949a9d53c40f28.scope: Deactivated successfully.
Oct  1 09:51:34 np0005464214 podman[284328]: 2025-10-01 13:51:34.57241003 +0000 UTC m=+0.072472244 container create c6cc770a4c0c6374b9b7c1c1486200d2597c05d04b04c5156ff8413760b5ead2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_bell, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Oct  1 09:51:34 np0005464214 systemd[1]: Started libpod-conmon-c6cc770a4c0c6374b9b7c1c1486200d2597c05d04b04c5156ff8413760b5ead2.scope.
Oct  1 09:51:34 np0005464214 podman[284328]: 2025-10-01 13:51:34.545682651 +0000 UTC m=+0.045744965 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 09:51:34 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:51:34 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b3782d9e02054e42a3449bbb528115879b833b37340220cf4f54bc8c95f68f89/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 09:51:34 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b3782d9e02054e42a3449bbb528115879b833b37340220cf4f54bc8c95f68f89/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 09:51:34 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b3782d9e02054e42a3449bbb528115879b833b37340220cf4f54bc8c95f68f89/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 09:51:34 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b3782d9e02054e42a3449bbb528115879b833b37340220cf4f54bc8c95f68f89/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 09:51:34 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b3782d9e02054e42a3449bbb528115879b833b37340220cf4f54bc8c95f68f89/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  1 09:51:34 np0005464214 podman[284328]: 2025-10-01 13:51:34.675819825 +0000 UTC m=+0.175882099 container init c6cc770a4c0c6374b9b7c1c1486200d2597c05d04b04c5156ff8413760b5ead2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_bell, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 09:51:34 np0005464214 podman[284328]: 2025-10-01 13:51:34.686451174 +0000 UTC m=+0.186513418 container start c6cc770a4c0c6374b9b7c1c1486200d2597c05d04b04c5156ff8413760b5ead2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_bell, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Oct  1 09:51:34 np0005464214 podman[284328]: 2025-10-01 13:51:34.690774101 +0000 UTC m=+0.190836355 container attach c6cc770a4c0c6374b9b7c1c1486200d2597c05d04b04c5156ff8413760b5ead2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_bell, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 09:51:34 np0005464214 podman[284345]: 2025-10-01 13:51:34.734776229 +0000 UTC m=+0.104713628 container health_status a1dc94033b3cdaf0c1939938a446bc6911aa0e2bf0fa0c22c3e618390e8d96e1 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20250923, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c)
Oct  1 09:51:34 np0005464214 podman[284346]: 2025-10-01 13:51:34.734625214 +0000 UTC m=+0.099786422 container health_status dfa509645741d50db256c392f2c7e50ef8a1653f296eb853aefbe3d2ba4746c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20250923)
Oct  1 09:51:34 np0005464214 podman[284348]: 2025-10-01 13:51:34.753716611 +0000 UTC m=+0.117736602 container health_status c13c85560319b246b9406aed1b6b3015b2c424d3ffff37d0301b6634f5976b0d (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=iscsid, org.label-schema.build-date=20250923, org.label-schema.schema-version=1.0, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true)
Oct  1 09:51:34 np0005464214 podman[284342]: 2025-10-01 13:51:34.76186191 +0000 UTC m=+0.131695146 container health_status 583cde33fa111e3f81ff9a7adf51b7bee43cd123d54d60383b2d474b3b5066ad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20250923, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_id=ovn_controller)
Oct  1 09:51:34 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1434: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:51:35 np0005464214 sad_bell[284347]: --> passed data devices: 0 physical, 3 LVM
Oct  1 09:51:35 np0005464214 sad_bell[284347]: --> relative data size: 1.0
Oct  1 09:51:35 np0005464214 sad_bell[284347]: --> All data devices are unavailable
Oct  1 09:51:35 np0005464214 systemd[1]: libpod-c6cc770a4c0c6374b9b7c1c1486200d2597c05d04b04c5156ff8413760b5ead2.scope: Deactivated successfully.
Oct  1 09:51:35 np0005464214 systemd[1]: libpod-c6cc770a4c0c6374b9b7c1c1486200d2597c05d04b04c5156ff8413760b5ead2.scope: Consumed 1.238s CPU time.
Oct  1 09:51:35 np0005464214 podman[284328]: 2025-10-01 13:51:35.966103765 +0000 UTC m=+1.466166019 container died c6cc770a4c0c6374b9b7c1c1486200d2597c05d04b04c5156ff8413760b5ead2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_bell, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 09:51:36 np0005464214 systemd[1]: var-lib-containers-storage-overlay-b3782d9e02054e42a3449bbb528115879b833b37340220cf4f54bc8c95f68f89-merged.mount: Deactivated successfully.
Oct  1 09:51:36 np0005464214 podman[284328]: 2025-10-01 13:51:36.034744845 +0000 UTC m=+1.534807059 container remove c6cc770a4c0c6374b9b7c1c1486200d2597c05d04b04c5156ff8413760b5ead2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_bell, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 09:51:36 np0005464214 systemd[1]: libpod-conmon-c6cc770a4c0c6374b9b7c1c1486200d2597c05d04b04c5156ff8413760b5ead2.scope: Deactivated successfully.
Oct  1 09:51:36 np0005464214 podman[284607]: 2025-10-01 13:51:36.93591177 +0000 UTC m=+0.068472876 container create c049a7e73b83a4774257a8e7634dc11cd49b1c41b4128433ecefbe580ad57cff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_antonelli, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  1 09:51:36 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1435: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:51:36 np0005464214 systemd[1]: Started libpod-conmon-c049a7e73b83a4774257a8e7634dc11cd49b1c41b4128433ecefbe580ad57cff.scope.
Oct  1 09:51:37 np0005464214 podman[284607]: 2025-10-01 13:51:36.907572199 +0000 UTC m=+0.040133365 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 09:51:37 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:51:37 np0005464214 podman[284607]: 2025-10-01 13:51:37.039221382 +0000 UTC m=+0.171782548 container init c049a7e73b83a4774257a8e7634dc11cd49b1c41b4128433ecefbe580ad57cff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_antonelli, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 09:51:37 np0005464214 podman[284607]: 2025-10-01 13:51:37.050450799 +0000 UTC m=+0.183011915 container start c049a7e73b83a4774257a8e7634dc11cd49b1c41b4128433ecefbe580ad57cff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_antonelli, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct  1 09:51:37 np0005464214 podman[284607]: 2025-10-01 13:51:37.054660373 +0000 UTC m=+0.187221489 container attach c049a7e73b83a4774257a8e7634dc11cd49b1c41b4128433ecefbe580ad57cff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_antonelli, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 09:51:37 np0005464214 peaceful_antonelli[284623]: 167 167
Oct  1 09:51:37 np0005464214 systemd[1]: libpod-c049a7e73b83a4774257a8e7634dc11cd49b1c41b4128433ecefbe580ad57cff.scope: Deactivated successfully.
Oct  1 09:51:37 np0005464214 podman[284607]: 2025-10-01 13:51:37.060039994 +0000 UTC m=+0.192601140 container died c049a7e73b83a4774257a8e7634dc11cd49b1c41b4128433ecefbe580ad57cff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_antonelli, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 09:51:37 np0005464214 systemd[1]: var-lib-containers-storage-overlay-1165e90d372ad06ca503a2765e2ce4c1b41f7b1d3a0c0b6189ada5b1cfcb089d-merged.mount: Deactivated successfully.
Oct  1 09:51:37 np0005464214 podman[284607]: 2025-10-01 13:51:37.152136911 +0000 UTC m=+0.284697987 container remove c049a7e73b83a4774257a8e7634dc11cd49b1c41b4128433ecefbe580ad57cff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_antonelli, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct  1 09:51:37 np0005464214 systemd[1]: libpod-conmon-c049a7e73b83a4774257a8e7634dc11cd49b1c41b4128433ecefbe580ad57cff.scope: Deactivated successfully.
Oct  1 09:51:37 np0005464214 podman[284645]: 2025-10-01 13:51:37.365876293 +0000 UTC m=+0.068262241 container create e5e13e14bd462d5ad36d0c88730eb59e4db42cbc1891a9188574f7fe9f4d99ed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_elgamal, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3)
Oct  1 09:51:37 np0005464214 systemd[1]: Started libpod-conmon-e5e13e14bd462d5ad36d0c88730eb59e4db42cbc1891a9188574f7fe9f4d99ed.scope.
Oct  1 09:51:37 np0005464214 podman[284645]: 2025-10-01 13:51:37.337132669 +0000 UTC m=+0.039518677 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 09:51:37 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:51:37 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/53f83d7d81a7fe849939df0a446f0efc7cf10863a39f6469333c12ba9e0cbd07/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 09:51:37 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/53f83d7d81a7fe849939df0a446f0efc7cf10863a39f6469333c12ba9e0cbd07/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 09:51:37 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/53f83d7d81a7fe849939df0a446f0efc7cf10863a39f6469333c12ba9e0cbd07/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 09:51:37 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/53f83d7d81a7fe849939df0a446f0efc7cf10863a39f6469333c12ba9e0cbd07/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 09:51:37 np0005464214 podman[284645]: 2025-10-01 13:51:37.478335615 +0000 UTC m=+0.180721593 container init e5e13e14bd462d5ad36d0c88730eb59e4db42cbc1891a9188574f7fe9f4d99ed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_elgamal, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 09:51:37 np0005464214 podman[284645]: 2025-10-01 13:51:37.491707201 +0000 UTC m=+0.194093149 container start e5e13e14bd462d5ad36d0c88730eb59e4db42cbc1891a9188574f7fe9f4d99ed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_elgamal, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 09:51:37 np0005464214 podman[284645]: 2025-10-01 13:51:37.495517321 +0000 UTC m=+0.197903289 container attach e5e13e14bd462d5ad36d0c88730eb59e4db42cbc1891a9188574f7fe9f4d99ed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_elgamal, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Oct  1 09:51:37 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 09:51:38 np0005464214 eloquent_elgamal[284661]: {
Oct  1 09:51:38 np0005464214 eloquent_elgamal[284661]:    "0": [
Oct  1 09:51:38 np0005464214 eloquent_elgamal[284661]:        {
Oct  1 09:51:38 np0005464214 eloquent_elgamal[284661]:            "devices": [
Oct  1 09:51:38 np0005464214 eloquent_elgamal[284661]:                "/dev/loop3"
Oct  1 09:51:38 np0005464214 eloquent_elgamal[284661]:            ],
Oct  1 09:51:38 np0005464214 eloquent_elgamal[284661]:            "lv_name": "ceph_lv0",
Oct  1 09:51:38 np0005464214 eloquent_elgamal[284661]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  1 09:51:38 np0005464214 eloquent_elgamal[284661]:            "lv_size": "21470642176",
Oct  1 09:51:38 np0005464214 eloquent_elgamal[284661]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 09:51:38 np0005464214 eloquent_elgamal[284661]:            "lv_uuid": "wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS",
Oct  1 09:51:38 np0005464214 eloquent_elgamal[284661]:            "name": "ceph_lv0",
Oct  1 09:51:38 np0005464214 eloquent_elgamal[284661]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  1 09:51:38 np0005464214 eloquent_elgamal[284661]:            "tags": {
Oct  1 09:51:38 np0005464214 eloquent_elgamal[284661]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  1 09:51:38 np0005464214 eloquent_elgamal[284661]:                "ceph.block_uuid": "wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS",
Oct  1 09:51:38 np0005464214 eloquent_elgamal[284661]:                "ceph.cephx_lockbox_secret": "",
Oct  1 09:51:38 np0005464214 eloquent_elgamal[284661]:                "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 09:51:38 np0005464214 eloquent_elgamal[284661]:                "ceph.cluster_name": "ceph",
Oct  1 09:51:38 np0005464214 eloquent_elgamal[284661]:                "ceph.crush_device_class": "",
Oct  1 09:51:38 np0005464214 eloquent_elgamal[284661]:                "ceph.encrypted": "0",
Oct  1 09:51:38 np0005464214 eloquent_elgamal[284661]:                "ceph.osd_fsid": "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982",
Oct  1 09:51:38 np0005464214 eloquent_elgamal[284661]:                "ceph.osd_id": "0",
Oct  1 09:51:38 np0005464214 eloquent_elgamal[284661]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 09:51:38 np0005464214 eloquent_elgamal[284661]:                "ceph.type": "block",
Oct  1 09:51:38 np0005464214 eloquent_elgamal[284661]:                "ceph.vdo": "0"
Oct  1 09:51:38 np0005464214 eloquent_elgamal[284661]:            },
Oct  1 09:51:38 np0005464214 eloquent_elgamal[284661]:            "type": "block",
Oct  1 09:51:38 np0005464214 eloquent_elgamal[284661]:            "vg_name": "ceph_vg0"
Oct  1 09:51:38 np0005464214 eloquent_elgamal[284661]:        }
Oct  1 09:51:38 np0005464214 eloquent_elgamal[284661]:    ],
Oct  1 09:51:38 np0005464214 eloquent_elgamal[284661]:    "1": [
Oct  1 09:51:38 np0005464214 eloquent_elgamal[284661]:        {
Oct  1 09:51:38 np0005464214 eloquent_elgamal[284661]:            "devices": [
Oct  1 09:51:38 np0005464214 eloquent_elgamal[284661]:                "/dev/loop4"
Oct  1 09:51:38 np0005464214 eloquent_elgamal[284661]:            ],
Oct  1 09:51:38 np0005464214 eloquent_elgamal[284661]:            "lv_name": "ceph_lv1",
Oct  1 09:51:38 np0005464214 eloquent_elgamal[284661]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  1 09:51:38 np0005464214 eloquent_elgamal[284661]:            "lv_size": "21470642176",
Oct  1 09:51:38 np0005464214 eloquent_elgamal[284661]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f5852bc7-e830-489a-b8a9-42dfbbe71426,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 09:51:38 np0005464214 eloquent_elgamal[284661]:            "lv_uuid": "BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY",
Oct  1 09:51:38 np0005464214 eloquent_elgamal[284661]:            "name": "ceph_lv1",
Oct  1 09:51:38 np0005464214 eloquent_elgamal[284661]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  1 09:51:38 np0005464214 eloquent_elgamal[284661]:            "tags": {
Oct  1 09:51:38 np0005464214 eloquent_elgamal[284661]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  1 09:51:38 np0005464214 eloquent_elgamal[284661]:                "ceph.block_uuid": "BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY",
Oct  1 09:51:38 np0005464214 eloquent_elgamal[284661]:                "ceph.cephx_lockbox_secret": "",
Oct  1 09:51:38 np0005464214 eloquent_elgamal[284661]:                "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 09:51:38 np0005464214 eloquent_elgamal[284661]:                "ceph.cluster_name": "ceph",
Oct  1 09:51:38 np0005464214 eloquent_elgamal[284661]:                "ceph.crush_device_class": "",
Oct  1 09:51:38 np0005464214 eloquent_elgamal[284661]:                "ceph.encrypted": "0",
Oct  1 09:51:38 np0005464214 eloquent_elgamal[284661]:                "ceph.osd_fsid": "f5852bc7-e830-489a-b8a9-42dfbbe71426",
Oct  1 09:51:38 np0005464214 eloquent_elgamal[284661]:                "ceph.osd_id": "1",
Oct  1 09:51:38 np0005464214 eloquent_elgamal[284661]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 09:51:38 np0005464214 eloquent_elgamal[284661]:                "ceph.type": "block",
Oct  1 09:51:38 np0005464214 eloquent_elgamal[284661]:                "ceph.vdo": "0"
Oct  1 09:51:38 np0005464214 eloquent_elgamal[284661]:            },
Oct  1 09:51:38 np0005464214 eloquent_elgamal[284661]:            "type": "block",
Oct  1 09:51:38 np0005464214 eloquent_elgamal[284661]:            "vg_name": "ceph_vg1"
Oct  1 09:51:38 np0005464214 eloquent_elgamal[284661]:        }
Oct  1 09:51:38 np0005464214 eloquent_elgamal[284661]:    ],
Oct  1 09:51:38 np0005464214 eloquent_elgamal[284661]:    "2": [
Oct  1 09:51:38 np0005464214 eloquent_elgamal[284661]:        {
Oct  1 09:51:38 np0005464214 eloquent_elgamal[284661]:            "devices": [
Oct  1 09:51:38 np0005464214 eloquent_elgamal[284661]:                "/dev/loop5"
Oct  1 09:51:38 np0005464214 eloquent_elgamal[284661]:            ],
Oct  1 09:51:38 np0005464214 eloquent_elgamal[284661]:            "lv_name": "ceph_lv2",
Oct  1 09:51:38 np0005464214 eloquent_elgamal[284661]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  1 09:51:38 np0005464214 eloquent_elgamal[284661]:            "lv_size": "21470642176",
Oct  1 09:51:38 np0005464214 eloquent_elgamal[284661]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c4c937e2-a8a8-47c3-af37-fdedb6fff1f9,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 09:51:38 np0005464214 eloquent_elgamal[284661]:            "lv_uuid": "mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK",
Oct  1 09:51:38 np0005464214 eloquent_elgamal[284661]:            "name": "ceph_lv2",
Oct  1 09:51:38 np0005464214 eloquent_elgamal[284661]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  1 09:51:38 np0005464214 eloquent_elgamal[284661]:            "tags": {
Oct  1 09:51:38 np0005464214 eloquent_elgamal[284661]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  1 09:51:38 np0005464214 eloquent_elgamal[284661]:                "ceph.block_uuid": "mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK",
Oct  1 09:51:38 np0005464214 eloquent_elgamal[284661]:                "ceph.cephx_lockbox_secret": "",
Oct  1 09:51:38 np0005464214 eloquent_elgamal[284661]:                "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 09:51:38 np0005464214 eloquent_elgamal[284661]:                "ceph.cluster_name": "ceph",
Oct  1 09:51:38 np0005464214 eloquent_elgamal[284661]:                "ceph.crush_device_class": "",
Oct  1 09:51:38 np0005464214 eloquent_elgamal[284661]:                "ceph.encrypted": "0",
Oct  1 09:51:38 np0005464214 eloquent_elgamal[284661]:                "ceph.osd_fsid": "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9",
Oct  1 09:51:38 np0005464214 eloquent_elgamal[284661]:                "ceph.osd_id": "2",
Oct  1 09:51:38 np0005464214 eloquent_elgamal[284661]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 09:51:38 np0005464214 eloquent_elgamal[284661]:                "ceph.type": "block",
Oct  1 09:51:38 np0005464214 eloquent_elgamal[284661]:                "ceph.vdo": "0"
Oct  1 09:51:38 np0005464214 eloquent_elgamal[284661]:            },
Oct  1 09:51:38 np0005464214 eloquent_elgamal[284661]:            "type": "block",
Oct  1 09:51:38 np0005464214 eloquent_elgamal[284661]:            "vg_name": "ceph_vg2"
Oct  1 09:51:38 np0005464214 eloquent_elgamal[284661]:        }
Oct  1 09:51:38 np0005464214 eloquent_elgamal[284661]:    ]
Oct  1 09:51:38 np0005464214 eloquent_elgamal[284661]: }
Oct  1 09:51:38 np0005464214 systemd[1]: libpod-e5e13e14bd462d5ad36d0c88730eb59e4db42cbc1891a9188574f7fe9f4d99ed.scope: Deactivated successfully.
Oct  1 09:51:38 np0005464214 podman[284645]: 2025-10-01 13:51:38.273542713 +0000 UTC m=+0.975928671 container died e5e13e14bd462d5ad36d0c88730eb59e4db42cbc1891a9188574f7fe9f4d99ed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_elgamal, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True)
Oct  1 09:51:38 np0005464214 systemd[1]: var-lib-containers-storage-overlay-53f83d7d81a7fe849939df0a446f0efc7cf10863a39f6469333c12ba9e0cbd07-merged.mount: Deactivated successfully.
Oct  1 09:51:38 np0005464214 podman[284645]: 2025-10-01 13:51:38.362071576 +0000 UTC m=+1.064457534 container remove e5e13e14bd462d5ad36d0c88730eb59e4db42cbc1891a9188574f7fe9f4d99ed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_elgamal, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Oct  1 09:51:38 np0005464214 systemd[1]: libpod-conmon-e5e13e14bd462d5ad36d0c88730eb59e4db42cbc1891a9188574f7fe9f4d99ed.scope: Deactivated successfully.
Oct  1 09:51:38 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1436: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:51:39 np0005464214 podman[284824]: 2025-10-01 13:51:39.268426466 +0000 UTC m=+0.051970862 container create 2d1c5ec8c4f656799f2e355ac97321025295de41412ad15bb6c24e05a19f5768 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_mclean, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Oct  1 09:51:39 np0005464214 systemd[1]: Starting dnf makecache...
Oct  1 09:51:39 np0005464214 systemd[1]: Started libpod-conmon-2d1c5ec8c4f656799f2e355ac97321025295de41412ad15bb6c24e05a19f5768.scope.
Oct  1 09:51:39 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:51:39 np0005464214 podman[284824]: 2025-10-01 13:51:39.24432943 +0000 UTC m=+0.027873866 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 09:51:39 np0005464214 podman[284824]: 2025-10-01 13:51:39.362619788 +0000 UTC m=+0.146164244 container init 2d1c5ec8c4f656799f2e355ac97321025295de41412ad15bb6c24e05a19f5768 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_mclean, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 09:51:39 np0005464214 podman[284824]: 2025-10-01 13:51:39.372443761 +0000 UTC m=+0.155988157 container start 2d1c5ec8c4f656799f2e355ac97321025295de41412ad15bb6c24e05a19f5768 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_mclean, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Oct  1 09:51:39 np0005464214 podman[284824]: 2025-10-01 13:51:39.377047787 +0000 UTC m=+0.160592253 container attach 2d1c5ec8c4f656799f2e355ac97321025295de41412ad15bb6c24e05a19f5768 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_mclean, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 09:51:39 np0005464214 crazy_mclean[284841]: 167 167
Oct  1 09:51:39 np0005464214 systemd[1]: libpod-2d1c5ec8c4f656799f2e355ac97321025295de41412ad15bb6c24e05a19f5768.scope: Deactivated successfully.
Oct  1 09:51:39 np0005464214 podman[284824]: 2025-10-01 13:51:39.38312125 +0000 UTC m=+0.166665666 container died 2d1c5ec8c4f656799f2e355ac97321025295de41412ad15bb6c24e05a19f5768 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_mclean, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default)
Oct  1 09:51:39 np0005464214 systemd[1]: var-lib-containers-storage-overlay-a1db7145109f903f47cff74ce17fce6062b69cb1f19d969b3b846a08daa8e3d2-merged.mount: Deactivated successfully.
Oct  1 09:51:39 np0005464214 podman[284824]: 2025-10-01 13:51:39.429081891 +0000 UTC m=+0.212626247 container remove 2d1c5ec8c4f656799f2e355ac97321025295de41412ad15bb6c24e05a19f5768 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_mclean, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0)
Oct  1 09:51:39 np0005464214 systemd[1]: libpod-conmon-2d1c5ec8c4f656799f2e355ac97321025295de41412ad15bb6c24e05a19f5768.scope: Deactivated successfully.
Oct  1 09:51:39 np0005464214 dnf[284838]: Metadata cache refreshed recently.
Oct  1 09:51:39 np0005464214 systemd[1]: dnf-makecache.service: Deactivated successfully.
Oct  1 09:51:39 np0005464214 systemd[1]: Finished dnf makecache.
Oct  1 09:51:39 np0005464214 podman[284864]: 2025-10-01 13:51:39.607578572 +0000 UTC m=+0.060118851 container create 0bb749ab0acb4ef14716cc23b5342a89d829803f160d50973f06f5d09e91458c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_lewin, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct  1 09:51:39 np0005464214 systemd[1]: Started libpod-conmon-0bb749ab0acb4ef14716cc23b5342a89d829803f160d50973f06f5d09e91458c.scope.
Oct  1 09:51:39 np0005464214 podman[284864]: 2025-10-01 13:51:39.576572667 +0000 UTC m=+0.029112956 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 09:51:39 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:51:39 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6e4523d3bf8b4977ead63c01aa55276eedb3906c4b874799918239e30ffdbefb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 09:51:39 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6e4523d3bf8b4977ead63c01aa55276eedb3906c4b874799918239e30ffdbefb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 09:51:39 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6e4523d3bf8b4977ead63c01aa55276eedb3906c4b874799918239e30ffdbefb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 09:51:39 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6e4523d3bf8b4977ead63c01aa55276eedb3906c4b874799918239e30ffdbefb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 09:51:39 np0005464214 podman[284864]: 2025-10-01 13:51:39.692103858 +0000 UTC m=+0.144644147 container init 0bb749ab0acb4ef14716cc23b5342a89d829803f160d50973f06f5d09e91458c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_lewin, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 09:51:39 np0005464214 podman[284864]: 2025-10-01 13:51:39.704061297 +0000 UTC m=+0.156601566 container start 0bb749ab0acb4ef14716cc23b5342a89d829803f160d50973f06f5d09e91458c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_lewin, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Oct  1 09:51:39 np0005464214 podman[284864]: 2025-10-01 13:51:39.707646162 +0000 UTC m=+0.160186431 container attach 0bb749ab0acb4ef14716cc23b5342a89d829803f160d50973f06f5d09e91458c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_lewin, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 09:51:40 np0005464214 priceless_lewin[284880]: {
Oct  1 09:51:40 np0005464214 priceless_lewin[284880]:    "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982": {
Oct  1 09:51:40 np0005464214 priceless_lewin[284880]:        "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 09:51:40 np0005464214 priceless_lewin[284880]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  1 09:51:40 np0005464214 priceless_lewin[284880]:        "osd_id": 0,
Oct  1 09:51:40 np0005464214 priceless_lewin[284880]:        "osd_uuid": "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982",
Oct  1 09:51:40 np0005464214 priceless_lewin[284880]:        "type": "bluestore"
Oct  1 09:51:40 np0005464214 priceless_lewin[284880]:    },
Oct  1 09:51:40 np0005464214 priceless_lewin[284880]:    "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9": {
Oct  1 09:51:40 np0005464214 priceless_lewin[284880]:        "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 09:51:40 np0005464214 priceless_lewin[284880]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  1 09:51:40 np0005464214 priceless_lewin[284880]:        "osd_id": 2,
Oct  1 09:51:40 np0005464214 priceless_lewin[284880]:        "osd_uuid": "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9",
Oct  1 09:51:40 np0005464214 priceless_lewin[284880]:        "type": "bluestore"
Oct  1 09:51:40 np0005464214 priceless_lewin[284880]:    },
Oct  1 09:51:40 np0005464214 priceless_lewin[284880]:    "f5852bc7-e830-489a-b8a9-42dfbbe71426": {
Oct  1 09:51:40 np0005464214 priceless_lewin[284880]:        "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 09:51:40 np0005464214 priceless_lewin[284880]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  1 09:51:40 np0005464214 priceless_lewin[284880]:        "osd_id": 1,
Oct  1 09:51:40 np0005464214 priceless_lewin[284880]:        "osd_uuid": "f5852bc7-e830-489a-b8a9-42dfbbe71426",
Oct  1 09:51:40 np0005464214 priceless_lewin[284880]:        "type": "bluestore"
Oct  1 09:51:40 np0005464214 priceless_lewin[284880]:    }
Oct  1 09:51:40 np0005464214 priceless_lewin[284880]: }
Oct  1 09:51:40 np0005464214 systemd[1]: libpod-0bb749ab0acb4ef14716cc23b5342a89d829803f160d50973f06f5d09e91458c.scope: Deactivated successfully.
Oct  1 09:51:40 np0005464214 podman[284864]: 2025-10-01 13:51:40.809943817 +0000 UTC m=+1.262484096 container died 0bb749ab0acb4ef14716cc23b5342a89d829803f160d50973f06f5d09e91458c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_lewin, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 09:51:40 np0005464214 systemd[1]: libpod-0bb749ab0acb4ef14716cc23b5342a89d829803f160d50973f06f5d09e91458c.scope: Consumed 1.116s CPU time.
Oct  1 09:51:40 np0005464214 systemd[1]: var-lib-containers-storage-overlay-6e4523d3bf8b4977ead63c01aa55276eedb3906c4b874799918239e30ffdbefb-merged.mount: Deactivated successfully.
Oct  1 09:51:40 np0005464214 podman[284864]: 2025-10-01 13:51:40.873152216 +0000 UTC m=+1.325692455 container remove 0bb749ab0acb4ef14716cc23b5342a89d829803f160d50973f06f5d09e91458c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_lewin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct  1 09:51:40 np0005464214 systemd[1]: libpod-conmon-0bb749ab0acb4ef14716cc23b5342a89d829803f160d50973f06f5d09e91458c.scope: Deactivated successfully.
Oct  1 09:51:40 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  1 09:51:40 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:51:40 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  1 09:51:40 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:51:40 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev 41556639-a1de-4205-9fdb-4dd58e21ed4e does not exist
Oct  1 09:51:40 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev 2bf69889-189f-4359-a1d2-40b0fcf422ca does not exist
Oct  1 09:51:40 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1437: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:51:41 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:51:41.525 161890 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=12, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'd2:60:33', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '86:05:a5:2a:6f:f1'}, ipsec=False) old=SB_Global(nb_cfg=11) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  1 09:51:41 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:51:41.529 161890 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  1 09:51:41 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:51:41 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:51:42 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 09:51:42 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1438: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:51:44 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1439: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:51:46 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1440: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:51:47 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 09:51:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 09:51:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 09:51:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 09:51:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 09:51:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 09:51:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 09:51:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] Optimize plan auto_2025-10-01_13:51:47
Oct  1 09:51:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  1 09:51:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] do_upmap
Oct  1 09:51:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] pools ['.rgw.root', 'cephfs.cephfs.data', 'vms', '.mgr', 'images', 'cephfs.cephfs.meta', 'default.rgw.log', 'backups', 'volumes', 'default.rgw.meta', 'default.rgw.control']
Oct  1 09:51:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] prepared 0/10 changes
Oct  1 09:51:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  1 09:51:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  1 09:51:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  1 09:51:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  1 09:51:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  1 09:51:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  1 09:51:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  1 09:51:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  1 09:51:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  1 09:51:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  1 09:51:48 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1441: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:51:49 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:51:49.531 161890 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=7280030e-2ba6-406c-9fae-f8284a927c47, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '12'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  1 09:51:50 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1442: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:51:52 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 09:51:52 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1443: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:51:54 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1444: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:51:55 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  1 09:51:55 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4181186576' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  1 09:51:55 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  1 09:51:55 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4181186576' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  1 09:51:56 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1445: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:51:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] _maybe_adjust
Oct  1 09:51:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:51:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  1 09:51:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:51:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 09:51:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:51:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 09:51:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:51:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 09:51:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:51:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Oct  1 09:51:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:51:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct  1 09:51:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:51:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 09:51:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:51:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  1 09:51:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:51:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  1 09:51:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:51:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 09:51:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:51:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  1 09:51:57 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 09:51:58 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1446: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:52:00 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1447: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:52:02 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 09:52:02 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1448: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:52:04 np0005464214 nova_compute[260022]: 2025-10-01 13:52:04.347 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 09:52:04 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1449: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:52:05 np0005464214 podman[284979]: 2025-10-01 13:52:05.555644242 +0000 UTC m=+0.088663738 container health_status c13c85560319b246b9406aed1b6b3015b2c424d3ffff37d0301b6634f5976b0d (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=36bccb96575468ec919301205d8daa2c, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20250923, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=iscsid, org.label-schema.license=GPLv2)
Oct  1 09:52:05 np0005464214 podman[284978]: 2025-10-01 13:52:05.566877729 +0000 UTC m=+0.103978425 container health_status a1dc94033b3cdaf0c1939938a446bc6911aa0e2bf0fa0c22c3e618390e8d96e1 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20250923, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  1 09:52:05 np0005464214 podman[284983]: 2025-10-01 13:52:05.568105908 +0000 UTC m=+0.092222241 container health_status dfa509645741d50db256c392f2c7e50ef8a1653f296eb853aefbe3d2ba4746c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20250923, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true, container_name=ovn_metadata_agent)
Oct  1 09:52:05 np0005464214 podman[284977]: 2025-10-01 13:52:05.591438279 +0000 UTC m=+0.138210722 container health_status 583cde33fa111e3f81ff9a7adf51b7bee43cd123d54d60383b2d474b3b5066ad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20250923, org.label-schema.schema-version=1.0, tcib_build_tag=36bccb96575468ec919301205d8daa2c, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct  1 09:52:06 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1450: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:52:07 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 09:52:08 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1451: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:52:10 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1452: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:52:11 np0005464214 nova_compute[260022]: 2025-10-01 13:52:11.344 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 09:52:11 np0005464214 nova_compute[260022]: 2025-10-01 13:52:11.393 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 09:52:11 np0005464214 nova_compute[260022]: 2025-10-01 13:52:11.394 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 09:52:11 np0005464214 nova_compute[260022]: 2025-10-01 13:52:11.394 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 09:52:11 np0005464214 nova_compute[260022]: 2025-10-01 13:52:11.394 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  1 09:52:11 np0005464214 nova_compute[260022]: 2025-10-01 13:52:11.395 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 09:52:11 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  1 09:52:11 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/712404413' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  1 09:52:11 np0005464214 nova_compute[260022]: 2025-10-01 13:52:11.825 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.430s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 09:52:12 np0005464214 nova_compute[260022]: 2025-10-01 13:52:12.025 2 WARNING nova.virt.libvirt.driver [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  1 09:52:12 np0005464214 nova_compute[260022]: 2025-10-01 13:52:12.026 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5102MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  1 09:52:12 np0005464214 nova_compute[260022]: 2025-10-01 13:52:12.027 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 09:52:12 np0005464214 nova_compute[260022]: 2025-10-01 13:52:12.027 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 09:52:12 np0005464214 nova_compute[260022]: 2025-10-01 13:52:12.138 2 INFO nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Instance 3d91db2b-812b-47ea-a0f8-0384b4c68597 has allocations against this compute host but is not found in the database.#033[00m
Oct  1 09:52:12 np0005464214 nova_compute[260022]: 2025-10-01 13:52:12.138 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  1 09:52:12 np0005464214 nova_compute[260022]: 2025-10-01 13:52:12.139 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  1 09:52:12 np0005464214 nova_compute[260022]: 2025-10-01 13:52:12.179 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 09:52:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:52:12.319 161890 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 09:52:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:52:12.319 161890 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 09:52:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:52:12.320 161890 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 09:52:12 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 09:52:12 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  1 09:52:12 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2413603914' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  1 09:52:12 np0005464214 nova_compute[260022]: 2025-10-01 13:52:12.672 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.493s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 09:52:12 np0005464214 nova_compute[260022]: 2025-10-01 13:52:12.681 2 DEBUG nova.compute.provider_tree [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Inventory has not changed in ProviderTree for provider: c1b9017d-7e6f-44ea-9ee2-bc19313d736f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  1 09:52:12 np0005464214 nova_compute[260022]: 2025-10-01 13:52:12.700 2 DEBUG nova.scheduler.client.report [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Inventory has not changed for provider c1b9017d-7e6f-44ea-9ee2-bc19313d736f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  1 09:52:12 np0005464214 nova_compute[260022]: 2025-10-01 13:52:12.703 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  1 09:52:12 np0005464214 nova_compute[260022]: 2025-10-01 13:52:12.704 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.677s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 09:52:12 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1453: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:52:14 np0005464214 nova_compute[260022]: 2025-10-01 13:52:14.705 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 09:52:14 np0005464214 nova_compute[260022]: 2025-10-01 13:52:14.705 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 09:52:14 np0005464214 nova_compute[260022]: 2025-10-01 13:52:14.706 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 09:52:14 np0005464214 nova_compute[260022]: 2025-10-01 13:52:14.706 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  1 09:52:14 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1454: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:52:16 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1455: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:52:17 np0005464214 nova_compute[260022]: 2025-10-01 13:52:17.345 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 09:52:17 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 09:52:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 09:52:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 09:52:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 09:52:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 09:52:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 09:52:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 09:52:18 np0005464214 nova_compute[260022]: 2025-10-01 13:52:18.345 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  1 09:52:18 np0005464214 nova_compute[260022]: 2025-10-01 13:52:18.346 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct  1 09:52:18 np0005464214 nova_compute[260022]: 2025-10-01 13:52:18.346 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct  1 09:52:18 np0005464214 nova_compute[260022]: 2025-10-01 13:52:18.360 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct  1 09:52:18 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1456: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:52:20 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1457: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:52:21 np0005464214 nova_compute[260022]: 2025-10-01 13:52:21.345 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  1 09:52:22 np0005464214 nova_compute[260022]: 2025-10-01 13:52:22.345 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  1 09:52:22 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 09:52:22 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1458: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:52:24 np0005464214 nova_compute[260022]: 2025-10-01 13:52:24.341 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  1 09:52:24 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1459: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:52:26 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1460: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:52:27 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 09:52:28 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1461: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:52:30 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1462: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:52:32 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 09:52:32 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1463: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:52:34 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1464: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:52:36 np0005464214 podman[285104]: 2025-10-01 13:52:36.519168168 +0000 UTC m=+0.068164736 container health_status c13c85560319b246b9406aed1b6b3015b2c424d3ffff37d0301b6634f5976b0d (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_id=iscsid, org.label-schema.build-date=20250923, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 09:52:36 np0005464214 podman[285102]: 2025-10-01 13:52:36.537127509 +0000 UTC m=+0.093312826 container health_status 583cde33fa111e3f81ff9a7adf51b7bee43cd123d54d60383b2d474b3b5066ad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20250923, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=36bccb96575468ec919301205d8daa2c, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct  1 09:52:36 np0005464214 podman[285103]: 2025-10-01 13:52:36.539635629 +0000 UTC m=+0.082525244 container health_status a1dc94033b3cdaf0c1939938a446bc6911aa0e2bf0fa0c22c3e618390e8d96e1 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20250923, org.label-schema.license=GPLv2, tcib_build_tag=36bccb96575468ec919301205d8daa2c, container_name=multipathd)
Oct  1 09:52:36 np0005464214 podman[285105]: 2025-10-01 13:52:36.547369234 +0000 UTC m=+0.092337085 container health_status dfa509645741d50db256c392f2c7e50ef8a1653f296eb853aefbe3d2ba4746c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.build-date=20250923, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_id=ovn_metadata_agent, managed_by=edpm_ansible)
Oct  1 09:52:36 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1465: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:52:37 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 09:52:38 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1466: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:52:40 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1467: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:52:41 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  1 09:52:41 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:52:41 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  1 09:52:41 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:52:42 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:52:42 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:52:42 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 09:52:42 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:52:42.667 161890 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=13, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'd2:60:33', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '86:05:a5:2a:6f:f1'}, ipsec=False) old=SB_Global(nb_cfg=12) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct  1 09:52:42 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:52:42.669 161890 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct  1 09:52:42 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Oct  1 09:52:42 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct  1 09:52:42 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  1 09:52:42 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  1 09:52:42 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  1 09:52:42 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  1 09:52:42 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  1 09:52:42 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:52:42 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev 85bfe63c-5304-48f8-af21-367e2f011047 does not exist
Oct  1 09:52:42 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev 1a40df61-6d0d-4fc2-baa1-12cab0a983b0 does not exist
Oct  1 09:52:42 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev 6484a043-454b-431f-a51d-6b408b647fdd does not exist
Oct  1 09:52:42 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  1 09:52:42 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  1 09:52:42 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  1 09:52:42 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  1 09:52:42 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  1 09:52:42 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  1 09:52:42 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1468: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:52:43 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct  1 09:52:43 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  1 09:52:43 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:52:43 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  1 09:52:43 np0005464214 podman[285578]: 2025-10-01 13:52:43.509199376 +0000 UTC m=+0.062934581 container create 273336633c83c175edd2d127566b408e88f21ff25c6ff812705a80614088543e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_ganguly, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 09:52:43 np0005464214 podman[285578]: 2025-10-01 13:52:43.4743727 +0000 UTC m=+0.028107945 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 09:52:43 np0005464214 systemd[1]: Started libpod-conmon-273336633c83c175edd2d127566b408e88f21ff25c6ff812705a80614088543e.scope.
Oct  1 09:52:43 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:52:43 np0005464214 podman[285578]: 2025-10-01 13:52:43.65912169 +0000 UTC m=+0.212856945 container init 273336633c83c175edd2d127566b408e88f21ff25c6ff812705a80614088543e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_ganguly, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 09:52:43 np0005464214 podman[285578]: 2025-10-01 13:52:43.672674791 +0000 UTC m=+0.226409996 container start 273336633c83c175edd2d127566b408e88f21ff25c6ff812705a80614088543e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_ganguly, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct  1 09:52:43 np0005464214 interesting_ganguly[285595]: 167 167
Oct  1 09:52:43 np0005464214 systemd[1]: libpod-273336633c83c175edd2d127566b408e88f21ff25c6ff812705a80614088543e.scope: Deactivated successfully.
Oct  1 09:52:43 np0005464214 podman[285578]: 2025-10-01 13:52:43.690997462 +0000 UTC m=+0.244732717 container attach 273336633c83c175edd2d127566b408e88f21ff25c6ff812705a80614088543e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_ganguly, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct  1 09:52:43 np0005464214 podman[285578]: 2025-10-01 13:52:43.692858862 +0000 UTC m=+0.246594067 container died 273336633c83c175edd2d127566b408e88f21ff25c6ff812705a80614088543e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_ganguly, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 09:52:43 np0005464214 systemd[1]: var-lib-containers-storage-overlay-8f8998d877eef755af9c10102488ea4e60be908b9c9b07119c8da46cd2195707-merged.mount: Deactivated successfully.
Oct  1 09:52:43 np0005464214 podman[285578]: 2025-10-01 13:52:43.870388542 +0000 UTC m=+0.424123747 container remove 273336633c83c175edd2d127566b408e88f21ff25c6ff812705a80614088543e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_ganguly, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct  1 09:52:43 np0005464214 systemd[1]: libpod-conmon-273336633c83c175edd2d127566b408e88f21ff25c6ff812705a80614088543e.scope: Deactivated successfully.
Oct  1 09:52:44 np0005464214 podman[285621]: 2025-10-01 13:52:44.133936247 +0000 UTC m=+0.068158206 container create 27e38a2f448f39989c13217082bd8e9f61e66b62a5dff58262cf539848b69b25 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_stonebraker, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Oct  1 09:52:44 np0005464214 podman[285621]: 2025-10-01 13:52:44.104179212 +0000 UTC m=+0.038401221 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 09:52:44 np0005464214 systemd[1]: Started libpod-conmon-27e38a2f448f39989c13217082bd8e9f61e66b62a5dff58262cf539848b69b25.scope.
Oct  1 09:52:44 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:52:44 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf3a9846de6bde719043a184ff9f03d0628b0f3876e560bcd7cb7b28405b7bf4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 09:52:44 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf3a9846de6bde719043a184ff9f03d0628b0f3876e560bcd7cb7b28405b7bf4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 09:52:44 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf3a9846de6bde719043a184ff9f03d0628b0f3876e560bcd7cb7b28405b7bf4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 09:52:44 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf3a9846de6bde719043a184ff9f03d0628b0f3876e560bcd7cb7b28405b7bf4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 09:52:44 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf3a9846de6bde719043a184ff9f03d0628b0f3876e560bcd7cb7b28405b7bf4/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  1 09:52:44 np0005464214 podman[285621]: 2025-10-01 13:52:44.301678777 +0000 UTC m=+0.235900716 container init 27e38a2f448f39989c13217082bd8e9f61e66b62a5dff58262cf539848b69b25 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_stonebraker, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 09:52:44 np0005464214 podman[285621]: 2025-10-01 13:52:44.31343068 +0000 UTC m=+0.247652629 container start 27e38a2f448f39989c13217082bd8e9f61e66b62a5dff58262cf539848b69b25 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_stonebraker, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 09:52:44 np0005464214 podman[285621]: 2025-10-01 13:52:44.328577502 +0000 UTC m=+0.262799461 container attach 27e38a2f448f39989c13217082bd8e9f61e66b62a5dff58262cf539848b69b25 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_stonebraker, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 09:52:44 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1469: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:52:45 np0005464214 wonderful_stonebraker[285638]: --> passed data devices: 0 physical, 3 LVM
Oct  1 09:52:45 np0005464214 wonderful_stonebraker[285638]: --> relative data size: 1.0
Oct  1 09:52:45 np0005464214 wonderful_stonebraker[285638]: --> All data devices are unavailable
Oct  1 09:52:45 np0005464214 systemd[1]: libpod-27e38a2f448f39989c13217082bd8e9f61e66b62a5dff58262cf539848b69b25.scope: Deactivated successfully.
Oct  1 09:52:45 np0005464214 podman[285621]: 2025-10-01 13:52:45.51527472 +0000 UTC m=+1.449496729 container died 27e38a2f448f39989c13217082bd8e9f61e66b62a5dff58262cf539848b69b25 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_stonebraker, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 09:52:45 np0005464214 systemd[1]: libpod-27e38a2f448f39989c13217082bd8e9f61e66b62a5dff58262cf539848b69b25.scope: Consumed 1.142s CPU time.
Oct  1 09:52:45 np0005464214 systemd[1]: var-lib-containers-storage-overlay-cf3a9846de6bde719043a184ff9f03d0628b0f3876e560bcd7cb7b28405b7bf4-merged.mount: Deactivated successfully.
Oct  1 09:52:45 np0005464214 podman[285621]: 2025-10-01 13:52:45.608303806 +0000 UTC m=+1.542525725 container remove 27e38a2f448f39989c13217082bd8e9f61e66b62a5dff58262cf539848b69b25 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_stonebraker, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 09:52:45 np0005464214 systemd[1]: libpod-conmon-27e38a2f448f39989c13217082bd8e9f61e66b62a5dff58262cf539848b69b25.scope: Deactivated successfully.
Oct  1 09:52:46 np0005464214 podman[285821]: 2025-10-01 13:52:46.258597108 +0000 UTC m=+0.063566780 container create d59fd20ffd03f9bc8a76fe5db4be59685e338c1de0d541b791493cea9cc78e41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_bassi, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  1 09:52:46 np0005464214 systemd[1]: Started libpod-conmon-d59fd20ffd03f9bc8a76fe5db4be59685e338c1de0d541b791493cea9cc78e41.scope.
Oct  1 09:52:46 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e160 do_prune osdmap full prune enabled
Oct  1 09:52:46 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e161 e161: 3 total, 3 up, 3 in
Oct  1 09:52:46 np0005464214 podman[285821]: 2025-10-01 13:52:46.230892258 +0000 UTC m=+0.035861980 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 09:52:46 np0005464214 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e161: 3 total, 3 up, 3 in
Oct  1 09:52:46 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:52:46 np0005464214 podman[285821]: 2025-10-01 13:52:46.359480244 +0000 UTC m=+0.164449896 container init d59fd20ffd03f9bc8a76fe5db4be59685e338c1de0d541b791493cea9cc78e41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_bassi, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default)
Oct  1 09:52:46 np0005464214 podman[285821]: 2025-10-01 13:52:46.368967086 +0000 UTC m=+0.173936738 container start d59fd20ffd03f9bc8a76fe5db4be59685e338c1de0d541b791493cea9cc78e41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_bassi, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 09:52:46 np0005464214 podman[285821]: 2025-10-01 13:52:46.372702255 +0000 UTC m=+0.177671927 container attach d59fd20ffd03f9bc8a76fe5db4be59685e338c1de0d541b791493cea9cc78e41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_bassi, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 09:52:46 np0005464214 mystifying_bassi[285837]: 167 167
Oct  1 09:52:46 np0005464214 systemd[1]: libpod-d59fd20ffd03f9bc8a76fe5db4be59685e338c1de0d541b791493cea9cc78e41.scope: Deactivated successfully.
Oct  1 09:52:46 np0005464214 podman[285842]: 2025-10-01 13:52:46.451006313 +0000 UTC m=+0.045317971 container died d59fd20ffd03f9bc8a76fe5db4be59685e338c1de0d541b791493cea9cc78e41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_bassi, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 09:52:46 np0005464214 systemd[1]: var-lib-containers-storage-overlay-3cfadb01dd6bf224aecec6ccd1f3d807073f9a518ac57795aabad7cd5830f937-merged.mount: Deactivated successfully.
Oct  1 09:52:46 np0005464214 podman[285842]: 2025-10-01 13:52:46.494250947 +0000 UTC m=+0.088562635 container remove d59fd20ffd03f9bc8a76fe5db4be59685e338c1de0d541b791493cea9cc78e41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_bassi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 09:52:46 np0005464214 systemd[1]: libpod-conmon-d59fd20ffd03f9bc8a76fe5db4be59685e338c1de0d541b791493cea9cc78e41.scope: Deactivated successfully.
Oct  1 09:52:46 np0005464214 podman[285864]: 2025-10-01 13:52:46.763780871 +0000 UTC m=+0.073580159 container create b20393b5b5f1606521ea2d1a5523f90a2a06c219e8c1967a228158b64da4fb53 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_kirch, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct  1 09:52:46 np0005464214 systemd[1]: Started libpod-conmon-b20393b5b5f1606521ea2d1a5523f90a2a06c219e8c1967a228158b64da4fb53.scope.
Oct  1 09:52:46 np0005464214 podman[285864]: 2025-10-01 13:52:46.735571474 +0000 UTC m=+0.045370802 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 09:52:46 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:52:46 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c3adcaf2344eb95154cf718824db69afb0fcbc99d8e4f1be50a618ccb844da2a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 09:52:46 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c3adcaf2344eb95154cf718824db69afb0fcbc99d8e4f1be50a618ccb844da2a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 09:52:46 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c3adcaf2344eb95154cf718824db69afb0fcbc99d8e4f1be50a618ccb844da2a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 09:52:46 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c3adcaf2344eb95154cf718824db69afb0fcbc99d8e4f1be50a618ccb844da2a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 09:52:46 np0005464214 podman[285864]: 2025-10-01 13:52:46.880723567 +0000 UTC m=+0.190522895 container init b20393b5b5f1606521ea2d1a5523f90a2a06c219e8c1967a228158b64da4fb53 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_kirch, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct  1 09:52:46 np0005464214 podman[285864]: 2025-10-01 13:52:46.894313208 +0000 UTC m=+0.204112456 container start b20393b5b5f1606521ea2d1a5523f90a2a06c219e8c1967a228158b64da4fb53 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_kirch, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct  1 09:52:46 np0005464214 podman[285864]: 2025-10-01 13:52:46.898684618 +0000 UTC m=+0.208483966 container attach b20393b5b5f1606521ea2d1a5523f90a2a06c219e8c1967a228158b64da4fb53 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_kirch, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct  1 09:52:46 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1471: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:52:47 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e161 do_prune osdmap full prune enabled
Oct  1 09:52:47 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e162 e162: 3 total, 3 up, 3 in
Oct  1 09:52:47 np0005464214 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e162: 3 total, 3 up, 3 in
Oct  1 09:52:47 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 09:52:47 np0005464214 fervent_kirch[285880]: {
Oct  1 09:52:47 np0005464214 fervent_kirch[285880]:    "0": [
Oct  1 09:52:47 np0005464214 fervent_kirch[285880]:        {
Oct  1 09:52:47 np0005464214 fervent_kirch[285880]:            "devices": [
Oct  1 09:52:47 np0005464214 fervent_kirch[285880]:                "/dev/loop3"
Oct  1 09:52:47 np0005464214 fervent_kirch[285880]:            ],
Oct  1 09:52:47 np0005464214 fervent_kirch[285880]:            "lv_name": "ceph_lv0",
Oct  1 09:52:47 np0005464214 fervent_kirch[285880]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  1 09:52:47 np0005464214 fervent_kirch[285880]:            "lv_size": "21470642176",
Oct  1 09:52:47 np0005464214 fervent_kirch[285880]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 09:52:47 np0005464214 fervent_kirch[285880]:            "lv_uuid": "wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS",
Oct  1 09:52:47 np0005464214 fervent_kirch[285880]:            "name": "ceph_lv0",
Oct  1 09:52:47 np0005464214 fervent_kirch[285880]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  1 09:52:47 np0005464214 fervent_kirch[285880]:            "tags": {
Oct  1 09:52:47 np0005464214 fervent_kirch[285880]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  1 09:52:47 np0005464214 fervent_kirch[285880]:                "ceph.block_uuid": "wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS",
Oct  1 09:52:47 np0005464214 fervent_kirch[285880]:                "ceph.cephx_lockbox_secret": "",
Oct  1 09:52:47 np0005464214 fervent_kirch[285880]:                "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 09:52:47 np0005464214 fervent_kirch[285880]:                "ceph.cluster_name": "ceph",
Oct  1 09:52:47 np0005464214 fervent_kirch[285880]:                "ceph.crush_device_class": "",
Oct  1 09:52:47 np0005464214 fervent_kirch[285880]:                "ceph.encrypted": "0",
Oct  1 09:52:47 np0005464214 fervent_kirch[285880]:                "ceph.osd_fsid": "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982",
Oct  1 09:52:47 np0005464214 fervent_kirch[285880]:                "ceph.osd_id": "0",
Oct  1 09:52:47 np0005464214 fervent_kirch[285880]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 09:52:47 np0005464214 fervent_kirch[285880]:                "ceph.type": "block",
Oct  1 09:52:47 np0005464214 fervent_kirch[285880]:                "ceph.vdo": "0"
Oct  1 09:52:47 np0005464214 fervent_kirch[285880]:            },
Oct  1 09:52:47 np0005464214 fervent_kirch[285880]:            "type": "block",
Oct  1 09:52:47 np0005464214 fervent_kirch[285880]:            "vg_name": "ceph_vg0"
Oct  1 09:52:47 np0005464214 fervent_kirch[285880]:        }
Oct  1 09:52:47 np0005464214 fervent_kirch[285880]:    ],
Oct  1 09:52:47 np0005464214 fervent_kirch[285880]:    "1": [
Oct  1 09:52:47 np0005464214 fervent_kirch[285880]:        {
Oct  1 09:52:47 np0005464214 fervent_kirch[285880]:            "devices": [
Oct  1 09:52:47 np0005464214 fervent_kirch[285880]:                "/dev/loop4"
Oct  1 09:52:47 np0005464214 fervent_kirch[285880]:            ],
Oct  1 09:52:47 np0005464214 fervent_kirch[285880]:            "lv_name": "ceph_lv1",
Oct  1 09:52:47 np0005464214 fervent_kirch[285880]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  1 09:52:47 np0005464214 fervent_kirch[285880]:            "lv_size": "21470642176",
Oct  1 09:52:47 np0005464214 fervent_kirch[285880]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f5852bc7-e830-489a-b8a9-42dfbbe71426,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 09:52:47 np0005464214 fervent_kirch[285880]:            "lv_uuid": "BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY",
Oct  1 09:52:47 np0005464214 fervent_kirch[285880]:            "name": "ceph_lv1",
Oct  1 09:52:47 np0005464214 fervent_kirch[285880]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  1 09:52:47 np0005464214 fervent_kirch[285880]:            "tags": {
Oct  1 09:52:47 np0005464214 fervent_kirch[285880]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  1 09:52:47 np0005464214 fervent_kirch[285880]:                "ceph.block_uuid": "BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY",
Oct  1 09:52:47 np0005464214 fervent_kirch[285880]:                "ceph.cephx_lockbox_secret": "",
Oct  1 09:52:47 np0005464214 fervent_kirch[285880]:                "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 09:52:47 np0005464214 fervent_kirch[285880]:                "ceph.cluster_name": "ceph",
Oct  1 09:52:47 np0005464214 fervent_kirch[285880]:                "ceph.crush_device_class": "",
Oct  1 09:52:47 np0005464214 fervent_kirch[285880]:                "ceph.encrypted": "0",
Oct  1 09:52:47 np0005464214 fervent_kirch[285880]:                "ceph.osd_fsid": "f5852bc7-e830-489a-b8a9-42dfbbe71426",
Oct  1 09:52:47 np0005464214 fervent_kirch[285880]:                "ceph.osd_id": "1",
Oct  1 09:52:47 np0005464214 fervent_kirch[285880]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 09:52:47 np0005464214 fervent_kirch[285880]:                "ceph.type": "block",
Oct  1 09:52:47 np0005464214 fervent_kirch[285880]:                "ceph.vdo": "0"
Oct  1 09:52:47 np0005464214 fervent_kirch[285880]:            },
Oct  1 09:52:47 np0005464214 fervent_kirch[285880]:            "type": "block",
Oct  1 09:52:47 np0005464214 fervent_kirch[285880]:            "vg_name": "ceph_vg1"
Oct  1 09:52:47 np0005464214 fervent_kirch[285880]:        }
Oct  1 09:52:47 np0005464214 fervent_kirch[285880]:    ],
Oct  1 09:52:47 np0005464214 fervent_kirch[285880]:    "2": [
Oct  1 09:52:47 np0005464214 fervent_kirch[285880]:        {
Oct  1 09:52:47 np0005464214 fervent_kirch[285880]:            "devices": [
Oct  1 09:52:47 np0005464214 fervent_kirch[285880]:                "/dev/loop5"
Oct  1 09:52:47 np0005464214 fervent_kirch[285880]:            ],
Oct  1 09:52:47 np0005464214 fervent_kirch[285880]:            "lv_name": "ceph_lv2",
Oct  1 09:52:47 np0005464214 fervent_kirch[285880]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  1 09:52:47 np0005464214 fervent_kirch[285880]:            "lv_size": "21470642176",
Oct  1 09:52:47 np0005464214 fervent_kirch[285880]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c4c937e2-a8a8-47c3-af37-fdedb6fff1f9,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 09:52:47 np0005464214 fervent_kirch[285880]:            "lv_uuid": "mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK",
Oct  1 09:52:47 np0005464214 fervent_kirch[285880]:            "name": "ceph_lv2",
Oct  1 09:52:47 np0005464214 fervent_kirch[285880]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  1 09:52:47 np0005464214 fervent_kirch[285880]:            "tags": {
Oct  1 09:52:47 np0005464214 fervent_kirch[285880]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  1 09:52:47 np0005464214 fervent_kirch[285880]:                "ceph.block_uuid": "mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK",
Oct  1 09:52:47 np0005464214 fervent_kirch[285880]:                "ceph.cephx_lockbox_secret": "",
Oct  1 09:52:47 np0005464214 fervent_kirch[285880]:                "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 09:52:47 np0005464214 fervent_kirch[285880]:                "ceph.cluster_name": "ceph",
Oct  1 09:52:47 np0005464214 fervent_kirch[285880]:                "ceph.crush_device_class": "",
Oct  1 09:52:47 np0005464214 fervent_kirch[285880]:                "ceph.encrypted": "0",
Oct  1 09:52:47 np0005464214 fervent_kirch[285880]:                "ceph.osd_fsid": "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9",
Oct  1 09:52:47 np0005464214 fervent_kirch[285880]:                "ceph.osd_id": "2",
Oct  1 09:52:47 np0005464214 fervent_kirch[285880]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 09:52:47 np0005464214 fervent_kirch[285880]:                "ceph.type": "block",
Oct  1 09:52:47 np0005464214 fervent_kirch[285880]:                "ceph.vdo": "0"
Oct  1 09:52:47 np0005464214 fervent_kirch[285880]:            },
Oct  1 09:52:47 np0005464214 fervent_kirch[285880]:            "type": "block",
Oct  1 09:52:47 np0005464214 fervent_kirch[285880]:            "vg_name": "ceph_vg2"
Oct  1 09:52:47 np0005464214 fervent_kirch[285880]:        }
Oct  1 09:52:47 np0005464214 fervent_kirch[285880]:    ]
Oct  1 09:52:47 np0005464214 fervent_kirch[285880]: }
Oct  1 09:52:47 np0005464214 systemd[1]: libpod-b20393b5b5f1606521ea2d1a5523f90a2a06c219e8c1967a228158b64da4fb53.scope: Deactivated successfully.
Oct  1 09:52:47 np0005464214 podman[285889]: 2025-10-01 13:52:47.821724197 +0000 UTC m=+0.045024732 container died b20393b5b5f1606521ea2d1a5523f90a2a06c219e8c1967a228158b64da4fb53 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_kirch, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct  1 09:52:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 09:52:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 09:52:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 09:52:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 09:52:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 09:52:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 09:52:47 np0005464214 systemd[1]: var-lib-containers-storage-overlay-c3adcaf2344eb95154cf718824db69afb0fcbc99d8e4f1be50a618ccb844da2a-merged.mount: Deactivated successfully.
Oct  1 09:52:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] Optimize plan auto_2025-10-01_13:52:47
Oct  1 09:52:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  1 09:52:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] do_upmap
Oct  1 09:52:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] pools ['default.rgw.control', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'default.rgw.log', 'vms', 'images', 'default.rgw.meta', 'volumes', 'backups', '.rgw.root', '.mgr']
Oct  1 09:52:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] prepared 0/10 changes
Oct  1 09:52:47 np0005464214 podman[285889]: 2025-10-01 13:52:47.892057262 +0000 UTC m=+0.115357747 container remove b20393b5b5f1606521ea2d1a5523f90a2a06c219e8c1967a228158b64da4fb53 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_kirch, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 09:52:47 np0005464214 systemd[1]: libpod-conmon-b20393b5b5f1606521ea2d1a5523f90a2a06c219e8c1967a228158b64da4fb53.scope: Deactivated successfully.
Oct  1 09:52:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  1 09:52:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  1 09:52:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  1 09:52:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  1 09:52:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  1 09:52:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  1 09:52:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  1 09:52:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  1 09:52:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  1 09:52:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  1 09:52:48 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e162 do_prune osdmap full prune enabled
Oct  1 09:52:48 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e163 e163: 3 total, 3 up, 3 in
Oct  1 09:52:48 np0005464214 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e163: 3 total, 3 up, 3 in
Oct  1 09:52:48 np0005464214 podman[286044]: 2025-10-01 13:52:48.863114377 +0000 UTC m=+0.070012396 container create f8ebcca27ecaadb2783b8930b03be660990ee78ac3f61e7144757d558f2b6195 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_dewdney, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Oct  1 09:52:48 np0005464214 systemd[1]: Started libpod-conmon-f8ebcca27ecaadb2783b8930b03be660990ee78ac3f61e7144757d558f2b6195.scope.
Oct  1 09:52:48 np0005464214 podman[286044]: 2025-10-01 13:52:48.832146363 +0000 UTC m=+0.039044432 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 09:52:48 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:52:48 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1474: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail; 34 KiB/s rd, 5.5 KiB/s wr, 49 op/s
Oct  1 09:52:48 np0005464214 podman[286044]: 2025-10-01 13:52:48.981214319 +0000 UTC m=+0.188112378 container init f8ebcca27ecaadb2783b8930b03be660990ee78ac3f61e7144757d558f2b6195 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_dewdney, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct  1 09:52:48 np0005464214 podman[286044]: 2025-10-01 13:52:48.995625047 +0000 UTC m=+0.202523066 container start f8ebcca27ecaadb2783b8930b03be660990ee78ac3f61e7144757d558f2b6195 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_dewdney, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 09:52:48 np0005464214 podman[286044]: 2025-10-01 13:52:48.999604693 +0000 UTC m=+0.206502712 container attach f8ebcca27ecaadb2783b8930b03be660990ee78ac3f61e7144757d558f2b6195 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_dewdney, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 09:52:49 np0005464214 awesome_dewdney[286060]: 167 167
Oct  1 09:52:49 np0005464214 systemd[1]: libpod-f8ebcca27ecaadb2783b8930b03be660990ee78ac3f61e7144757d558f2b6195.scope: Deactivated successfully.
Oct  1 09:52:49 np0005464214 podman[286065]: 2025-10-01 13:52:49.079076899 +0000 UTC m=+0.043067140 container died f8ebcca27ecaadb2783b8930b03be660990ee78ac3f61e7144757d558f2b6195 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_dewdney, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 09:52:49 np0005464214 systemd[1]: var-lib-containers-storage-overlay-fab645f5e88504c26179caff72aa031298b1e10989adf8d7b08d724df7ea35d5-merged.mount: Deactivated successfully.
Oct  1 09:52:49 np0005464214 podman[286065]: 2025-10-01 13:52:49.120109842 +0000 UTC m=+0.084100073 container remove f8ebcca27ecaadb2783b8930b03be660990ee78ac3f61e7144757d558f2b6195 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_dewdney, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct  1 09:52:49 np0005464214 systemd[1]: libpod-conmon-f8ebcca27ecaadb2783b8930b03be660990ee78ac3f61e7144757d558f2b6195.scope: Deactivated successfully.
Oct  1 09:52:49 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e163 do_prune osdmap full prune enabled
Oct  1 09:52:49 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e164 e164: 3 total, 3 up, 3 in
Oct  1 09:52:49 np0005464214 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e164: 3 total, 3 up, 3 in
Oct  1 09:52:49 np0005464214 podman[286088]: 2025-10-01 13:52:49.376936223 +0000 UTC m=+0.058009784 container create e3c3f4e3368e4b817dc9c6bad912d3c33aec4ba100b258c79635ca46dfd84620 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_leakey, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Oct  1 09:52:49 np0005464214 systemd[1]: Started libpod-conmon-e3c3f4e3368e4b817dc9c6bad912d3c33aec4ba100b258c79635ca46dfd84620.scope.
Oct  1 09:52:49 np0005464214 podman[286088]: 2025-10-01 13:52:49.352847748 +0000 UTC m=+0.033921409 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 09:52:49 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:52:49 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/51935fa5ce8f8f30560e29619622a47cedd11e3cbaa53ed66ffa5cab233948e8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 09:52:49 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/51935fa5ce8f8f30560e29619622a47cedd11e3cbaa53ed66ffa5cab233948e8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 09:52:49 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/51935fa5ce8f8f30560e29619622a47cedd11e3cbaa53ed66ffa5cab233948e8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 09:52:49 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/51935fa5ce8f8f30560e29619622a47cedd11e3cbaa53ed66ffa5cab233948e8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 09:52:49 np0005464214 podman[286088]: 2025-10-01 13:52:49.487808566 +0000 UTC m=+0.168882207 container init e3c3f4e3368e4b817dc9c6bad912d3c33aec4ba100b258c79635ca46dfd84620 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_leakey, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 09:52:49 np0005464214 podman[286088]: 2025-10-01 13:52:49.501906844 +0000 UTC m=+0.182980435 container start e3c3f4e3368e4b817dc9c6bad912d3c33aec4ba100b258c79635ca46dfd84620 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_leakey, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 09:52:49 np0005464214 podman[286088]: 2025-10-01 13:52:49.506524351 +0000 UTC m=+0.187597952 container attach e3c3f4e3368e4b817dc9c6bad912d3c33aec4ba100b258c79635ca46dfd84620 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_leakey, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 09:52:50 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e164 do_prune osdmap full prune enabled
Oct  1 09:52:50 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e165 e165: 3 total, 3 up, 3 in
Oct  1 09:52:50 np0005464214 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e165: 3 total, 3 up, 3 in
Oct  1 09:52:50 np0005464214 amazing_leakey[286104]: {
Oct  1 09:52:50 np0005464214 amazing_leakey[286104]:    "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982": {
Oct  1 09:52:50 np0005464214 amazing_leakey[286104]:        "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 09:52:50 np0005464214 amazing_leakey[286104]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  1 09:52:50 np0005464214 amazing_leakey[286104]:        "osd_id": 0,
Oct  1 09:52:50 np0005464214 amazing_leakey[286104]:        "osd_uuid": "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982",
Oct  1 09:52:50 np0005464214 amazing_leakey[286104]:        "type": "bluestore"
Oct  1 09:52:50 np0005464214 amazing_leakey[286104]:    },
Oct  1 09:52:50 np0005464214 amazing_leakey[286104]:    "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9": {
Oct  1 09:52:50 np0005464214 amazing_leakey[286104]:        "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 09:52:50 np0005464214 amazing_leakey[286104]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  1 09:52:50 np0005464214 amazing_leakey[286104]:        "osd_id": 2,
Oct  1 09:52:50 np0005464214 amazing_leakey[286104]:        "osd_uuid": "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9",
Oct  1 09:52:50 np0005464214 amazing_leakey[286104]:        "type": "bluestore"
Oct  1 09:52:50 np0005464214 amazing_leakey[286104]:    },
Oct  1 09:52:50 np0005464214 amazing_leakey[286104]:    "f5852bc7-e830-489a-b8a9-42dfbbe71426": {
Oct  1 09:52:50 np0005464214 amazing_leakey[286104]:        "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 09:52:50 np0005464214 amazing_leakey[286104]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  1 09:52:50 np0005464214 amazing_leakey[286104]:        "osd_id": 1,
Oct  1 09:52:50 np0005464214 amazing_leakey[286104]:        "osd_uuid": "f5852bc7-e830-489a-b8a9-42dfbbe71426",
Oct  1 09:52:50 np0005464214 amazing_leakey[286104]:        "type": "bluestore"
Oct  1 09:52:50 np0005464214 amazing_leakey[286104]:    }
Oct  1 09:52:50 np0005464214 amazing_leakey[286104]: }
Oct  1 09:52:50 np0005464214 systemd[1]: libpod-e3c3f4e3368e4b817dc9c6bad912d3c33aec4ba100b258c79635ca46dfd84620.scope: Deactivated successfully.
Oct  1 09:52:50 np0005464214 systemd[1]: libpod-e3c3f4e3368e4b817dc9c6bad912d3c33aec4ba100b258c79635ca46dfd84620.scope: Consumed 1.018s CPU time.
Oct  1 09:52:50 np0005464214 podman[286088]: 2025-10-01 13:52:50.518445265 +0000 UTC m=+1.199518826 container died e3c3f4e3368e4b817dc9c6bad912d3c33aec4ba100b258c79635ca46dfd84620 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_leakey, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct  1 09:52:50 np0005464214 systemd[1]: var-lib-containers-storage-overlay-51935fa5ce8f8f30560e29619622a47cedd11e3cbaa53ed66ffa5cab233948e8-merged.mount: Deactivated successfully.
Oct  1 09:52:50 np0005464214 podman[286088]: 2025-10-01 13:52:50.581659013 +0000 UTC m=+1.262732594 container remove e3c3f4e3368e4b817dc9c6bad912d3c33aec4ba100b258c79635ca46dfd84620 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_leakey, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 09:52:50 np0005464214 systemd[1]: libpod-conmon-e3c3f4e3368e4b817dc9c6bad912d3c33aec4ba100b258c79635ca46dfd84620.scope: Deactivated successfully.
Oct  1 09:52:50 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  1 09:52:50 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:52:50 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  1 09:52:50 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:52:50 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev 11964950-c27b-4ac7-bd1c-a70c7b0928af does not exist
Oct  1 09:52:50 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev 40ea15a9-6bc6-44b4-8bd4-2984cce723c8 does not exist
Oct  1 09:52:50 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1477: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail; 52 KiB/s rd, 8.2 KiB/s wr, 74 op/s
Oct  1 09:52:51 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:52:51 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:52:51 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e165 do_prune osdmap full prune enabled
Oct  1 09:52:51 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e166 e166: 3 total, 3 up, 3 in
Oct  1 09:52:51 np0005464214 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e166: 3 total, 3 up, 3 in
Oct  1 09:52:52 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 09:52:52 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:52:52.671 161890 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=7280030e-2ba6-406c-9fae-f8284a927c47, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '13'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  1 09:52:52 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1479: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail; 90 KiB/s rd, 16 KiB/s wr, 130 op/s
Oct  1 09:52:53 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e166 do_prune osdmap full prune enabled
Oct  1 09:52:53 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e167 e167: 3 total, 3 up, 3 in
Oct  1 09:52:53 np0005464214 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e167: 3 total, 3 up, 3 in
Oct  1 09:52:54 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1481: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail; 74 KiB/s rd, 13 KiB/s wr, 107 op/s
Oct  1 09:52:55 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  1 09:52:55 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1981758943' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  1 09:52:55 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  1 09:52:55 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1981758943' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  1 09:52:55 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e167 do_prune osdmap full prune enabled
Oct  1 09:52:55 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e168 e168: 3 total, 3 up, 3 in
Oct  1 09:52:55 np0005464214 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e168: 3 total, 3 up, 3 in
Oct  1 09:52:55 np0005464214 ceph-osd[89484]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Oct  1 09:52:56 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e168 do_prune osdmap full prune enabled
Oct  1 09:52:56 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e169 e169: 3 total, 3 up, 3 in
Oct  1 09:52:56 np0005464214 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e169: 3 total, 3 up, 3 in
Oct  1 09:52:56 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1484: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail; 78 KiB/s rd, 14 KiB/s wr, 113 op/s
Oct  1 09:52:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] _maybe_adjust
Oct  1 09:52:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:52:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  1 09:52:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:52:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 09:52:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:52:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 09:52:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:52:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 09:52:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:52:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000667638841407827 of space, bias 1.0, pg target 0.2002916524223481 quantized to 32 (current 32)
Oct  1 09:52:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:52:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct  1 09:52:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:52:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 09:52:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:52:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  1 09:52:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:52:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  1 09:52:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:52:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 09:52:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:52:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  1 09:52:57 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 09:52:57 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e169 do_prune osdmap full prune enabled
Oct  1 09:52:57 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e170 e170: 3 total, 3 up, 3 in
Oct  1 09:52:57 np0005464214 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e170: 3 total, 3 up, 3 in
Oct  1 09:52:58 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e170 do_prune osdmap full prune enabled
Oct  1 09:52:58 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e171 e171: 3 total, 3 up, 3 in
Oct  1 09:52:58 np0005464214 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e171: 3 total, 3 up, 3 in
Oct  1 09:52:58 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1487: 305 pgs: 2 active+clean+snaptrim, 5 active+clean+snaptrim_wait, 298 active+clean; 153 MiB data, 293 MiB used, 60 GiB / 60 GiB avail; 189 KiB/s rd, 28 MiB/s wr, 268 op/s
Oct  1 09:52:59 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e171 do_prune osdmap full prune enabled
Oct  1 09:52:59 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e172 e172: 3 total, 3 up, 3 in
Oct  1 09:52:59 np0005464214 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e172: 3 total, 3 up, 3 in
Oct  1 09:53:00 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e172 do_prune osdmap full prune enabled
Oct  1 09:53:00 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e173 e173: 3 total, 3 up, 3 in
Oct  1 09:53:00 np0005464214 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e173: 3 total, 3 up, 3 in
Oct  1 09:53:00 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1490: 305 pgs: 2 active+clean+snaptrim, 5 active+clean+snaptrim_wait, 298 active+clean; 153 MiB data, 293 MiB used, 60 GiB / 60 GiB avail; 189 KiB/s rd, 28 MiB/s wr, 268 op/s
Oct  1 09:53:02 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 09:53:02 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e173 do_prune osdmap full prune enabled
Oct  1 09:53:02 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e174 e174: 3 total, 3 up, 3 in
Oct  1 09:53:02 np0005464214 ceph-osd[90500]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Oct  1 09:53:02 np0005464214 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e174: 3 total, 3 up, 3 in
Oct  1 09:53:02 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1492: 305 pgs: 305 active+clean; 41 MiB data, 221 MiB used, 60 GiB / 60 GiB avail; 242 KiB/s rd, 19 KiB/s wr, 335 op/s
Oct  1 09:53:04 np0005464214 nova_compute[260022]: 2025-10-01 13:53:04.345 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 09:53:04 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1493: 305 pgs: 305 active+clean; 41 MiB data, 221 MiB used, 60 GiB / 60 GiB avail; 171 KiB/s rd, 14 KiB/s wr, 237 op/s
Oct  1 09:53:06 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e174 do_prune osdmap full prune enabled
Oct  1 09:53:06 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e175 e175: 3 total, 3 up, 3 in
Oct  1 09:53:06 np0005464214 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e175: 3 total, 3 up, 3 in
Oct  1 09:53:06 np0005464214 ceph-mon[74802]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #66. Immutable memtables: 0.
Oct  1 09:53:06 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:53:06.054457) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  1 09:53:06 np0005464214 ceph-mon[74802]: rocksdb: [db/flush_job.cc:856] [default] [JOB 35] Flushing memtable with next log file: 66
Oct  1 09:53:06 np0005464214 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759326786054505, "job": 35, "event": "flush_started", "num_memtables": 1, "num_entries": 1607, "num_deletes": 512, "total_data_size": 2010339, "memory_usage": 2048352, "flush_reason": "Manual Compaction"}
Oct  1 09:53:06 np0005464214 ceph-mon[74802]: rocksdb: [db/flush_job.cc:885] [default] [JOB 35] Level-0 flush table #67: started
Oct  1 09:53:06 np0005464214 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759326786074669, "cf_name": "default", "job": 35, "event": "table_file_creation", "file_number": 67, "file_size": 1966365, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 28851, "largest_seqno": 30457, "table_properties": {"data_size": 1959159, "index_size": 3768, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2373, "raw_key_size": 17955, "raw_average_key_size": 19, "raw_value_size": 1942695, "raw_average_value_size": 2077, "num_data_blocks": 168, "num_entries": 935, "num_filter_entries": 935, "num_deletions": 512, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759326673, "oldest_key_time": 1759326673, "file_creation_time": 1759326786, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e1fb0bd7-c6bf-40f2-8bfb-ffd039289f42", "db_session_id": "NJZTWL88H5HSB4Q4NEC9", "orig_file_number": 67, "seqno_to_time_mapping": "N/A"}}
Oct  1 09:53:06 np0005464214 ceph-mon[74802]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 35] Flush lasted 20246 microseconds, and 9181 cpu microseconds.
Oct  1 09:53:06 np0005464214 ceph-mon[74802]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  1 09:53:06 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:53:06.074711) [db/flush_job.cc:967] [default] [JOB 35] Level-0 flush table #67: 1966365 bytes OK
Oct  1 09:53:06 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:53:06.074756) [db/memtable_list.cc:519] [default] Level-0 commit table #67 started
Oct  1 09:53:06 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:53:06.076102) [db/memtable_list.cc:722] [default] Level-0 commit table #67: memtable #1 done
Oct  1 09:53:06 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:53:06.076114) EVENT_LOG_v1 {"time_micros": 1759326786076110, "job": 35, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  1 09:53:06 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:53:06.076133) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  1 09:53:06 np0005464214 ceph-mon[74802]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 35] Try to delete WAL files size 2002166, prev total WAL file size 2002166, number of live WAL files 2.
Oct  1 09:53:06 np0005464214 ceph-mon[74802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000063.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  1 09:53:06 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:53:06.077244) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032353130' seq:72057594037927935, type:22 .. '7061786F730032373632' seq:0, type:0; will stop at (end)
Oct  1 09:53:06 np0005464214 ceph-mon[74802]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 36] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  1 09:53:06 np0005464214 ceph-mon[74802]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 35 Base level 0, inputs: [67(1920KB)], [65(7352KB)]
Oct  1 09:53:06 np0005464214 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759326786077272, "job": 36, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [67], "files_L6": [65], "score": -1, "input_data_size": 9495493, "oldest_snapshot_seqno": -1}
Oct  1 09:53:06 np0005464214 ceph-mon[74802]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 36] Generated table #68: 5062 keys, 7625578 bytes, temperature: kUnknown
Oct  1 09:53:06 np0005464214 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759326786114543, "cf_name": "default", "job": 36, "event": "table_file_creation", "file_number": 68, "file_size": 7625578, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7591597, "index_size": 20239, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12677, "raw_key_size": 128126, "raw_average_key_size": 25, "raw_value_size": 7499818, "raw_average_value_size": 1481, "num_data_blocks": 828, "num_entries": 5062, "num_filter_entries": 5062, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759324077, "oldest_key_time": 0, "file_creation_time": 1759326786, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e1fb0bd7-c6bf-40f2-8bfb-ffd039289f42", "db_session_id": "NJZTWL88H5HSB4Q4NEC9", "orig_file_number": 68, "seqno_to_time_mapping": "N/A"}}
Oct  1 09:53:06 np0005464214 ceph-mon[74802]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  1 09:53:06 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:53:06.114898) [db/compaction/compaction_job.cc:1663] [default] [JOB 36] Compacted 1@0 + 1@6 files to L6 => 7625578 bytes
Oct  1 09:53:06 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:53:06.116149) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 254.2 rd, 204.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.9, 7.2 +0.0 blob) out(7.3 +0.0 blob), read-write-amplify(8.7) write-amplify(3.9) OK, records in: 6099, records dropped: 1037 output_compression: NoCompression
Oct  1 09:53:06 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:53:06.116176) EVENT_LOG_v1 {"time_micros": 1759326786116164, "job": 36, "event": "compaction_finished", "compaction_time_micros": 37357, "compaction_time_cpu_micros": 18364, "output_level": 6, "num_output_files": 1, "total_output_size": 7625578, "num_input_records": 6099, "num_output_records": 5062, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  1 09:53:06 np0005464214 ceph-mon[74802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000067.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  1 09:53:06 np0005464214 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759326786116972, "job": 36, "event": "table_file_deletion", "file_number": 67}
Oct  1 09:53:06 np0005464214 ceph-mon[74802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000065.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  1 09:53:06 np0005464214 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759326786119717, "job": 36, "event": "table_file_deletion", "file_number": 65}
Oct  1 09:53:06 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:53:06.077152) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 09:53:06 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:53:06.120187) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 09:53:06 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:53:06.120198) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 09:53:06 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:53:06.120202) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 09:53:06 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:53:06.120205) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 09:53:06 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:53:06.120208) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 09:53:06 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1495: 305 pgs: 305 active+clean; 41 MiB data, 221 MiB used, 60 GiB / 60 GiB avail; 165 KiB/s rd, 13 KiB/s wr, 228 op/s
Oct  1 09:53:07 np0005464214 podman[286204]: 2025-10-01 13:53:07.556260481 +0000 UTC m=+0.091098157 container health_status dfa509645741d50db256c392f2c7e50ef8a1653f296eb853aefbe3d2ba4746c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20250923, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent)
Oct  1 09:53:07 np0005464214 podman[286202]: 2025-10-01 13:53:07.56318033 +0000 UTC m=+0.109600824 container health_status a1dc94033b3cdaf0c1939938a446bc6911aa0e2bf0fa0c22c3e618390e8d96e1 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20250923, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct  1 09:53:07 np0005464214 podman[286203]: 2025-10-01 13:53:07.567978193 +0000 UTC m=+0.105597237 container health_status c13c85560319b246b9406aed1b6b3015b2c424d3ffff37d0301b6634f5976b0d (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20250923, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Oct  1 09:53:07 np0005464214 podman[286201]: 2025-10-01 13:53:07.595975193 +0000 UTC m=+0.142332265 container health_status 583cde33fa111e3f81ff9a7adf51b7bee43cd123d54d60383b2d474b3b5066ad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.build-date=20250923, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct  1 09:53:07 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e175 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 09:53:07 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e175 do_prune osdmap full prune enabled
Oct  1 09:53:07 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e176 e176: 3 total, 3 up, 3 in
Oct  1 09:53:07 np0005464214 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e176: 3 total, 3 up, 3 in
Oct  1 09:53:08 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1497: 305 pgs: 305 active+clean; 41 MiB data, 239 MiB used, 60 GiB / 60 GiB avail; 97 KiB/s rd, 11 KiB/s wr, 140 op/s
Oct  1 09:53:10 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1498: 305 pgs: 305 active+clean; 41 MiB data, 239 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s rd, 2.0 KiB/s wr, 18 op/s
Oct  1 09:53:12 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e176 do_prune osdmap full prune enabled
Oct  1 09:53:12 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e177 e177: 3 total, 3 up, 3 in
Oct  1 09:53:12 np0005464214 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e177: 3 total, 3 up, 3 in
Oct  1 09:53:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:53:12.320 161890 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 09:53:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:53:12.320 161890 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 09:53:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:53:12.321 161890 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 09:53:12 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e177 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 09:53:12 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1500: 305 pgs: 5 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 298 active+clean; 41 MiB data, 239 MiB used, 60 GiB / 60 GiB avail; 41 KiB/s rd, 4.3 KiB/s wr, 57 op/s
Oct  1 09:53:13 np0005464214 nova_compute[260022]: 2025-10-01 13:53:13.344 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 09:53:13 np0005464214 nova_compute[260022]: 2025-10-01 13:53:13.385 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 09:53:13 np0005464214 nova_compute[260022]: 2025-10-01 13:53:13.385 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 09:53:13 np0005464214 nova_compute[260022]: 2025-10-01 13:53:13.386 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 09:53:13 np0005464214 nova_compute[260022]: 2025-10-01 13:53:13.386 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  1 09:53:13 np0005464214 nova_compute[260022]: 2025-10-01 13:53:13.387 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 09:53:13 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  1 09:53:13 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1260530635' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  1 09:53:13 np0005464214 nova_compute[260022]: 2025-10-01 13:53:13.849 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.462s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 09:53:14 np0005464214 nova_compute[260022]: 2025-10-01 13:53:14.128 2 WARNING nova.virt.libvirt.driver [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  1 09:53:14 np0005464214 nova_compute[260022]: 2025-10-01 13:53:14.131 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5128MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  1 09:53:14 np0005464214 nova_compute[260022]: 2025-10-01 13:53:14.131 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 09:53:14 np0005464214 nova_compute[260022]: 2025-10-01 13:53:14.132 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 09:53:14 np0005464214 nova_compute[260022]: 2025-10-01 13:53:14.247 2 INFO nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Instance 3d91db2b-812b-47ea-a0f8-0384b4c68597 has allocations against this compute host but is not found in the database.#033[00m
Oct  1 09:53:14 np0005464214 nova_compute[260022]: 2025-10-01 13:53:14.263 2 INFO nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Instance 84424f30-80d3-425e-b60f-86809ad3c076 has allocations against this compute host but is not found in the database.#033[00m
Oct  1 09:53:14 np0005464214 nova_compute[260022]: 2025-10-01 13:53:14.264 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  1 09:53:14 np0005464214 nova_compute[260022]: 2025-10-01 13:53:14.264 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  1 09:53:14 np0005464214 nova_compute[260022]: 2025-10-01 13:53:14.329 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 09:53:14 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  1 09:53:14 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4242115474' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  1 09:53:14 np0005464214 nova_compute[260022]: 2025-10-01 13:53:14.807 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.478s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 09:53:14 np0005464214 nova_compute[260022]: 2025-10-01 13:53:14.816 2 DEBUG nova.compute.provider_tree [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Inventory has not changed in ProviderTree for provider: c1b9017d-7e6f-44ea-9ee2-bc19313d736f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  1 09:53:14 np0005464214 nova_compute[260022]: 2025-10-01 13:53:14.844 2 DEBUG nova.scheduler.client.report [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Inventory has not changed for provider c1b9017d-7e6f-44ea-9ee2-bc19313d736f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  1 09:53:14 np0005464214 nova_compute[260022]: 2025-10-01 13:53:14.847 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  1 09:53:14 np0005464214 nova_compute[260022]: 2025-10-01 13:53:14.847 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.715s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 09:53:14 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1501: 305 pgs: 5 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 298 active+clean; 41 MiB data, 239 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 3.7 KiB/s wr, 49 op/s
Oct  1 09:53:16 np0005464214 nova_compute[260022]: 2025-10-01 13:53:16.848 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 09:53:16 np0005464214 nova_compute[260022]: 2025-10-01 13:53:16.850 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 09:53:16 np0005464214 nova_compute[260022]: 2025-10-01 13:53:16.850 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 09:53:16 np0005464214 nova_compute[260022]: 2025-10-01 13:53:16.851 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  1 09:53:16 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1502: 305 pgs: 5 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 298 active+clean; 41 MiB data, 239 MiB used, 60 GiB / 60 GiB avail; 30 KiB/s rd, 3.1 KiB/s wr, 41 op/s
Oct  1 09:53:17 np0005464214 nova_compute[260022]: 2025-10-01 13:53:17.346 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 09:53:17 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e177 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 09:53:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 09:53:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 09:53:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 09:53:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 09:53:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 09:53:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 09:53:18 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1503: 305 pgs: 305 active+clean; 41 MiB data, 239 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 24 op/s
Oct  1 09:53:20 np0005464214 nova_compute[260022]: 2025-10-01 13:53:20.346 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 09:53:20 np0005464214 nova_compute[260022]: 2025-10-01 13:53:20.347 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  1 09:53:20 np0005464214 nova_compute[260022]: 2025-10-01 13:53:20.347 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  1 09:53:20 np0005464214 nova_compute[260022]: 2025-10-01 13:53:20.514 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  1 09:53:20 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1504: 305 pgs: 305 active+clean; 41 MiB data, 239 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 24 op/s
Oct  1 09:53:22 np0005464214 nova_compute[260022]: 2025-10-01 13:53:22.345 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 09:53:22 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e177 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 09:53:22 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e177 do_prune osdmap full prune enabled
Oct  1 09:53:22 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e178 e178: 3 total, 3 up, 3 in
Oct  1 09:53:22 np0005464214 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e178: 3 total, 3 up, 3 in
Oct  1 09:53:22 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1506: 305 pgs: 305 active+clean; 41 MiB data, 239 MiB used, 60 GiB / 60 GiB avail; 102 B/s rd, 0 B/s wr, 0 op/s
Oct  1 09:53:23 np0005464214 nova_compute[260022]: 2025-10-01 13:53:23.346 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 09:53:24 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1507: 305 pgs: 305 active+clean; 41 MiB data, 239 MiB used, 60 GiB / 60 GiB avail; 102 B/s rd, 0 B/s wr, 0 op/s
Oct  1 09:53:26 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1508: 305 pgs: 305 active+clean; 41 MiB data, 239 MiB used, 60 GiB / 60 GiB avail; 102 B/s rd, 0 B/s wr, 0 op/s
Oct  1 09:53:27 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e178 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 09:53:28 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1509: 305 pgs: 305 active+clean; 41 MiB data, 239 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:53:30 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1510: 305 pgs: 305 active+clean; 41 MiB data, 239 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:53:32 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e178 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 09:53:32 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1511: 305 pgs: 305 active+clean; 41 MiB data, 239 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:53:34 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1512: 305 pgs: 305 active+clean; 41 MiB data, 239 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:53:36 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1513: 305 pgs: 305 active+clean; 41 MiB data, 239 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:53:37 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e178 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 09:53:38 np0005464214 podman[286336]: 2025-10-01 13:53:38.526759212 +0000 UTC m=+0.066346759 container health_status dfa509645741d50db256c392f2c7e50ef8a1653f296eb853aefbe3d2ba4746c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, 
maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20250923, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  1 09:53:38 np0005464214 podman[286334]: 2025-10-01 13:53:38.535573783 +0000 UTC m=+0.085711375 container health_status a1dc94033b3cdaf0c1939938a446bc6911aa0e2bf0fa0c22c3e618390e8d96e1 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, container_name=multipathd, org.label-schema.build-date=20250923, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd)
Oct  1 09:53:38 np0005464214 podman[286335]: 2025-10-01 13:53:38.56947445 +0000 UTC m=+0.113170158 container health_status c13c85560319b246b9406aed1b6b3015b2c424d3ffff37d0301b6634f5976b0d (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_id=iscsid, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.build-date=20250923, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Oct  1 09:53:38 np0005464214 podman[286333]: 2025-10-01 13:53:38.588701221 +0000 UTC m=+0.138511692 container health_status 583cde33fa111e3f81ff9a7adf51b7bee43cd123d54d60383b2d474b3b5066ad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20250923, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c, maintainer=OpenStack Kubernetes Operator team)
Oct  1 09:53:38 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1514: 305 pgs: 305 active+clean; 41 MiB data, 239 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:53:40 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1515: 305 pgs: 305 active+clean; 41 MiB data, 239 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:53:42 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e178 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 09:53:42 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1516: 305 pgs: 305 active+clean; 41 MiB data, 239 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:53:43 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:53:43.116 161890 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=14, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'd2:60:33', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '86:05:a5:2a:6f:f1'}, ipsec=False) old=SB_Global(nb_cfg=13) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  1 09:53:43 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:53:43.117 161890 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  1 09:53:44 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1517: 305 pgs: 305 active+clean; 41 MiB data, 239 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:53:46 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1518: 305 pgs: 305 active+clean; 41 MiB data, 239 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:53:47 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e178 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 09:53:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 09:53:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 09:53:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 09:53:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 09:53:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 09:53:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 09:53:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] Optimize plan auto_2025-10-01_13:53:47
Oct  1 09:53:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  1 09:53:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] do_upmap
Oct  1 09:53:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] pools ['vms', '.rgw.root', 'volumes', '.mgr', 'default.rgw.control', 'backups', 'default.rgw.log', 'default.rgw.meta', 'cephfs.cephfs.data', 'images', 'cephfs.cephfs.meta']
Oct  1 09:53:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] prepared 0/10 changes
Oct  1 09:53:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  1 09:53:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  1 09:53:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  1 09:53:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  1 09:53:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  1 09:53:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  1 09:53:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  1 09:53:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  1 09:53:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  1 09:53:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  1 09:53:48 np0005464214 ceph-mgr[75103]: client.0 ms_handle_reset on v2:192.168.122.100:6800/2102413293
Oct  1 09:53:48 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1519: 305 pgs: 305 active+clean; 41 MiB data, 239 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:53:50 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1520: 305 pgs: 305 active+clean; 41 MiB data, 239 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:53:51 np0005464214 podman[286583]: 2025-10-01 13:53:51.613618178 +0000 UTC m=+0.070730799 container exec dfadbb96d7d51fa7c2ed721145616ced603621ae12bae5125feaf16532ef7320 (image=quay.io/ceph/ceph:v18, name=ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-mon-compute-0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 09:53:51 np0005464214 podman[286583]: 2025-10-01 13:53:51.711330333 +0000 UTC m=+0.168442864 container exec_died dfadbb96d7d51fa7c2ed721145616ced603621ae12bae5125feaf16532ef7320 (image=quay.io/ceph/ceph:v18, name=ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct  1 09:53:52 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:53:52.124 161890 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=7280030e-2ba6-406c-9fae-f8284a927c47, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '14'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  1 09:53:52 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  1 09:53:52 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:53:52 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  1 09:53:52 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:53:52 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e178 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 09:53:53 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1521: 305 pgs: 305 active+clean; 41 MiB data, 239 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:53:53 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:53:53 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:53:53 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  1 09:53:53 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  1 09:53:53 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  1 09:53:53 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  1 09:53:53 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  1 09:53:53 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:53:53 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev 4461026a-2da3-40a4-bcb3-b8048a76bbef does not exist
Oct  1 09:53:53 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev 2dd288d1-fa30-46f1-9d66-d548258e4640 does not exist
Oct  1 09:53:53 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev 687be7e7-6f97-4dbf-a935-e64a6ea7305f does not exist
Oct  1 09:53:53 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  1 09:53:53 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  1 09:53:53 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  1 09:53:53 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  1 09:53:53 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  1 09:53:53 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  1 09:53:54 np0005464214 podman[287013]: 2025-10-01 13:53:54.381220718 +0000 UTC m=+0.063690655 container create 2a4ca8786112616b94b68142c7385150b42b06e0d139558b2bba4204a76f05c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_swartz, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 09:53:54 np0005464214 systemd[1]: Started libpod-conmon-2a4ca8786112616b94b68142c7385150b42b06e0d139558b2bba4204a76f05c7.scope.
Oct  1 09:53:54 np0005464214 podman[287013]: 2025-10-01 13:53:54.354283822 +0000 UTC m=+0.036753799 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 09:53:54 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:53:54 np0005464214 podman[287013]: 2025-10-01 13:53:54.481563796 +0000 UTC m=+0.164033783 container init 2a4ca8786112616b94b68142c7385150b42b06e0d139558b2bba4204a76f05c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_swartz, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct  1 09:53:54 np0005464214 podman[287013]: 2025-10-01 13:53:54.492410621 +0000 UTC m=+0.174880548 container start 2a4ca8786112616b94b68142c7385150b42b06e0d139558b2bba4204a76f05c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_swartz, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 09:53:54 np0005464214 podman[287013]: 2025-10-01 13:53:54.496539042 +0000 UTC m=+0.179008969 container attach 2a4ca8786112616b94b68142c7385150b42b06e0d139558b2bba4204a76f05c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_swartz, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  1 09:53:54 np0005464214 pensive_swartz[287030]: 167 167
Oct  1 09:53:54 np0005464214 systemd[1]: libpod-2a4ca8786112616b94b68142c7385150b42b06e0d139558b2bba4204a76f05c7.scope: Deactivated successfully.
Oct  1 09:53:54 np0005464214 podman[287013]: 2025-10-01 13:53:54.5015122 +0000 UTC m=+0.183982137 container died 2a4ca8786112616b94b68142c7385150b42b06e0d139558b2bba4204a76f05c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_swartz, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Oct  1 09:53:54 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  1 09:53:54 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:53:54 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  1 09:53:54 np0005464214 systemd[1]: var-lib-containers-storage-overlay-ee0d854e017ce7f661e23df3f8dc27a4b53ef942e4bf404e5a1f05fff56711dd-merged.mount: Deactivated successfully.
Oct  1 09:53:54 np0005464214 podman[287013]: 2025-10-01 13:53:54.5603611 +0000 UTC m=+0.242831027 container remove 2a4ca8786112616b94b68142c7385150b42b06e0d139558b2bba4204a76f05c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_swartz, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 09:53:54 np0005464214 systemd[1]: libpod-conmon-2a4ca8786112616b94b68142c7385150b42b06e0d139558b2bba4204a76f05c7.scope: Deactivated successfully.
Oct  1 09:53:54 np0005464214 podman[287052]: 2025-10-01 13:53:54.816597173 +0000 UTC m=+0.064382087 container create 9489b3c26c5960adc47117283d677670dd00804ecf2b9bccbf121496c5c63466 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_thompson, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 09:53:54 np0005464214 systemd[1]: Started libpod-conmon-9489b3c26c5960adc47117283d677670dd00804ecf2b9bccbf121496c5c63466.scope.
Oct  1 09:53:54 np0005464214 podman[287052]: 2025-10-01 13:53:54.791506125 +0000 UTC m=+0.039291029 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 09:53:54 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:53:54 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8cf349d81051b2294b7d7800a8da4f9c9450a339b0cc48d3a940252cf55d9bb3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 09:53:54 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8cf349d81051b2294b7d7800a8da4f9c9450a339b0cc48d3a940252cf55d9bb3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 09:53:54 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8cf349d81051b2294b7d7800a8da4f9c9450a339b0cc48d3a940252cf55d9bb3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 09:53:54 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8cf349d81051b2294b7d7800a8da4f9c9450a339b0cc48d3a940252cf55d9bb3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 09:53:54 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8cf349d81051b2294b7d7800a8da4f9c9450a339b0cc48d3a940252cf55d9bb3/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  1 09:53:54 np0005464214 podman[287052]: 2025-10-01 13:53:54.934038054 +0000 UTC m=+0.181823028 container init 9489b3c26c5960adc47117283d677670dd00804ecf2b9bccbf121496c5c63466 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_thompson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 09:53:54 np0005464214 podman[287052]: 2025-10-01 13:53:54.955553388 +0000 UTC m=+0.203338302 container start 9489b3c26c5960adc47117283d677670dd00804ecf2b9bccbf121496c5c63466 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_thompson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Oct  1 09:53:54 np0005464214 podman[287052]: 2025-10-01 13:53:54.959891685 +0000 UTC m=+0.207676599 container attach 9489b3c26c5960adc47117283d677670dd00804ecf2b9bccbf121496c5c63466 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_thompson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Oct  1 09:53:55 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1522: 305 pgs: 305 active+clean; 41 MiB data, 239 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:53:55 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  1 09:53:55 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3108378686' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  1 09:53:55 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  1 09:53:55 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3108378686' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  1 09:53:56 np0005464214 eloquent_thompson[287069]: --> passed data devices: 0 physical, 3 LVM
Oct  1 09:53:56 np0005464214 eloquent_thompson[287069]: --> relative data size: 1.0
Oct  1 09:53:56 np0005464214 eloquent_thompson[287069]: --> All data devices are unavailable
Oct  1 09:53:56 np0005464214 systemd[1]: libpod-9489b3c26c5960adc47117283d677670dd00804ecf2b9bccbf121496c5c63466.scope: Deactivated successfully.
Oct  1 09:53:56 np0005464214 systemd[1]: libpod-9489b3c26c5960adc47117283d677670dd00804ecf2b9bccbf121496c5c63466.scope: Consumed 1.087s CPU time.
Oct  1 09:53:56 np0005464214 podman[287052]: 2025-10-01 13:53:56.090909893 +0000 UTC m=+1.338694777 container died 9489b3c26c5960adc47117283d677670dd00804ecf2b9bccbf121496c5c63466 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_thompson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 09:53:56 np0005464214 systemd[1]: var-lib-containers-storage-overlay-8cf349d81051b2294b7d7800a8da4f9c9450a339b0cc48d3a940252cf55d9bb3-merged.mount: Deactivated successfully.
Oct  1 09:53:56 np0005464214 podman[287052]: 2025-10-01 13:53:56.15059731 +0000 UTC m=+1.398382244 container remove 9489b3c26c5960adc47117283d677670dd00804ecf2b9bccbf121496c5c63466 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_thompson, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct  1 09:53:56 np0005464214 systemd[1]: libpod-conmon-9489b3c26c5960adc47117283d677670dd00804ecf2b9bccbf121496c5c63466.scope: Deactivated successfully.
Oct  1 09:53:56 np0005464214 podman[287253]: 2025-10-01 13:53:56.988127283 +0000 UTC m=+0.064801201 container create 268a41d55feb6abdd2953e86e215be2a1c0cf2629bb2dc6aef49887961155bfc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_varahamihira, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 09:53:57 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1523: 305 pgs: 305 active+clean; 41 MiB data, 239 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:53:57 np0005464214 systemd[1]: Started libpod-conmon-268a41d55feb6abdd2953e86e215be2a1c0cf2629bb2dc6aef49887961155bfc.scope.
Oct  1 09:53:57 np0005464214 podman[287253]: 2025-10-01 13:53:56.965007638 +0000 UTC m=+0.041681596 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 09:53:57 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:53:57 np0005464214 podman[287253]: 2025-10-01 13:53:57.084363401 +0000 UTC m=+0.161037329 container init 268a41d55feb6abdd2953e86e215be2a1c0cf2629bb2dc6aef49887961155bfc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_varahamihira, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Oct  1 09:53:57 np0005464214 podman[287253]: 2025-10-01 13:53:57.094395629 +0000 UTC m=+0.171069507 container start 268a41d55feb6abdd2953e86e215be2a1c0cf2629bb2dc6aef49887961155bfc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_varahamihira, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct  1 09:53:57 np0005464214 podman[287253]: 2025-10-01 13:53:57.09817624 +0000 UTC m=+0.174850158 container attach 268a41d55feb6abdd2953e86e215be2a1c0cf2629bb2dc6aef49887961155bfc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_varahamihira, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct  1 09:53:57 np0005464214 stoic_varahamihira[287268]: 167 167
Oct  1 09:53:57 np0005464214 systemd[1]: libpod-268a41d55feb6abdd2953e86e215be2a1c0cf2629bb2dc6aef49887961155bfc.scope: Deactivated successfully.
Oct  1 09:53:57 np0005464214 podman[287253]: 2025-10-01 13:53:57.105076878 +0000 UTC m=+0.181750796 container died 268a41d55feb6abdd2953e86e215be2a1c0cf2629bb2dc6aef49887961155bfc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_varahamihira, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct  1 09:53:57 np0005464214 systemd[1]: var-lib-containers-storage-overlay-b96c131f4cfcc5f39644f582e1ff2f68f206289f674b40dfdac88a3d66b32698-merged.mount: Deactivated successfully.
Oct  1 09:53:57 np0005464214 podman[287253]: 2025-10-01 13:53:57.153043263 +0000 UTC m=+0.229717151 container remove 268a41d55feb6abdd2953e86e215be2a1c0cf2629bb2dc6aef49887961155bfc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_varahamihira, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct  1 09:53:57 np0005464214 systemd[1]: libpod-conmon-268a41d55feb6abdd2953e86e215be2a1c0cf2629bb2dc6aef49887961155bfc.scope: Deactivated successfully.
Oct  1 09:53:57 np0005464214 podman[287291]: 2025-10-01 13:53:57.398722249 +0000 UTC m=+0.056780635 container create 627ac44942b094affb8800c861fc69c883c9c7a7b8aabcfac2f6b45ad9324067 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_panini, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Oct  1 09:53:57 np0005464214 systemd[1]: Started libpod-conmon-627ac44942b094affb8800c861fc69c883c9c7a7b8aabcfac2f6b45ad9324067.scope.
Oct  1 09:53:57 np0005464214 podman[287291]: 2025-10-01 13:53:57.371434312 +0000 UTC m=+0.029492748 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 09:53:57 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:53:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] _maybe_adjust
Oct  1 09:53:57 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98fdbff4ad5d798a0eaa070502088da028b1fe242baf268a3a460a0d0a711db1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 09:53:57 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98fdbff4ad5d798a0eaa070502088da028b1fe242baf268a3a460a0d0a711db1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 09:53:57 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98fdbff4ad5d798a0eaa070502088da028b1fe242baf268a3a460a0d0a711db1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 09:53:57 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98fdbff4ad5d798a0eaa070502088da028b1fe242baf268a3a460a0d0a711db1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 09:53:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:53:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  1 09:53:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:53:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 09:53:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:53:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 09:53:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:53:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 09:53:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:53:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Oct  1 09:53:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:53:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct  1 09:53:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:53:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 09:53:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:53:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  1 09:53:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:53:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  1 09:53:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:53:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 09:53:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:53:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  1 09:53:57 np0005464214 podman[287291]: 2025-10-01 13:53:57.502744114 +0000 UTC m=+0.160802480 container init 627ac44942b094affb8800c861fc69c883c9c7a7b8aabcfac2f6b45ad9324067 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_panini, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 09:53:57 np0005464214 podman[287291]: 2025-10-01 13:53:57.513156755 +0000 UTC m=+0.171215111 container start 627ac44942b094affb8800c861fc69c883c9c7a7b8aabcfac2f6b45ad9324067 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_panini, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct  1 09:53:57 np0005464214 podman[287291]: 2025-10-01 13:53:57.516390738 +0000 UTC m=+0.174449094 container attach 627ac44942b094affb8800c861fc69c883c9c7a7b8aabcfac2f6b45ad9324067 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_panini, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 09:53:57 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e178 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 09:53:58 np0005464214 clever_panini[287307]: {
Oct  1 09:53:58 np0005464214 clever_panini[287307]:    "0": [
Oct  1 09:53:58 np0005464214 clever_panini[287307]:        {
Oct  1 09:53:58 np0005464214 clever_panini[287307]:            "devices": [
Oct  1 09:53:58 np0005464214 clever_panini[287307]:                "/dev/loop3"
Oct  1 09:53:58 np0005464214 clever_panini[287307]:            ],
Oct  1 09:53:58 np0005464214 clever_panini[287307]:            "lv_name": "ceph_lv0",
Oct  1 09:53:58 np0005464214 clever_panini[287307]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  1 09:53:58 np0005464214 clever_panini[287307]:            "lv_size": "21470642176",
Oct  1 09:53:58 np0005464214 clever_panini[287307]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 09:53:58 np0005464214 clever_panini[287307]:            "lv_uuid": "wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS",
Oct  1 09:53:58 np0005464214 clever_panini[287307]:            "name": "ceph_lv0",
Oct  1 09:53:58 np0005464214 clever_panini[287307]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  1 09:53:58 np0005464214 clever_panini[287307]:            "tags": {
Oct  1 09:53:58 np0005464214 clever_panini[287307]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  1 09:53:58 np0005464214 clever_panini[287307]:                "ceph.block_uuid": "wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS",
Oct  1 09:53:58 np0005464214 clever_panini[287307]:                "ceph.cephx_lockbox_secret": "",
Oct  1 09:53:58 np0005464214 clever_panini[287307]:                "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 09:53:58 np0005464214 clever_panini[287307]:                "ceph.cluster_name": "ceph",
Oct  1 09:53:58 np0005464214 clever_panini[287307]:                "ceph.crush_device_class": "",
Oct  1 09:53:58 np0005464214 clever_panini[287307]:                "ceph.encrypted": "0",
Oct  1 09:53:58 np0005464214 clever_panini[287307]:                "ceph.osd_fsid": "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982",
Oct  1 09:53:58 np0005464214 clever_panini[287307]:                "ceph.osd_id": "0",
Oct  1 09:53:58 np0005464214 clever_panini[287307]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 09:53:58 np0005464214 clever_panini[287307]:                "ceph.type": "block",
Oct  1 09:53:58 np0005464214 clever_panini[287307]:                "ceph.vdo": "0"
Oct  1 09:53:58 np0005464214 clever_panini[287307]:            },
Oct  1 09:53:58 np0005464214 clever_panini[287307]:            "type": "block",
Oct  1 09:53:58 np0005464214 clever_panini[287307]:            "vg_name": "ceph_vg0"
Oct  1 09:53:58 np0005464214 clever_panini[287307]:        }
Oct  1 09:53:58 np0005464214 clever_panini[287307]:    ],
Oct  1 09:53:58 np0005464214 clever_panini[287307]:    "1": [
Oct  1 09:53:58 np0005464214 clever_panini[287307]:        {
Oct  1 09:53:58 np0005464214 clever_panini[287307]:            "devices": [
Oct  1 09:53:58 np0005464214 clever_panini[287307]:                "/dev/loop4"
Oct  1 09:53:58 np0005464214 clever_panini[287307]:            ],
Oct  1 09:53:58 np0005464214 clever_panini[287307]:            "lv_name": "ceph_lv1",
Oct  1 09:53:58 np0005464214 clever_panini[287307]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  1 09:53:58 np0005464214 clever_panini[287307]:            "lv_size": "21470642176",
Oct  1 09:53:58 np0005464214 clever_panini[287307]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f5852bc7-e830-489a-b8a9-42dfbbe71426,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 09:53:58 np0005464214 clever_panini[287307]:            "lv_uuid": "BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY",
Oct  1 09:53:58 np0005464214 clever_panini[287307]:            "name": "ceph_lv1",
Oct  1 09:53:58 np0005464214 clever_panini[287307]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  1 09:53:58 np0005464214 clever_panini[287307]:            "tags": {
Oct  1 09:53:58 np0005464214 clever_panini[287307]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  1 09:53:58 np0005464214 clever_panini[287307]:                "ceph.block_uuid": "BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY",
Oct  1 09:53:58 np0005464214 clever_panini[287307]:                "ceph.cephx_lockbox_secret": "",
Oct  1 09:53:58 np0005464214 clever_panini[287307]:                "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 09:53:58 np0005464214 clever_panini[287307]:                "ceph.cluster_name": "ceph",
Oct  1 09:53:58 np0005464214 clever_panini[287307]:                "ceph.crush_device_class": "",
Oct  1 09:53:58 np0005464214 clever_panini[287307]:                "ceph.encrypted": "0",
Oct  1 09:53:58 np0005464214 clever_panini[287307]:                "ceph.osd_fsid": "f5852bc7-e830-489a-b8a9-42dfbbe71426",
Oct  1 09:53:58 np0005464214 clever_panini[287307]:                "ceph.osd_id": "1",
Oct  1 09:53:58 np0005464214 clever_panini[287307]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 09:53:58 np0005464214 clever_panini[287307]:                "ceph.type": "block",
Oct  1 09:53:58 np0005464214 clever_panini[287307]:                "ceph.vdo": "0"
Oct  1 09:53:58 np0005464214 clever_panini[287307]:            },
Oct  1 09:53:58 np0005464214 clever_panini[287307]:            "type": "block",
Oct  1 09:53:58 np0005464214 clever_panini[287307]:            "vg_name": "ceph_vg1"
Oct  1 09:53:58 np0005464214 clever_panini[287307]:        }
Oct  1 09:53:58 np0005464214 clever_panini[287307]:    ],
Oct  1 09:53:58 np0005464214 clever_panini[287307]:    "2": [
Oct  1 09:53:58 np0005464214 clever_panini[287307]:        {
Oct  1 09:53:58 np0005464214 clever_panini[287307]:            "devices": [
Oct  1 09:53:58 np0005464214 clever_panini[287307]:                "/dev/loop5"
Oct  1 09:53:58 np0005464214 clever_panini[287307]:            ],
Oct  1 09:53:58 np0005464214 clever_panini[287307]:            "lv_name": "ceph_lv2",
Oct  1 09:53:58 np0005464214 clever_panini[287307]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  1 09:53:58 np0005464214 clever_panini[287307]:            "lv_size": "21470642176",
Oct  1 09:53:58 np0005464214 clever_panini[287307]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c4c937e2-a8a8-47c3-af37-fdedb6fff1f9,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 09:53:58 np0005464214 clever_panini[287307]:            "lv_uuid": "mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK",
Oct  1 09:53:58 np0005464214 clever_panini[287307]:            "name": "ceph_lv2",
Oct  1 09:53:58 np0005464214 clever_panini[287307]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  1 09:53:58 np0005464214 clever_panini[287307]:            "tags": {
Oct  1 09:53:58 np0005464214 clever_panini[287307]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  1 09:53:58 np0005464214 clever_panini[287307]:                "ceph.block_uuid": "mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK",
Oct  1 09:53:58 np0005464214 clever_panini[287307]:                "ceph.cephx_lockbox_secret": "",
Oct  1 09:53:58 np0005464214 clever_panini[287307]:                "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 09:53:58 np0005464214 clever_panini[287307]:                "ceph.cluster_name": "ceph",
Oct  1 09:53:58 np0005464214 clever_panini[287307]:                "ceph.crush_device_class": "",
Oct  1 09:53:58 np0005464214 clever_panini[287307]:                "ceph.encrypted": "0",
Oct  1 09:53:58 np0005464214 clever_panini[287307]:                "ceph.osd_fsid": "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9",
Oct  1 09:53:58 np0005464214 clever_panini[287307]:                "ceph.osd_id": "2",
Oct  1 09:53:58 np0005464214 clever_panini[287307]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 09:53:58 np0005464214 clever_panini[287307]:                "ceph.type": "block",
Oct  1 09:53:58 np0005464214 clever_panini[287307]:                "ceph.vdo": "0"
Oct  1 09:53:58 np0005464214 clever_panini[287307]:            },
Oct  1 09:53:58 np0005464214 clever_panini[287307]:            "type": "block",
Oct  1 09:53:58 np0005464214 clever_panini[287307]:            "vg_name": "ceph_vg2"
Oct  1 09:53:58 np0005464214 clever_panini[287307]:        }
Oct  1 09:53:58 np0005464214 clever_panini[287307]:    ]
Oct  1 09:53:58 np0005464214 clever_panini[287307]: }
Oct  1 09:53:58 np0005464214 systemd[1]: libpod-627ac44942b094affb8800c861fc69c883c9c7a7b8aabcfac2f6b45ad9324067.scope: Deactivated successfully.
Oct  1 09:53:58 np0005464214 podman[287291]: 2025-10-01 13:53:58.276938735 +0000 UTC m=+0.934997121 container died 627ac44942b094affb8800c861fc69c883c9c7a7b8aabcfac2f6b45ad9324067 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_panini, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 09:53:58 np0005464214 systemd[1]: var-lib-containers-storage-overlay-98fdbff4ad5d798a0eaa070502088da028b1fe242baf268a3a460a0d0a711db1-merged.mount: Deactivated successfully.
Oct  1 09:53:58 np0005464214 podman[287291]: 2025-10-01 13:53:58.338928694 +0000 UTC m=+0.996987040 container remove 627ac44942b094affb8800c861fc69c883c9c7a7b8aabcfac2f6b45ad9324067 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_panini, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 09:53:58 np0005464214 systemd[1]: libpod-conmon-627ac44942b094affb8800c861fc69c883c9c7a7b8aabcfac2f6b45ad9324067.scope: Deactivated successfully.
Oct  1 09:53:59 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1524: 305 pgs: 305 active+clean; 41 MiB data, 239 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:53:59 np0005464214 podman[287471]: 2025-10-01 13:53:59.225451963 +0000 UTC m=+0.065685928 container create 723443c7c776976f72f69d65e6a3d6c0b2030bd063b20564a09722fa194a1425 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_meitner, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct  1 09:53:59 np0005464214 systemd[1]: Started libpod-conmon-723443c7c776976f72f69d65e6a3d6c0b2030bd063b20564a09722fa194a1425.scope.
Oct  1 09:53:59 np0005464214 podman[287471]: 2025-10-01 13:53:59.194924914 +0000 UTC m=+0.035158929 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 09:53:59 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:53:59 np0005464214 podman[287471]: 2025-10-01 13:53:59.33203952 +0000 UTC m=+0.172273525 container init 723443c7c776976f72f69d65e6a3d6c0b2030bd063b20564a09722fa194a1425 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_meitner, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Oct  1 09:53:59 np0005464214 podman[287471]: 2025-10-01 13:53:59.343592858 +0000 UTC m=+0.183826823 container start 723443c7c776976f72f69d65e6a3d6c0b2030bd063b20564a09722fa194a1425 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_meitner, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 09:53:59 np0005464214 thirsty_meitner[287487]: 167 167
Oct  1 09:53:59 np0005464214 systemd[1]: libpod-723443c7c776976f72f69d65e6a3d6c0b2030bd063b20564a09722fa194a1425.scope: Deactivated successfully.
Oct  1 09:53:59 np0005464214 conmon[287487]: conmon 723443c7c776976f72f6 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-723443c7c776976f72f69d65e6a3d6c0b2030bd063b20564a09722fa194a1425.scope/container/memory.events
Oct  1 09:53:59 np0005464214 podman[287471]: 2025-10-01 13:53:59.360380871 +0000 UTC m=+0.200614896 container attach 723443c7c776976f72f69d65e6a3d6c0b2030bd063b20564a09722fa194a1425 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_meitner, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 09:53:59 np0005464214 podman[287471]: 2025-10-01 13:53:59.361026501 +0000 UTC m=+0.201260426 container died 723443c7c776976f72f69d65e6a3d6c0b2030bd063b20564a09722fa194a1425 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_meitner, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 09:53:59 np0005464214 systemd[1]: var-lib-containers-storage-overlay-15aa125f70a38bdb19d3a021fb538692b7c06eb0c6c4b038555965fe1671345e-merged.mount: Deactivated successfully.
Oct  1 09:53:59 np0005464214 podman[287471]: 2025-10-01 13:53:59.404670318 +0000 UTC m=+0.244904273 container remove 723443c7c776976f72f69d65e6a3d6c0b2030bd063b20564a09722fa194a1425 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_meitner, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 09:53:59 np0005464214 systemd[1]: libpod-conmon-723443c7c776976f72f69d65e6a3d6c0b2030bd063b20564a09722fa194a1425.scope: Deactivated successfully.
Oct  1 09:53:59 np0005464214 podman[287511]: 2025-10-01 13:53:59.62694022 +0000 UTC m=+0.070458640 container create 8724cff83ca69b78f3778ce41288182d9fb2f24bf791fb314483ba95224d2779 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_albattani, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 09:53:59 np0005464214 systemd[1]: Started libpod-conmon-8724cff83ca69b78f3778ce41288182d9fb2f24bf791fb314483ba95224d2779.scope.
Oct  1 09:53:59 np0005464214 podman[287511]: 2025-10-01 13:53:59.598559388 +0000 UTC m=+0.042077878 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 09:53:59 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:53:59 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b9a7f14b519656384e4689ea53fee3da27a7d3f56c51c1944652bea41db7e806/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 09:53:59 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b9a7f14b519656384e4689ea53fee3da27a7d3f56c51c1944652bea41db7e806/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 09:53:59 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b9a7f14b519656384e4689ea53fee3da27a7d3f56c51c1944652bea41db7e806/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 09:53:59 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b9a7f14b519656384e4689ea53fee3da27a7d3f56c51c1944652bea41db7e806/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 09:53:59 np0005464214 podman[287511]: 2025-10-01 13:53:59.739224148 +0000 UTC m=+0.182742628 container init 8724cff83ca69b78f3778ce41288182d9fb2f24bf791fb314483ba95224d2779 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_albattani, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct  1 09:53:59 np0005464214 podman[287511]: 2025-10-01 13:53:59.75345337 +0000 UTC m=+0.196971800 container start 8724cff83ca69b78f3778ce41288182d9fb2f24bf791fb314483ba95224d2779 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_albattani, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 09:53:59 np0005464214 podman[287511]: 2025-10-01 13:53:59.757654854 +0000 UTC m=+0.201173284 container attach 8724cff83ca69b78f3778ce41288182d9fb2f24bf791fb314483ba95224d2779 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_albattani, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef)
Oct  1 09:54:00 np0005464214 optimistic_albattani[287528]: {
Oct  1 09:54:00 np0005464214 optimistic_albattani[287528]:    "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982": {
Oct  1 09:54:00 np0005464214 optimistic_albattani[287528]:        "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 09:54:00 np0005464214 optimistic_albattani[287528]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  1 09:54:00 np0005464214 optimistic_albattani[287528]:        "osd_id": 0,
Oct  1 09:54:00 np0005464214 optimistic_albattani[287528]:        "osd_uuid": "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982",
Oct  1 09:54:00 np0005464214 optimistic_albattani[287528]:        "type": "bluestore"
Oct  1 09:54:00 np0005464214 optimistic_albattani[287528]:    },
Oct  1 09:54:00 np0005464214 optimistic_albattani[287528]:    "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9": {
Oct  1 09:54:00 np0005464214 optimistic_albattani[287528]:        "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 09:54:00 np0005464214 optimistic_albattani[287528]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  1 09:54:00 np0005464214 optimistic_albattani[287528]:        "osd_id": 2,
Oct  1 09:54:00 np0005464214 optimistic_albattani[287528]:        "osd_uuid": "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9",
Oct  1 09:54:00 np0005464214 optimistic_albattani[287528]:        "type": "bluestore"
Oct  1 09:54:00 np0005464214 optimistic_albattani[287528]:    },
Oct  1 09:54:00 np0005464214 optimistic_albattani[287528]:    "f5852bc7-e830-489a-b8a9-42dfbbe71426": {
Oct  1 09:54:00 np0005464214 optimistic_albattani[287528]:        "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 09:54:00 np0005464214 optimistic_albattani[287528]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  1 09:54:00 np0005464214 optimistic_albattani[287528]:        "osd_id": 1,
Oct  1 09:54:00 np0005464214 optimistic_albattani[287528]:        "osd_uuid": "f5852bc7-e830-489a-b8a9-42dfbbe71426",
Oct  1 09:54:00 np0005464214 optimistic_albattani[287528]:        "type": "bluestore"
Oct  1 09:54:00 np0005464214 optimistic_albattani[287528]:    }
Oct  1 09:54:00 np0005464214 optimistic_albattani[287528]: }
Oct  1 09:54:00 np0005464214 systemd[1]: libpod-8724cff83ca69b78f3778ce41288182d9fb2f24bf791fb314483ba95224d2779.scope: Deactivated successfully.
Oct  1 09:54:00 np0005464214 podman[287511]: 2025-10-01 13:54:00.913328375 +0000 UTC m=+1.356846795 container died 8724cff83ca69b78f3778ce41288182d9fb2f24bf791fb314483ba95224d2779 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_albattani, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 09:54:00 np0005464214 systemd[1]: libpod-8724cff83ca69b78f3778ce41288182d9fb2f24bf791fb314483ba95224d2779.scope: Consumed 1.164s CPU time.
Oct  1 09:54:00 np0005464214 systemd[1]: var-lib-containers-storage-overlay-b9a7f14b519656384e4689ea53fee3da27a7d3f56c51c1944652bea41db7e806-merged.mount: Deactivated successfully.
Oct  1 09:54:00 np0005464214 podman[287511]: 2025-10-01 13:54:00.997124948 +0000 UTC m=+1.440643368 container remove 8724cff83ca69b78f3778ce41288182d9fb2f24bf791fb314483ba95224d2779 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_albattani, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 09:54:01 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1525: 305 pgs: 305 active+clean; 41 MiB data, 239 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:54:01 np0005464214 systemd[1]: libpod-conmon-8724cff83ca69b78f3778ce41288182d9fb2f24bf791fb314483ba95224d2779.scope: Deactivated successfully.
Oct  1 09:54:01 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  1 09:54:01 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:54:01 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  1 09:54:01 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:54:01 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev a3e09655-3758-4374-af6c-ac7500b3c1a8 does not exist
Oct  1 09:54:01 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev 72b3a531-36f3-4d4b-919c-26d1c93788ea does not exist
Oct  1 09:54:02 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:54:02 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:54:02 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e178 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 09:54:03 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1526: 305 pgs: 305 active+clean; 41 MiB data, 239 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:54:04 np0005464214 nova_compute[260022]: 2025-10-01 13:54:04.346 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 09:54:05 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1527: 305 pgs: 305 active+clean; 41 MiB data, 239 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:54:07 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1528: 305 pgs: 305 active+clean; 41 MiB data, 239 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:54:07 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e178 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 09:54:09 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1529: 305 pgs: 305 active+clean; 41 MiB data, 239 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:54:09 np0005464214 podman[287623]: 2025-10-01 13:54:09.538020875 +0000 UTC m=+0.078365511 container health_status a1dc94033b3cdaf0c1939938a446bc6911aa0e2bf0fa0c22c3e618390e8d96e1 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20250923, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team)
Oct  1 09:54:09 np0005464214 podman[287625]: 2025-10-01 13:54:09.560709416 +0000 UTC m=+0.081478020 container health_status dfa509645741d50db256c392f2c7e50ef8a1653f296eb853aefbe3d2ba4746c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_id=ovn_metadata_agent, org.label-schema.build-date=20250923, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  1 09:54:09 np0005464214 podman[287624]: 2025-10-01 13:54:09.561769761 +0000 UTC m=+0.096789148 container health_status c13c85560319b246b9406aed1b6b3015b2c424d3ffff37d0301b6634f5976b0d (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=iscsid, org.label-schema.vendor=CentOS, config_id=iscsid, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20250923, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 09:54:09 np0005464214 podman[287622]: 2025-10-01 13:54:09.584661467 +0000 UTC m=+0.121457350 container health_status 583cde33fa111e3f81ff9a7adf51b7bee43cd123d54d60383b2d474b3b5066ad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=36bccb96575468ec919301205d8daa2c, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250923, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, config_id=ovn_controller, container_name=ovn_controller)
Oct  1 09:54:11 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1530: 305 pgs: 305 active+clean; 41 MiB data, 239 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:54:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:54:12.322 161890 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 09:54:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:54:12.323 161890 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 09:54:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:54:12.323 161890 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 09:54:12 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e178 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 09:54:13 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1531: 305 pgs: 305 active+clean; 41 MiB data, 239 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:54:13 np0005464214 nova_compute[260022]: 2025-10-01 13:54:13.346 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 09:54:13 np0005464214 nova_compute[260022]: 2025-10-01 13:54:13.374 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 09:54:13 np0005464214 nova_compute[260022]: 2025-10-01 13:54:13.375 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 09:54:13 np0005464214 nova_compute[260022]: 2025-10-01 13:54:13.375 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 09:54:13 np0005464214 nova_compute[260022]: 2025-10-01 13:54:13.376 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  1 09:54:13 np0005464214 nova_compute[260022]: 2025-10-01 13:54:13.376 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 09:54:13 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  1 09:54:13 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1946964686' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  1 09:54:13 np0005464214 nova_compute[260022]: 2025-10-01 13:54:13.803 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.427s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 09:54:13 np0005464214 nova_compute[260022]: 2025-10-01 13:54:13.985 2 WARNING nova.virt.libvirt.driver [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  1 09:54:13 np0005464214 nova_compute[260022]: 2025-10-01 13:54:13.986 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5094MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  1 09:54:13 np0005464214 nova_compute[260022]: 2025-10-01 13:54:13.986 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 09:54:13 np0005464214 nova_compute[260022]: 2025-10-01 13:54:13.986 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 09:54:14 np0005464214 nova_compute[260022]: 2025-10-01 13:54:14.062 2 INFO nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Instance 3d91db2b-812b-47ea-a0f8-0384b4c68597 has allocations against this compute host but is not found in the database.#033[00m
Oct  1 09:54:14 np0005464214 nova_compute[260022]: 2025-10-01 13:54:14.085 2 INFO nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Instance 84424f30-80d3-425e-b60f-86809ad3c076 has allocations against this compute host but is not found in the database.#033[00m
Oct  1 09:54:14 np0005464214 nova_compute[260022]: 2025-10-01 13:54:14.086 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  1 09:54:14 np0005464214 nova_compute[260022]: 2025-10-01 13:54:14.086 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  1 09:54:14 np0005464214 nova_compute[260022]: 2025-10-01 13:54:14.152 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 09:54:14 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  1 09:54:14 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1486342018' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  1 09:54:14 np0005464214 nova_compute[260022]: 2025-10-01 13:54:14.594 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 09:54:14 np0005464214 nova_compute[260022]: 2025-10-01 13:54:14.603 2 DEBUG nova.compute.provider_tree [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Inventory has not changed in ProviderTree for provider: c1b9017d-7e6f-44ea-9ee2-bc19313d736f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  1 09:54:14 np0005464214 nova_compute[260022]: 2025-10-01 13:54:14.624 2 DEBUG nova.scheduler.client.report [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Inventory has not changed for provider c1b9017d-7e6f-44ea-9ee2-bc19313d736f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  1 09:54:14 np0005464214 nova_compute[260022]: 2025-10-01 13:54:14.626 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  1 09:54:14 np0005464214 nova_compute[260022]: 2025-10-01 13:54:14.627 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.640s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 09:54:15 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1532: 305 pgs: 305 active+clean; 41 MiB data, 239 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:54:17 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1533: 305 pgs: 305 active+clean; 41 MiB data, 239 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:54:17 np0005464214 nova_compute[260022]: 2025-10-01 13:54:17.627 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 09:54:17 np0005464214 nova_compute[260022]: 2025-10-01 13:54:17.628 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 09:54:17 np0005464214 nova_compute[260022]: 2025-10-01 13:54:17.628 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 09:54:17 np0005464214 nova_compute[260022]: 2025-10-01 13:54:17.628 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 09:54:17 np0005464214 nova_compute[260022]: 2025-10-01 13:54:17.628 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  1 09:54:17 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e178 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 09:54:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 09:54:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 09:54:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 09:54:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 09:54:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 09:54:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 09:54:19 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1534: 305 pgs: 305 active+clean; 41 MiB data, 239 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:54:21 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1535: 305 pgs: 305 active+clean; 41 MiB data, 239 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:54:22 np0005464214 nova_compute[260022]: 2025-10-01 13:54:22.345 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 09:54:22 np0005464214 nova_compute[260022]: 2025-10-01 13:54:22.346 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  1 09:54:22 np0005464214 nova_compute[260022]: 2025-10-01 13:54:22.346 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  1 09:54:22 np0005464214 nova_compute[260022]: 2025-10-01 13:54:22.375 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  1 09:54:22 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e178 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 09:54:23 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1536: 305 pgs: 305 active+clean; 41 MiB data, 239 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:54:23 np0005464214 nova_compute[260022]: 2025-10-01 13:54:23.345 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 09:54:25 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1537: 305 pgs: 305 active+clean; 41 MiB data, 239 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:54:25 np0005464214 nova_compute[260022]: 2025-10-01 13:54:25.345 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 09:54:27 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1538: 305 pgs: 305 active+clean; 41 MiB data, 239 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:54:27 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e178 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 09:54:29 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1539: 305 pgs: 305 active+clean; 41 MiB data, 239 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:54:29 np0005464214 nova_compute[260022]: 2025-10-01 13:54:29.340 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 09:54:31 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1540: 305 pgs: 305 active+clean; 41 MiB data, 239 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:54:32 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e178 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 09:54:33 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1541: 305 pgs: 305 active+clean; 41 MiB data, 239 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:54:35 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1542: 305 pgs: 305 active+clean; 41 MiB data, 239 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:54:37 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1543: 305 pgs: 305 active+clean; 41 MiB data, 239 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:54:37 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e178 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 09:54:39 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1544: 305 pgs: 305 active+clean; 41 MiB data, 239 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:54:40 np0005464214 podman[287753]: 2025-10-01 13:54:40.528805736 +0000 UTC m=+0.058921413 container health_status dfa509645741d50db256c392f2c7e50ef8a1653f296eb853aefbe3d2ba4746c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20250923, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct  1 09:54:40 np0005464214 podman[287748]: 2025-10-01 13:54:40.531059658 +0000 UTC m=+0.074561470 container health_status a1dc94033b3cdaf0c1939938a446bc6911aa0e2bf0fa0c22c3e618390e8d96e1 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250923)
Oct  1 09:54:40 np0005464214 podman[287749]: 2025-10-01 13:54:40.53268894 +0000 UTC m=+0.075727717 container health_status c13c85560319b246b9406aed1b6b3015b2c424d3ffff37d0301b6634f5976b0d (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250923, org.label-schema.vendor=CentOS)
Oct  1 09:54:40 np0005464214 podman[287747]: 2025-10-01 13:54:40.613926682 +0000 UTC m=+0.163782306 container health_status 583cde33fa111e3f81ff9a7adf51b7bee43cd123d54d60383b2d474b3b5066ad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=36bccb96575468ec919301205d8daa2c, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20250923, tcib_managed=true)
Oct  1 09:54:41 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1545: 305 pgs: 305 active+clean; 41 MiB data, 239 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:54:42 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e178 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 09:54:43 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1546: 305 pgs: 305 active+clean; 41 MiB data, 239 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:54:43 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:54:43.662 161890 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=15, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'd2:60:33', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '86:05:a5:2a:6f:f1'}, ipsec=False) old=SB_Global(nb_cfg=14) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  1 09:54:43 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:54:43.664 161890 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  1 09:54:45 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1547: 305 pgs: 305 active+clean; 41 MiB data, 239 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:54:47 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1548: 305 pgs: 305 active+clean; 41 MiB data, 239 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:54:47 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e178 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 09:54:47 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:54:47.666 161890 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=7280030e-2ba6-406c-9fae-f8284a927c47, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '15'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  1 09:54:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 09:54:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 09:54:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 09:54:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 09:54:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 09:54:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 09:54:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] Optimize plan auto_2025-10-01_13:54:47
Oct  1 09:54:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  1 09:54:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] do_upmap
Oct  1 09:54:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] pools ['images', 'default.rgw.log', 'default.rgw.control', 'vms', '.mgr', 'cephfs.cephfs.data', 'backups', 'volumes', 'default.rgw.meta', 'cephfs.cephfs.meta', '.rgw.root']
Oct  1 09:54:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] prepared 0/10 changes
Oct  1 09:54:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  1 09:54:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  1 09:54:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  1 09:54:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  1 09:54:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  1 09:54:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  1 09:54:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  1 09:54:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  1 09:54:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  1 09:54:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  1 09:54:49 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1549: 305 pgs: 305 active+clean; 41 MiB data, 239 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:54:51 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1550: 305 pgs: 305 active+clean; 41 MiB data, 239 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:54:52 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e178 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 09:54:53 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1551: 305 pgs: 305 active+clean; 41 MiB data, 239 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:54:55 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1552: 305 pgs: 305 active+clean; 41 MiB data, 239 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:54:55 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  1 09:54:55 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3134383617' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  1 09:54:55 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  1 09:54:55 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3134383617' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  1 09:54:57 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1553: 305 pgs: 305 active+clean; 41 MiB data, 239 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:54:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] _maybe_adjust
Oct  1 09:54:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:54:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  1 09:54:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:54:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 09:54:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:54:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 09:54:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:54:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 09:54:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:54:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Oct  1 09:54:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:54:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct  1 09:54:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:54:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 09:54:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:54:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  1 09:54:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:54:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  1 09:54:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:54:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 09:54:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:54:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  1 09:54:57 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e178 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 09:54:59 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1554: 305 pgs: 305 active+clean; 41 MiB data, 239 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:55:01 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1555: 305 pgs: 305 active+clean; 41 MiB data, 239 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:55:02 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  1 09:55:02 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  1 09:55:02 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  1 09:55:02 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  1 09:55:02 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  1 09:55:02 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:55:02 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev 02b0803a-067b-46b3-aa3b-553dbd10cf5d does not exist
Oct  1 09:55:02 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev 42c2fae2-4485-4776-935b-04729fbf8a24 does not exist
Oct  1 09:55:02 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev cc46bf8b-7217-4c72-a0e5-8722d1f255bd does not exist
Oct  1 09:55:02 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  1 09:55:02 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  1 09:55:02 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  1 09:55:02 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  1 09:55:02 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  1 09:55:02 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  1 09:55:02 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e178 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 09:55:03 np0005464214 podman[288096]: 2025-10-01 13:55:03.008161324 +0000 UTC m=+0.048522857 container create 770cc08890193b194482a453e78bc013b84f4f818f716586182b9883b4f9ca3c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_grothendieck, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct  1 09:55:03 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1556: 305 pgs: 305 active+clean; 41 MiB data, 239 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:55:03 np0005464214 systemd[1]: Started libpod-conmon-770cc08890193b194482a453e78bc013b84f4f818f716586182b9883b4f9ca3c.scope.
Oct  1 09:55:03 np0005464214 podman[288096]: 2025-10-01 13:55:02.986855175 +0000 UTC m=+0.027216668 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 09:55:03 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:55:03 np0005464214 podman[288096]: 2025-10-01 13:55:03.103052367 +0000 UTC m=+0.143413870 container init 770cc08890193b194482a453e78bc013b84f4f818f716586182b9883b4f9ca3c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_grothendieck, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Oct  1 09:55:03 np0005464214 podman[288096]: 2025-10-01 13:55:03.111946911 +0000 UTC m=+0.152308404 container start 770cc08890193b194482a453e78bc013b84f4f818f716586182b9883b4f9ca3c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_grothendieck, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Oct  1 09:55:03 np0005464214 podman[288096]: 2025-10-01 13:55:03.115975999 +0000 UTC m=+0.156337512 container attach 770cc08890193b194482a453e78bc013b84f4f818f716586182b9883b4f9ca3c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_grothendieck, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 09:55:03 np0005464214 great_grothendieck[288113]: 167 167
Oct  1 09:55:03 np0005464214 systemd[1]: libpod-770cc08890193b194482a453e78bc013b84f4f818f716586182b9883b4f9ca3c.scope: Deactivated successfully.
Oct  1 09:55:03 np0005464214 conmon[288113]: conmon 770cc08890193b194482 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-770cc08890193b194482a453e78bc013b84f4f818f716586182b9883b4f9ca3c.scope/container/memory.events
Oct  1 09:55:03 np0005464214 podman[288096]: 2025-10-01 13:55:03.11976556 +0000 UTC m=+0.160127083 container died 770cc08890193b194482a453e78bc013b84f4f818f716586182b9883b4f9ca3c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_grothendieck, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True)
Oct  1 09:55:03 np0005464214 systemd[1]: var-lib-containers-storage-overlay-1e6adf3597cc3127dff65741f3c45b600a5d6169ae72a9bd4d745d44e30ab1a6-merged.mount: Deactivated successfully.
Oct  1 09:55:03 np0005464214 podman[288096]: 2025-10-01 13:55:03.156435998 +0000 UTC m=+0.196797491 container remove 770cc08890193b194482a453e78bc013b84f4f818f716586182b9883b4f9ca3c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_grothendieck, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct  1 09:55:03 np0005464214 systemd[1]: libpod-conmon-770cc08890193b194482a453e78bc013b84f4f818f716586182b9883b4f9ca3c.scope: Deactivated successfully.
Oct  1 09:55:03 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  1 09:55:03 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:55:03 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  1 09:55:03 np0005464214 podman[288137]: 2025-10-01 13:55:03.34513447 +0000 UTC m=+0.054041852 container create 6d158e74b80e92273d14219ffcdb5bb3fea88cca064e14fb4a191ca9e5f81e67 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_elgamal, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Oct  1 09:55:03 np0005464214 systemd[1]: Started libpod-conmon-6d158e74b80e92273d14219ffcdb5bb3fea88cca064e14fb4a191ca9e5f81e67.scope.
Oct  1 09:55:03 np0005464214 podman[288137]: 2025-10-01 13:55:03.32347889 +0000 UTC m=+0.032386302 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 09:55:03 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:55:03 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bbc44f70903f087ccfc05bb69127f70fae32fdfccf774c532f73a29c2ee56bda/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 09:55:03 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bbc44f70903f087ccfc05bb69127f70fae32fdfccf774c532f73a29c2ee56bda/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 09:55:03 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bbc44f70903f087ccfc05bb69127f70fae32fdfccf774c532f73a29c2ee56bda/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 09:55:03 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bbc44f70903f087ccfc05bb69127f70fae32fdfccf774c532f73a29c2ee56bda/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 09:55:03 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bbc44f70903f087ccfc05bb69127f70fae32fdfccf774c532f73a29c2ee56bda/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  1 09:55:03 np0005464214 podman[288137]: 2025-10-01 13:55:03.459900517 +0000 UTC m=+0.168807989 container init 6d158e74b80e92273d14219ffcdb5bb3fea88cca064e14fb4a191ca9e5f81e67 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_elgamal, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True)
Oct  1 09:55:03 np0005464214 podman[288137]: 2025-10-01 13:55:03.468687417 +0000 UTC m=+0.177594799 container start 6d158e74b80e92273d14219ffcdb5bb3fea88cca064e14fb4a191ca9e5f81e67 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_elgamal, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct  1 09:55:03 np0005464214 podman[288137]: 2025-10-01 13:55:03.472468028 +0000 UTC m=+0.181375510 container attach 6d158e74b80e92273d14219ffcdb5bb3fea88cca064e14fb4a191ca9e5f81e67 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_elgamal, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 09:55:04 np0005464214 nova_compute[260022]: 2025-10-01 13:55:04.345 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 09:55:04 np0005464214 silly_elgamal[288154]: --> passed data devices: 0 physical, 3 LVM
Oct  1 09:55:04 np0005464214 silly_elgamal[288154]: --> relative data size: 1.0
Oct  1 09:55:04 np0005464214 silly_elgamal[288154]: --> All data devices are unavailable
Oct  1 09:55:04 np0005464214 systemd[1]: libpod-6d158e74b80e92273d14219ffcdb5bb3fea88cca064e14fb4a191ca9e5f81e67.scope: Deactivated successfully.
Oct  1 09:55:04 np0005464214 podman[288137]: 2025-10-01 13:55:04.619980719 +0000 UTC m=+1.328888121 container died 6d158e74b80e92273d14219ffcdb5bb3fea88cca064e14fb4a191ca9e5f81e67 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_elgamal, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 09:55:04 np0005464214 systemd[1]: libpod-6d158e74b80e92273d14219ffcdb5bb3fea88cca064e14fb4a191ca9e5f81e67.scope: Consumed 1.107s CPU time.
Oct  1 09:55:04 np0005464214 systemd[1]: var-lib-containers-storage-overlay-bbc44f70903f087ccfc05bb69127f70fae32fdfccf774c532f73a29c2ee56bda-merged.mount: Deactivated successfully.
Oct  1 09:55:04 np0005464214 podman[288137]: 2025-10-01 13:55:04.695599529 +0000 UTC m=+1.404506941 container remove 6d158e74b80e92273d14219ffcdb5bb3fea88cca064e14fb4a191ca9e5f81e67 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_elgamal, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 09:55:04 np0005464214 systemd[1]: libpod-conmon-6d158e74b80e92273d14219ffcdb5bb3fea88cca064e14fb4a191ca9e5f81e67.scope: Deactivated successfully.
Oct  1 09:55:05 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1557: 305 pgs: 305 active+clean; 41 MiB data, 239 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:55:05 np0005464214 podman[288335]: 2025-10-01 13:55:05.562865301 +0000 UTC m=+0.065024433 container create 67ff30fba13a8ae3dd37b7ff9a0f9806cc7fcd8300675131d0a37f360c4a341b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_lalande, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct  1 09:55:05 np0005464214 systemd[1]: Started libpod-conmon-67ff30fba13a8ae3dd37b7ff9a0f9806cc7fcd8300675131d0a37f360c4a341b.scope.
Oct  1 09:55:05 np0005464214 podman[288335]: 2025-10-01 13:55:05.533923059 +0000 UTC m=+0.036082231 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 09:55:05 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:55:05 np0005464214 podman[288335]: 2025-10-01 13:55:05.64944758 +0000 UTC m=+0.151606742 container init 67ff30fba13a8ae3dd37b7ff9a0f9806cc7fcd8300675131d0a37f360c4a341b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_lalande, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 09:55:05 np0005464214 podman[288335]: 2025-10-01 13:55:05.657875678 +0000 UTC m=+0.160034800 container start 67ff30fba13a8ae3dd37b7ff9a0f9806cc7fcd8300675131d0a37f360c4a341b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_lalande, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 09:55:05 np0005464214 podman[288335]: 2025-10-01 13:55:05.661722831 +0000 UTC m=+0.163881943 container attach 67ff30fba13a8ae3dd37b7ff9a0f9806cc7fcd8300675131d0a37f360c4a341b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_lalande, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 09:55:05 np0005464214 optimistic_lalande[288351]: 167 167
Oct  1 09:55:05 np0005464214 systemd[1]: libpod-67ff30fba13a8ae3dd37b7ff9a0f9806cc7fcd8300675131d0a37f360c4a341b.scope: Deactivated successfully.
Oct  1 09:55:05 np0005464214 podman[288335]: 2025-10-01 13:55:05.664912483 +0000 UTC m=+0.167071965 container died 67ff30fba13a8ae3dd37b7ff9a0f9806cc7fcd8300675131d0a37f360c4a341b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_lalande, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct  1 09:55:05 np0005464214 systemd[1]: var-lib-containers-storage-overlay-32e1e71e34d25b5015ac1e7f989bdd5a6d959b3fd18ea5592c706ed2bd4a8b65-merged.mount: Deactivated successfully.
Oct  1 09:55:05 np0005464214 podman[288335]: 2025-10-01 13:55:05.713032636 +0000 UTC m=+0.215191748 container remove 67ff30fba13a8ae3dd37b7ff9a0f9806cc7fcd8300675131d0a37f360c4a341b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_lalande, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 09:55:05 np0005464214 systemd[1]: libpod-conmon-67ff30fba13a8ae3dd37b7ff9a0f9806cc7fcd8300675131d0a37f360c4a341b.scope: Deactivated successfully.
Oct  1 09:55:05 np0005464214 podman[288374]: 2025-10-01 13:55:05.933070146 +0000 UTC m=+0.048174116 container create d0fbb16cc6f82fa4bbae55602f83685a8b47bed46268c86bb9dcf15e17d13ab3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_gates, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct  1 09:55:05 np0005464214 systemd[1]: Started libpod-conmon-d0fbb16cc6f82fa4bbae55602f83685a8b47bed46268c86bb9dcf15e17d13ab3.scope.
Oct  1 09:55:06 np0005464214 podman[288374]: 2025-10-01 13:55:05.907293215 +0000 UTC m=+0.022397215 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 09:55:06 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:55:06 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3764bbd1bef6015d70174e8e0f9b39a5ab415013337b6461d8f1da26e3740874/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 09:55:06 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3764bbd1bef6015d70174e8e0f9b39a5ab415013337b6461d8f1da26e3740874/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 09:55:06 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3764bbd1bef6015d70174e8e0f9b39a5ab415013337b6461d8f1da26e3740874/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 09:55:06 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3764bbd1bef6015d70174e8e0f9b39a5ab415013337b6461d8f1da26e3740874/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 09:55:06 np0005464214 podman[288374]: 2025-10-01 13:55:06.045161698 +0000 UTC m=+0.160265738 container init d0fbb16cc6f82fa4bbae55602f83685a8b47bed46268c86bb9dcf15e17d13ab3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_gates, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 09:55:06 np0005464214 podman[288374]: 2025-10-01 13:55:06.057407498 +0000 UTC m=+0.172511428 container start d0fbb16cc6f82fa4bbae55602f83685a8b47bed46268c86bb9dcf15e17d13ab3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_gates, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Oct  1 09:55:06 np0005464214 podman[288374]: 2025-10-01 13:55:06.061430176 +0000 UTC m=+0.176534206 container attach d0fbb16cc6f82fa4bbae55602f83685a8b47bed46268c86bb9dcf15e17d13ab3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_gates, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 09:55:06 np0005464214 practical_gates[288390]: {
Oct  1 09:55:06 np0005464214 practical_gates[288390]:    "0": [
Oct  1 09:55:06 np0005464214 practical_gates[288390]:        {
Oct  1 09:55:06 np0005464214 practical_gates[288390]:            "devices": [
Oct  1 09:55:06 np0005464214 practical_gates[288390]:                "/dev/loop3"
Oct  1 09:55:06 np0005464214 practical_gates[288390]:            ],
Oct  1 09:55:06 np0005464214 practical_gates[288390]:            "lv_name": "ceph_lv0",
Oct  1 09:55:06 np0005464214 practical_gates[288390]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  1 09:55:06 np0005464214 practical_gates[288390]:            "lv_size": "21470642176",
Oct  1 09:55:06 np0005464214 practical_gates[288390]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 09:55:06 np0005464214 practical_gates[288390]:            "lv_uuid": "wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS",
Oct  1 09:55:06 np0005464214 practical_gates[288390]:            "name": "ceph_lv0",
Oct  1 09:55:06 np0005464214 practical_gates[288390]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  1 09:55:06 np0005464214 practical_gates[288390]:            "tags": {
Oct  1 09:55:06 np0005464214 practical_gates[288390]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  1 09:55:06 np0005464214 practical_gates[288390]:                "ceph.block_uuid": "wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS",
Oct  1 09:55:06 np0005464214 practical_gates[288390]:                "ceph.cephx_lockbox_secret": "",
Oct  1 09:55:06 np0005464214 practical_gates[288390]:                "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 09:55:06 np0005464214 practical_gates[288390]:                "ceph.cluster_name": "ceph",
Oct  1 09:55:06 np0005464214 practical_gates[288390]:                "ceph.crush_device_class": "",
Oct  1 09:55:06 np0005464214 practical_gates[288390]:                "ceph.encrypted": "0",
Oct  1 09:55:06 np0005464214 practical_gates[288390]:                "ceph.osd_fsid": "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982",
Oct  1 09:55:06 np0005464214 practical_gates[288390]:                "ceph.osd_id": "0",
Oct  1 09:55:06 np0005464214 practical_gates[288390]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 09:55:06 np0005464214 practical_gates[288390]:                "ceph.type": "block",
Oct  1 09:55:06 np0005464214 practical_gates[288390]:                "ceph.vdo": "0"
Oct  1 09:55:06 np0005464214 practical_gates[288390]:            },
Oct  1 09:55:06 np0005464214 practical_gates[288390]:            "type": "block",
Oct  1 09:55:06 np0005464214 practical_gates[288390]:            "vg_name": "ceph_vg0"
Oct  1 09:55:06 np0005464214 practical_gates[288390]:        }
Oct  1 09:55:06 np0005464214 practical_gates[288390]:    ],
Oct  1 09:55:06 np0005464214 practical_gates[288390]:    "1": [
Oct  1 09:55:06 np0005464214 practical_gates[288390]:        {
Oct  1 09:55:06 np0005464214 practical_gates[288390]:            "devices": [
Oct  1 09:55:06 np0005464214 practical_gates[288390]:                "/dev/loop4"
Oct  1 09:55:06 np0005464214 practical_gates[288390]:            ],
Oct  1 09:55:06 np0005464214 practical_gates[288390]:            "lv_name": "ceph_lv1",
Oct  1 09:55:06 np0005464214 practical_gates[288390]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  1 09:55:06 np0005464214 practical_gates[288390]:            "lv_size": "21470642176",
Oct  1 09:55:06 np0005464214 practical_gates[288390]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f5852bc7-e830-489a-b8a9-42dfbbe71426,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 09:55:06 np0005464214 practical_gates[288390]:            "lv_uuid": "BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY",
Oct  1 09:55:06 np0005464214 practical_gates[288390]:            "name": "ceph_lv1",
Oct  1 09:55:06 np0005464214 practical_gates[288390]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  1 09:55:06 np0005464214 practical_gates[288390]:            "tags": {
Oct  1 09:55:06 np0005464214 practical_gates[288390]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  1 09:55:06 np0005464214 practical_gates[288390]:                "ceph.block_uuid": "BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY",
Oct  1 09:55:06 np0005464214 practical_gates[288390]:                "ceph.cephx_lockbox_secret": "",
Oct  1 09:55:06 np0005464214 practical_gates[288390]:                "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 09:55:06 np0005464214 practical_gates[288390]:                "ceph.cluster_name": "ceph",
Oct  1 09:55:06 np0005464214 practical_gates[288390]:                "ceph.crush_device_class": "",
Oct  1 09:55:06 np0005464214 practical_gates[288390]:                "ceph.encrypted": "0",
Oct  1 09:55:06 np0005464214 practical_gates[288390]:                "ceph.osd_fsid": "f5852bc7-e830-489a-b8a9-42dfbbe71426",
Oct  1 09:55:06 np0005464214 practical_gates[288390]:                "ceph.osd_id": "1",
Oct  1 09:55:06 np0005464214 practical_gates[288390]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 09:55:06 np0005464214 practical_gates[288390]:                "ceph.type": "block",
Oct  1 09:55:06 np0005464214 practical_gates[288390]:                "ceph.vdo": "0"
Oct  1 09:55:06 np0005464214 practical_gates[288390]:            },
Oct  1 09:55:06 np0005464214 practical_gates[288390]:            "type": "block",
Oct  1 09:55:06 np0005464214 practical_gates[288390]:            "vg_name": "ceph_vg1"
Oct  1 09:55:06 np0005464214 practical_gates[288390]:        }
Oct  1 09:55:06 np0005464214 practical_gates[288390]:    ],
Oct  1 09:55:06 np0005464214 practical_gates[288390]:    "2": [
Oct  1 09:55:06 np0005464214 practical_gates[288390]:        {
Oct  1 09:55:06 np0005464214 practical_gates[288390]:            "devices": [
Oct  1 09:55:06 np0005464214 practical_gates[288390]:                "/dev/loop5"
Oct  1 09:55:06 np0005464214 practical_gates[288390]:            ],
Oct  1 09:55:06 np0005464214 practical_gates[288390]:            "lv_name": "ceph_lv2",
Oct  1 09:55:06 np0005464214 practical_gates[288390]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  1 09:55:06 np0005464214 practical_gates[288390]:            "lv_size": "21470642176",
Oct  1 09:55:06 np0005464214 practical_gates[288390]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c4c937e2-a8a8-47c3-af37-fdedb6fff1f9,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 09:55:06 np0005464214 practical_gates[288390]:            "lv_uuid": "mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK",
Oct  1 09:55:06 np0005464214 practical_gates[288390]:            "name": "ceph_lv2",
Oct  1 09:55:06 np0005464214 practical_gates[288390]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  1 09:55:06 np0005464214 practical_gates[288390]:            "tags": {
Oct  1 09:55:06 np0005464214 practical_gates[288390]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  1 09:55:06 np0005464214 practical_gates[288390]:                "ceph.block_uuid": "mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK",
Oct  1 09:55:06 np0005464214 practical_gates[288390]:                "ceph.cephx_lockbox_secret": "",
Oct  1 09:55:06 np0005464214 practical_gates[288390]:                "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 09:55:06 np0005464214 practical_gates[288390]:                "ceph.cluster_name": "ceph",
Oct  1 09:55:06 np0005464214 practical_gates[288390]:                "ceph.crush_device_class": "",
Oct  1 09:55:06 np0005464214 practical_gates[288390]:                "ceph.encrypted": "0",
Oct  1 09:55:06 np0005464214 practical_gates[288390]:                "ceph.osd_fsid": "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9",
Oct  1 09:55:06 np0005464214 practical_gates[288390]:                "ceph.osd_id": "2",
Oct  1 09:55:06 np0005464214 practical_gates[288390]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 09:55:06 np0005464214 practical_gates[288390]:                "ceph.type": "block",
Oct  1 09:55:06 np0005464214 practical_gates[288390]:                "ceph.vdo": "0"
Oct  1 09:55:06 np0005464214 practical_gates[288390]:            },
Oct  1 09:55:06 np0005464214 practical_gates[288390]:            "type": "block",
Oct  1 09:55:06 np0005464214 practical_gates[288390]:            "vg_name": "ceph_vg2"
Oct  1 09:55:06 np0005464214 practical_gates[288390]:        }
Oct  1 09:55:06 np0005464214 practical_gates[288390]:    ]
Oct  1 09:55:06 np0005464214 practical_gates[288390]: }
Oct  1 09:55:06 np0005464214 systemd[1]: libpod-d0fbb16cc6f82fa4bbae55602f83685a8b47bed46268c86bb9dcf15e17d13ab3.scope: Deactivated successfully.
Oct  1 09:55:06 np0005464214 podman[288374]: 2025-10-01 13:55:06.879068887 +0000 UTC m=+0.994172857 container died d0fbb16cc6f82fa4bbae55602f83685a8b47bed46268c86bb9dcf15e17d13ab3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_gates, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 09:55:06 np0005464214 systemd[1]: var-lib-containers-storage-overlay-3764bbd1bef6015d70174e8e0f9b39a5ab415013337b6461d8f1da26e3740874-merged.mount: Deactivated successfully.
Oct  1 09:55:06 np0005464214 podman[288374]: 2025-10-01 13:55:06.972109152 +0000 UTC m=+1.087213112 container remove d0fbb16cc6f82fa4bbae55602f83685a8b47bed46268c86bb9dcf15e17d13ab3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_gates, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 09:55:06 np0005464214 systemd[1]: libpod-conmon-d0fbb16cc6f82fa4bbae55602f83685a8b47bed46268c86bb9dcf15e17d13ab3.scope: Deactivated successfully.
Oct  1 09:55:07 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1558: 305 pgs: 305 active+clean; 41 MiB data, 239 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:55:07 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e178 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 09:55:07 np0005464214 podman[288550]: 2025-10-01 13:55:07.870381682 +0000 UTC m=+0.065670333 container create 8796cdf9e4e7990f93e6f87663f1500f62ac195ee58fe4a928a36f85a4c360ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_bassi, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Oct  1 09:55:07 np0005464214 systemd[1]: Started libpod-conmon-8796cdf9e4e7990f93e6f87663f1500f62ac195ee58fe4a928a36f85a4c360ec.scope.
Oct  1 09:55:07 np0005464214 podman[288550]: 2025-10-01 13:55:07.843822026 +0000 UTC m=+0.039110727 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 09:55:07 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:55:07 np0005464214 podman[288550]: 2025-10-01 13:55:07.980598104 +0000 UTC m=+0.175886745 container init 8796cdf9e4e7990f93e6f87663f1500f62ac195ee58fe4a928a36f85a4c360ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_bassi, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 09:55:07 np0005464214 podman[288550]: 2025-10-01 13:55:07.991963717 +0000 UTC m=+0.187252347 container start 8796cdf9e4e7990f93e6f87663f1500f62ac195ee58fe4a928a36f85a4c360ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_bassi, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Oct  1 09:55:07 np0005464214 podman[288550]: 2025-10-01 13:55:07.996055846 +0000 UTC m=+0.191344497 container attach 8796cdf9e4e7990f93e6f87663f1500f62ac195ee58fe4a928a36f85a4c360ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_bassi, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Oct  1 09:55:07 np0005464214 musing_bassi[288566]: 167 167
Oct  1 09:55:07 np0005464214 systemd[1]: libpod-8796cdf9e4e7990f93e6f87663f1500f62ac195ee58fe4a928a36f85a4c360ec.scope: Deactivated successfully.
Oct  1 09:55:08 np0005464214 podman[288550]: 2025-10-01 13:55:07.999936341 +0000 UTC m=+0.195224992 container died 8796cdf9e4e7990f93e6f87663f1500f62ac195ee58fe4a928a36f85a4c360ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_bassi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct  1 09:55:08 np0005464214 systemd[1]: var-lib-containers-storage-overlay-cf9370bf2da6b2612562df555a88f54020db058eea3fa288d0733971f7c6bbb4-merged.mount: Deactivated successfully.
Oct  1 09:55:08 np0005464214 podman[288550]: 2025-10-01 13:55:08.052121243 +0000 UTC m=+0.247409894 container remove 8796cdf9e4e7990f93e6f87663f1500f62ac195ee58fe4a928a36f85a4c360ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_bassi, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 09:55:08 np0005464214 systemd[1]: libpod-conmon-8796cdf9e4e7990f93e6f87663f1500f62ac195ee58fe4a928a36f85a4c360ec.scope: Deactivated successfully.
Oct  1 09:55:08 np0005464214 podman[288590]: 2025-10-01 13:55:08.313815921 +0000 UTC m=+0.078524813 container create b8efe39df48fcf14a433183cad96219751c0292a1f119f36a69ee2ab27f40014 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_heisenberg, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct  1 09:55:08 np0005464214 systemd[1]: Started libpod-conmon-b8efe39df48fcf14a433183cad96219751c0292a1f119f36a69ee2ab27f40014.scope.
Oct  1 09:55:08 np0005464214 podman[288590]: 2025-10-01 13:55:08.282905077 +0000 UTC m=+0.047614019 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 09:55:08 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:55:08 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/abaadbe3b20f52506efb534cdf4eebf7dc14f1ff2765d98865701c656aecb8d7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 09:55:08 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/abaadbe3b20f52506efb534cdf4eebf7dc14f1ff2765d98865701c656aecb8d7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 09:55:08 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/abaadbe3b20f52506efb534cdf4eebf7dc14f1ff2765d98865701c656aecb8d7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 09:55:08 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/abaadbe3b20f52506efb534cdf4eebf7dc14f1ff2765d98865701c656aecb8d7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 09:55:08 np0005464214 podman[288590]: 2025-10-01 13:55:08.426003335 +0000 UTC m=+0.190712277 container init b8efe39df48fcf14a433183cad96219751c0292a1f119f36a69ee2ab27f40014 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_heisenberg, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 09:55:08 np0005464214 podman[288590]: 2025-10-01 13:55:08.436478209 +0000 UTC m=+0.201187071 container start b8efe39df48fcf14a433183cad96219751c0292a1f119f36a69ee2ab27f40014 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_heisenberg, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct  1 09:55:08 np0005464214 podman[288590]: 2025-10-01 13:55:08.442187761 +0000 UTC m=+0.206896703 container attach b8efe39df48fcf14a433183cad96219751c0292a1f119f36a69ee2ab27f40014 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_heisenberg, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct  1 09:55:09 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1559: 305 pgs: 305 active+clean; 41 MiB data, 239 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:55:09 np0005464214 practical_heisenberg[288607]: {
Oct  1 09:55:09 np0005464214 practical_heisenberg[288607]:    "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982": {
Oct  1 09:55:09 np0005464214 practical_heisenberg[288607]:        "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 09:55:09 np0005464214 practical_heisenberg[288607]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  1 09:55:09 np0005464214 practical_heisenberg[288607]:        "osd_id": 0,
Oct  1 09:55:09 np0005464214 practical_heisenberg[288607]:        "osd_uuid": "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982",
Oct  1 09:55:09 np0005464214 practical_heisenberg[288607]:        "type": "bluestore"
Oct  1 09:55:09 np0005464214 practical_heisenberg[288607]:    },
Oct  1 09:55:09 np0005464214 practical_heisenberg[288607]:    "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9": {
Oct  1 09:55:09 np0005464214 practical_heisenberg[288607]:        "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 09:55:09 np0005464214 practical_heisenberg[288607]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  1 09:55:09 np0005464214 practical_heisenberg[288607]:        "osd_id": 2,
Oct  1 09:55:09 np0005464214 practical_heisenberg[288607]:        "osd_uuid": "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9",
Oct  1 09:55:09 np0005464214 practical_heisenberg[288607]:        "type": "bluestore"
Oct  1 09:55:09 np0005464214 practical_heisenberg[288607]:    },
Oct  1 09:55:09 np0005464214 practical_heisenberg[288607]:    "f5852bc7-e830-489a-b8a9-42dfbbe71426": {
Oct  1 09:55:09 np0005464214 practical_heisenberg[288607]:        "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 09:55:09 np0005464214 practical_heisenberg[288607]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  1 09:55:09 np0005464214 practical_heisenberg[288607]:        "osd_id": 1,
Oct  1 09:55:09 np0005464214 practical_heisenberg[288607]:        "osd_uuid": "f5852bc7-e830-489a-b8a9-42dfbbe71426",
Oct  1 09:55:09 np0005464214 practical_heisenberg[288607]:        "type": "bluestore"
Oct  1 09:55:09 np0005464214 practical_heisenberg[288607]:    }
Oct  1 09:55:09 np0005464214 practical_heisenberg[288607]: }
Oct  1 09:55:09 np0005464214 systemd[1]: libpod-b8efe39df48fcf14a433183cad96219751c0292a1f119f36a69ee2ab27f40014.scope: Deactivated successfully.
Oct  1 09:55:09 np0005464214 systemd[1]: libpod-b8efe39df48fcf14a433183cad96219751c0292a1f119f36a69ee2ab27f40014.scope: Consumed 1.096s CPU time.
Oct  1 09:55:09 np0005464214 podman[288590]: 2025-10-01 13:55:09.525789607 +0000 UTC m=+1.290498559 container died b8efe39df48fcf14a433183cad96219751c0292a1f119f36a69ee2ab27f40014 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_heisenberg, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 09:55:09 np0005464214 systemd[1]: var-lib-containers-storage-overlay-abaadbe3b20f52506efb534cdf4eebf7dc14f1ff2765d98865701c656aecb8d7-merged.mount: Deactivated successfully.
Oct  1 09:55:09 np0005464214 podman[288590]: 2025-10-01 13:55:09.598657548 +0000 UTC m=+1.363366410 container remove b8efe39df48fcf14a433183cad96219751c0292a1f119f36a69ee2ab27f40014 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_heisenberg, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 09:55:09 np0005464214 systemd[1]: libpod-conmon-b8efe39df48fcf14a433183cad96219751c0292a1f119f36a69ee2ab27f40014.scope: Deactivated successfully.
Oct  1 09:55:09 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  1 09:55:09 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:55:09 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  1 09:55:09 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:55:09 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev 6c2c1a99-2775-401e-a802-85da88a793be does not exist
Oct  1 09:55:09 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev 2b90902a-b2fd-4bdd-ba94-db515baa9c59 does not exist
Oct  1 09:55:10 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:55:10 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:55:10 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e178 do_prune osdmap full prune enabled
Oct  1 09:55:10 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e179 e179: 3 total, 3 up, 3 in
Oct  1 09:55:10 np0005464214 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e179: 3 total, 3 up, 3 in
Oct  1 09:55:11 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1561: 305 pgs: 305 active+clean; 41 MiB data, 239 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:55:11 np0005464214 podman[288704]: 2025-10-01 13:55:11.56223401 +0000 UTC m=+0.092598991 container health_status dfa509645741d50db256c392f2c7e50ef8a1653f296eb853aefbe3d2ba4746c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20250923, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c)
Oct  1 09:55:11 np0005464214 podman[288702]: 2025-10-01 13:55:11.563050537 +0000 UTC m=+0.104890704 container health_status a1dc94033b3cdaf0c1939938a446bc6911aa0e2bf0fa0c22c3e618390e8d96e1 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20250923, io.buildah.version=1.41.3)
Oct  1 09:55:11 np0005464214 podman[288703]: 2025-10-01 13:55:11.585764701 +0000 UTC m=+0.124508509 container health_status c13c85560319b246b9406aed1b6b3015b2c424d3ffff37d0301b6634f5976b0d (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20250923, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=iscsid, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct  1 09:55:11 np0005464214 podman[288701]: 2025-10-01 13:55:11.602109981 +0000 UTC m=+0.148026437 container health_status 583cde33fa111e3f81ff9a7adf51b7bee43cd123d54d60383b2d474b3b5066ad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20250923, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=36bccb96575468ec919301205d8daa2c, container_name=ovn_controller)
Oct  1 09:55:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:55:12.322 161890 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 09:55:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:55:12.323 161890 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 09:55:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:55:12.323 161890 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 09:55:12 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 09:55:12 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e179 do_prune osdmap full prune enabled
Oct  1 09:55:12 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e180 e180: 3 total, 3 up, 3 in
Oct  1 09:55:12 np0005464214 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e180: 3 total, 3 up, 3 in
Oct  1 09:55:13 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1563: 305 pgs: 305 active+clean; 41 MiB data, 239 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 2.6 KiB/s wr, 37 op/s
Oct  1 09:55:14 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e180 do_prune osdmap full prune enabled
Oct  1 09:55:14 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e181 e181: 3 total, 3 up, 3 in
Oct  1 09:55:14 np0005464214 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e181: 3 total, 3 up, 3 in
Oct  1 09:55:14 np0005464214 ceph-osd[88455]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Oct  1 09:55:15 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1565: 305 pgs: 305 active+clean; 41 MiB data, 239 MiB used, 60 GiB / 60 GiB avail; 37 KiB/s rd, 3.5 KiB/s wr, 49 op/s
Oct  1 09:55:15 np0005464214 nova_compute[260022]: 2025-10-01 13:55:15.345 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 09:55:15 np0005464214 nova_compute[260022]: 2025-10-01 13:55:15.374 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 09:55:15 np0005464214 nova_compute[260022]: 2025-10-01 13:55:15.375 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 09:55:15 np0005464214 nova_compute[260022]: 2025-10-01 13:55:15.376 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 09:55:15 np0005464214 nova_compute[260022]: 2025-10-01 13:55:15.376 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  1 09:55:15 np0005464214 nova_compute[260022]: 2025-10-01 13:55:15.376 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 09:55:15 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e181 do_prune osdmap full prune enabled
Oct  1 09:55:15 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e182 e182: 3 total, 3 up, 3 in
Oct  1 09:55:15 np0005464214 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e182: 3 total, 3 up, 3 in
Oct  1 09:55:15 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  1 09:55:15 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2858657067' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  1 09:55:15 np0005464214 nova_compute[260022]: 2025-10-01 13:55:15.813 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.437s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 09:55:16 np0005464214 nova_compute[260022]: 2025-10-01 13:55:16.019 2 WARNING nova.virt.libvirt.driver [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  1 09:55:16 np0005464214 nova_compute[260022]: 2025-10-01 13:55:16.020 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5078MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  1 09:55:16 np0005464214 nova_compute[260022]: 2025-10-01 13:55:16.021 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 09:55:16 np0005464214 nova_compute[260022]: 2025-10-01 13:55:16.021 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 09:55:16 np0005464214 nova_compute[260022]: 2025-10-01 13:55:16.153 2 INFO nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Instance 3d91db2b-812b-47ea-a0f8-0384b4c68597 has allocations against this compute host but is not found in the database.#033[00m
Oct  1 09:55:16 np0005464214 nova_compute[260022]: 2025-10-01 13:55:16.170 2 INFO nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Instance 84424f30-80d3-425e-b60f-86809ad3c076 has allocations against this compute host but is not found in the database.#033[00m
Oct  1 09:55:16 np0005464214 nova_compute[260022]: 2025-10-01 13:55:16.171 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  1 09:55:16 np0005464214 nova_compute[260022]: 2025-10-01 13:55:16.171 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  1 09:55:16 np0005464214 nova_compute[260022]: 2025-10-01 13:55:16.343 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 09:55:16 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  1 09:55:16 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2337627271' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  1 09:55:16 np0005464214 nova_compute[260022]: 2025-10-01 13:55:16.837 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.494s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 09:55:16 np0005464214 nova_compute[260022]: 2025-10-01 13:55:16.844 2 DEBUG nova.compute.provider_tree [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Inventory has not changed in ProviderTree for provider: c1b9017d-7e6f-44ea-9ee2-bc19313d736f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  1 09:55:16 np0005464214 nova_compute[260022]: 2025-10-01 13:55:16.877 2 DEBUG nova.scheduler.client.report [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Inventory has not changed for provider c1b9017d-7e6f-44ea-9ee2-bc19313d736f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  1 09:55:16 np0005464214 nova_compute[260022]: 2025-10-01 13:55:16.879 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  1 09:55:16 np0005464214 nova_compute[260022]: 2025-10-01 13:55:16.879 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.859s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 09:55:16 np0005464214 nova_compute[260022]: 2025-10-01 13:55:16.881 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 09:55:16 np0005464214 nova_compute[260022]: 2025-10-01 13:55:16.882 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Oct  1 09:55:16 np0005464214 nova_compute[260022]: 2025-10-01 13:55:16.896 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Oct  1 09:55:16 np0005464214 nova_compute[260022]: 2025-10-01 13:55:16.896 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 09:55:17 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1567: 305 pgs: 305 active+clean; 41 MiB data, 239 MiB used, 60 GiB / 60 GiB avail; 51 KiB/s rd, 6.3 KiB/s wr, 76 op/s
Oct  1 09:55:17 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 09:55:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 09:55:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 09:55:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 09:55:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 09:55:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 09:55:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 09:55:18 np0005464214 nova_compute[260022]: 2025-10-01 13:55:18.944 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 09:55:18 np0005464214 nova_compute[260022]: 2025-10-01 13:55:18.945 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 09:55:18 np0005464214 nova_compute[260022]: 2025-10-01 13:55:18.945 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 09:55:18 np0005464214 nova_compute[260022]: 2025-10-01 13:55:18.945 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  1 09:55:19 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1568: 305 pgs: 305 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 75 KiB/s rd, 6.2 KiB/s wr, 103 op/s
Oct  1 09:55:19 np0005464214 nova_compute[260022]: 2025-10-01 13:55:19.345 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 09:55:21 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1569: 305 pgs: 305 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 59 KiB/s rd, 4.9 KiB/s wr, 82 op/s
Oct  1 09:55:21 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e182 do_prune osdmap full prune enabled
Oct  1 09:55:21 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e183 e183: 3 total, 3 up, 3 in
Oct  1 09:55:21 np0005464214 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e183: 3 total, 3 up, 3 in
Oct  1 09:55:22 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e183 do_prune osdmap full prune enabled
Oct  1 09:55:22 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e184 e184: 3 total, 3 up, 3 in
Oct  1 09:55:22 np0005464214 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e184: 3 total, 3 up, 3 in
Oct  1 09:55:22 np0005464214 nova_compute[260022]: 2025-10-01 13:55:22.345 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 09:55:22 np0005464214 nova_compute[260022]: 2025-10-01 13:55:22.346 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  1 09:55:22 np0005464214 nova_compute[260022]: 2025-10-01 13:55:22.346 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  1 09:55:22 np0005464214 nova_compute[260022]: 2025-10-01 13:55:22.374 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  1 09:55:22 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e184 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 09:55:22 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e184 do_prune osdmap full prune enabled
Oct  1 09:55:22 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e185 e185: 3 total, 3 up, 3 in
Oct  1 09:55:22 np0005464214 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e185: 3 total, 3 up, 3 in
Oct  1 09:55:23 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1573: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail; 108 KiB/s rd, 8.2 KiB/s wr, 142 op/s
Oct  1 09:55:25 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1574: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail; 43 KiB/s rd, 4.5 KiB/s wr, 59 op/s
Oct  1 09:55:25 np0005464214 nova_compute[260022]: 2025-10-01 13:55:25.344 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 09:55:25 np0005464214 nova_compute[260022]: 2025-10-01 13:55:25.345 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 09:55:25 np0005464214 nova_compute[260022]: 2025-10-01 13:55:25.345 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 09:55:25 np0005464214 nova_compute[260022]: 2025-10-01 13:55:25.345 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Oct  1 09:55:27 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1575: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail; 47 KiB/s rd, 5.2 KiB/s wr, 66 op/s
Oct  1 09:55:27 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e185 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 09:55:27 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e185 do_prune osdmap full prune enabled
Oct  1 09:55:27 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e186 e186: 3 total, 3 up, 3 in
Oct  1 09:55:27 np0005464214 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e186: 3 total, 3 up, 3 in
Oct  1 09:55:29 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1577: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail; 37 KiB/s rd, 3.3 KiB/s wr, 50 op/s
Oct  1 09:55:31 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1578: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail; 3.7 KiB/s rd, 1.1 KiB/s wr, 6 op/s
Oct  1 09:55:32 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 09:55:33 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1579: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 409 B/s wr, 4 op/s
Oct  1 09:55:35 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1580: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 409 B/s wr, 4 op/s
Oct  1 09:55:37 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1581: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail; 102 B/s rd, 0 B/s wr, 0 op/s
Oct  1 09:55:37 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 09:55:39 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1582: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:55:41 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1583: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:55:42 np0005464214 podman[288831]: 2025-10-01 13:55:42.514496615 +0000 UTC m=+0.064907289 container health_status c13c85560319b246b9406aed1b6b3015b2c424d3ffff37d0301b6634f5976b0d (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250923, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct  1 09:55:42 np0005464214 podman[288829]: 2025-10-01 13:55:42.529344199 +0000 UTC m=+0.085056812 container health_status 583cde33fa111e3f81ff9a7adf51b7bee43cd123d54d60383b2d474b3b5066ad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20250923, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct  1 09:55:42 np0005464214 podman[288832]: 2025-10-01 13:55:42.529434722 +0000 UTC m=+0.074103623 container health_status dfa509645741d50db256c392f2c7e50ef8a1653f296eb853aefbe3d2ba4746c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20250923, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct  1 09:55:42 np0005464214 podman[288830]: 2025-10-01 13:55:42.529591147 +0000 UTC m=+0.082876222 container health_status a1dc94033b3cdaf0c1939938a446bc6911aa0e2bf0fa0c22c3e618390e8d96e1 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20250923, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct  1 09:55:42 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 09:55:43 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1584: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:55:43 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:55:43.834 161890 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=16, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'd2:60:33', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '86:05:a5:2a:6f:f1'}, ipsec=False) old=SB_Global(nb_cfg=15) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  1 09:55:43 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:55:43.835 161890 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  1 09:55:43 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:55:43.846 161890 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=17, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'd2:60:33', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '86:05:a5:2a:6f:f1'}, ipsec=False) old=SB_Global(nb_cfg=16) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  1 09:55:43 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:55:43.847 161890 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  1 09:55:45 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1585: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:55:47 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1586: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:55:47 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 09:55:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 09:55:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 09:55:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 09:55:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 09:55:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 09:55:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 09:55:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] Optimize plan auto_2025-10-01_13:55:47
Oct  1 09:55:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  1 09:55:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] do_upmap
Oct  1 09:55:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] pools ['default.rgw.log', 'images', 'default.rgw.control', 'volumes', 'default.rgw.meta', '.mgr', '.rgw.root', 'cephfs.cephfs.data', 'backups', 'vms', 'cephfs.cephfs.meta']
Oct  1 09:55:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] prepared 0/10 changes
Oct  1 09:55:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  1 09:55:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  1 09:55:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  1 09:55:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  1 09:55:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  1 09:55:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  1 09:55:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  1 09:55:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  1 09:55:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  1 09:55:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  1 09:55:49 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1587: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:55:49 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:55:49.848 161890 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=7280030e-2ba6-406c-9fae-f8284a927c47, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '17'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  1 09:55:51 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1588: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:55:51 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:55:51.837 161890 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=7280030e-2ba6-406c-9fae-f8284a927c47, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '16'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  1 09:55:52 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 09:55:53 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1589: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:55:55 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1590: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:55:55 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  1 09:55:55 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2584635700' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  1 09:55:55 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  1 09:55:55 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2584635700' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  1 09:55:57 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1591: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:55:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] _maybe_adjust
Oct  1 09:55:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:55:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  1 09:55:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:55:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 09:55:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:55:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 09:55:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:55:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 09:55:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:55:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Oct  1 09:55:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:55:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct  1 09:55:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:55:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 09:55:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:55:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  1 09:55:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:55:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  1 09:55:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:55:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 09:55:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:55:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  1 09:55:57 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 09:55:59 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1592: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:56:01 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1593: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:56:02 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 09:56:03 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1594: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:56:05 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1595: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:56:05 np0005464214 nova_compute[260022]: 2025-10-01 13:56:05.363 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 09:56:07 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1596: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:56:07 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 09:56:09 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1597: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:56:10 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  1 09:56:10 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  1 09:56:10 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  1 09:56:10 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  1 09:56:10 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  1 09:56:10 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:56:10 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev ecf7dcc2-31fc-41b6-bda7-37f915ff8517 does not exist
Oct  1 09:56:10 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev 418564d9-7474-45d6-94d1-20efdc194341 does not exist
Oct  1 09:56:10 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev 9beb9b0f-6891-4835-92e8-d0595faf9297 does not exist
Oct  1 09:56:10 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  1 09:56:10 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  1 09:56:10 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  1 09:56:10 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  1 09:56:10 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  1 09:56:10 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  1 09:56:11 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1598: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:56:11 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  1 09:56:11 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:56:11 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  1 09:56:11 np0005464214 podman[289182]: 2025-10-01 13:56:11.777158741 +0000 UTC m=+0.075455075 container create df46d6819d357f771c99051ba3af7e1b183aabf01986ab52f700177fcd4ce4a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_maxwell, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 09:56:11 np0005464214 podman[289182]: 2025-10-01 13:56:11.73225254 +0000 UTC m=+0.030548894 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 09:56:11 np0005464214 systemd[1]: Started libpod-conmon-df46d6819d357f771c99051ba3af7e1b183aabf01986ab52f700177fcd4ce4a4.scope.
Oct  1 09:56:11 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:56:11 np0005464214 podman[289182]: 2025-10-01 13:56:11.907654028 +0000 UTC m=+0.205950422 container init df46d6819d357f771c99051ba3af7e1b183aabf01986ab52f700177fcd4ce4a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_maxwell, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 09:56:11 np0005464214 podman[289182]: 2025-10-01 13:56:11.917327147 +0000 UTC m=+0.215623511 container start df46d6819d357f771c99051ba3af7e1b183aabf01986ab52f700177fcd4ce4a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_maxwell, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct  1 09:56:11 np0005464214 frosty_maxwell[289198]: 167 167
Oct  1 09:56:11 np0005464214 systemd[1]: libpod-df46d6819d357f771c99051ba3af7e1b183aabf01986ab52f700177fcd4ce4a4.scope: Deactivated successfully.
Oct  1 09:56:11 np0005464214 podman[289182]: 2025-10-01 13:56:11.944571515 +0000 UTC m=+0.242867929 container attach df46d6819d357f771c99051ba3af7e1b183aabf01986ab52f700177fcd4ce4a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_maxwell, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 09:56:11 np0005464214 podman[289182]: 2025-10-01 13:56:11.944995529 +0000 UTC m=+0.243291893 container died df46d6819d357f771c99051ba3af7e1b183aabf01986ab52f700177fcd4ce4a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_maxwell, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 09:56:12 np0005464214 systemd[1]: var-lib-containers-storage-overlay-045c3317014e1d9c3e046b2bca994ea3348307d86a2d702df6b60b5e53f741f1-merged.mount: Deactivated successfully.
Oct  1 09:56:12 np0005464214 podman[289182]: 2025-10-01 13:56:12.125026084 +0000 UTC m=+0.423322448 container remove df46d6819d357f771c99051ba3af7e1b183aabf01986ab52f700177fcd4ce4a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_maxwell, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 09:56:12 np0005464214 systemd[1]: libpod-conmon-df46d6819d357f771c99051ba3af7e1b183aabf01986ab52f700177fcd4ce4a4.scope: Deactivated successfully.
Oct  1 09:56:12 np0005464214 podman[289225]: 2025-10-01 13:56:12.323256551 +0000 UTC m=+0.067472431 container create 16c3be32300bfb1811115cadd83e5e02f525b00a5c45db6a207ac63bea5e7310 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_newton, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 09:56:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:56:12.323 161890 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 09:56:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:56:12.324 161890 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 09:56:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:56:12.325 161890 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 09:56:12 np0005464214 podman[289225]: 2025-10-01 13:56:12.285486807 +0000 UTC m=+0.029702667 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 09:56:12 np0005464214 systemd[1]: Started libpod-conmon-16c3be32300bfb1811115cadd83e5e02f525b00a5c45db6a207ac63bea5e7310.scope.
Oct  1 09:56:12 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:56:12 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/10fc5750fbaf33217a80df81b53b121bb81ade819cd6ac92ddd57cb5bf126402/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 09:56:12 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/10fc5750fbaf33217a80df81b53b121bb81ade819cd6ac92ddd57cb5bf126402/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 09:56:12 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/10fc5750fbaf33217a80df81b53b121bb81ade819cd6ac92ddd57cb5bf126402/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 09:56:12 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/10fc5750fbaf33217a80df81b53b121bb81ade819cd6ac92ddd57cb5bf126402/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 09:56:12 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/10fc5750fbaf33217a80df81b53b121bb81ade819cd6ac92ddd57cb5bf126402/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  1 09:56:12 np0005464214 podman[289225]: 2025-10-01 13:56:12.457200498 +0000 UTC m=+0.201416418 container init 16c3be32300bfb1811115cadd83e5e02f525b00a5c45db6a207ac63bea5e7310 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_newton, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Oct  1 09:56:12 np0005464214 podman[289225]: 2025-10-01 13:56:12.470051477 +0000 UTC m=+0.214267357 container start 16c3be32300bfb1811115cadd83e5e02f525b00a5c45db6a207ac63bea5e7310 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_newton, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 09:56:12 np0005464214 podman[289225]: 2025-10-01 13:56:12.487181613 +0000 UTC m=+0.231397483 container attach 16c3be32300bfb1811115cadd83e5e02f525b00a5c45db6a207ac63bea5e7310 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_newton, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 09:56:12 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 09:56:13 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1599: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:56:13 np0005464214 podman[289267]: 2025-10-01 13:56:13.545237464 +0000 UTC m=+0.065624532 container health_status dfa509645741d50db256c392f2c7e50ef8a1653f296eb853aefbe3d2ba4746c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20250923, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct  1 09:56:13 np0005464214 podman[289266]: 2025-10-01 13:56:13.575691265 +0000 UTC m=+0.114827490 container health_status c13c85560319b246b9406aed1b6b3015b2c424d3ffff37d0301b6634f5976b0d (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true, container_name=iscsid, org.label-schema.build-date=20250923, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Oct  1 09:56:13 np0005464214 podman[289265]: 2025-10-01 13:56:13.576124889 +0000 UTC m=+0.113229090 container health_status a1dc94033b3cdaf0c1939938a446bc6911aa0e2bf0fa0c22c3e618390e8d96e1 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250923, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3)
Oct  1 09:56:13 np0005464214 podman[289263]: 2025-10-01 13:56:13.589999401 +0000 UTC m=+0.130928783 container health_status 583cde33fa111e3f81ff9a7adf51b7bee43cd123d54d60383b2d474b3b5066ad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20250923)
Oct  1 09:56:13 np0005464214 nervous_newton[289241]: --> passed data devices: 0 physical, 3 LVM
Oct  1 09:56:13 np0005464214 nervous_newton[289241]: --> relative data size: 1.0
Oct  1 09:56:13 np0005464214 nervous_newton[289241]: --> All data devices are unavailable
Oct  1 09:56:13 np0005464214 systemd[1]: libpod-16c3be32300bfb1811115cadd83e5e02f525b00a5c45db6a207ac63bea5e7310.scope: Deactivated successfully.
Oct  1 09:56:13 np0005464214 systemd[1]: libpod-16c3be32300bfb1811115cadd83e5e02f525b00a5c45db6a207ac63bea5e7310.scope: Consumed 1.102s CPU time.
Oct  1 09:56:13 np0005464214 podman[289225]: 2025-10-01 13:56:13.621643219 +0000 UTC m=+1.365859099 container died 16c3be32300bfb1811115cadd83e5e02f525b00a5c45db6a207ac63bea5e7310 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_newton, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef)
Oct  1 09:56:13 np0005464214 systemd[1]: var-lib-containers-storage-overlay-10fc5750fbaf33217a80df81b53b121bb81ade819cd6ac92ddd57cb5bf126402-merged.mount: Deactivated successfully.
Oct  1 09:56:13 np0005464214 podman[289225]: 2025-10-01 13:56:13.683695396 +0000 UTC m=+1.427911256 container remove 16c3be32300bfb1811115cadd83e5e02f525b00a5c45db6a207ac63bea5e7310 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_newton, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 09:56:13 np0005464214 systemd[1]: libpod-conmon-16c3be32300bfb1811115cadd83e5e02f525b00a5c45db6a207ac63bea5e7310.scope: Deactivated successfully.
Oct  1 09:56:14 np0005464214 podman[289499]: 2025-10-01 13:56:14.473948674 +0000 UTC m=+0.052152562 container create 0c69c8a73f8d998871c3814ab41ba398fe3916c8335a8943e11f3d48e422d2ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_shamir, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Oct  1 09:56:14 np0005464214 systemd[1]: Started libpod-conmon-0c69c8a73f8d998871c3814ab41ba398fe3916c8335a8943e11f3d48e422d2ba.scope.
Oct  1 09:56:14 np0005464214 podman[289499]: 2025-10-01 13:56:14.446907263 +0000 UTC m=+0.025111231 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 09:56:14 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:56:14 np0005464214 podman[289499]: 2025-10-01 13:56:14.560946416 +0000 UTC m=+0.139150344 container init 0c69c8a73f8d998871c3814ab41ba398fe3916c8335a8943e11f3d48e422d2ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_shamir, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Oct  1 09:56:14 np0005464214 podman[289499]: 2025-10-01 13:56:14.573246339 +0000 UTC m=+0.151450237 container start 0c69c8a73f8d998871c3814ab41ba398fe3916c8335a8943e11f3d48e422d2ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_shamir, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 09:56:14 np0005464214 hungry_shamir[289515]: 167 167
Oct  1 09:56:14 np0005464214 systemd[1]: libpod-0c69c8a73f8d998871c3814ab41ba398fe3916c8335a8943e11f3d48e422d2ba.scope: Deactivated successfully.
Oct  1 09:56:14 np0005464214 podman[289499]: 2025-10-01 13:56:14.57798957 +0000 UTC m=+0.156193478 container attach 0c69c8a73f8d998871c3814ab41ba398fe3916c8335a8943e11f3d48e422d2ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_shamir, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 09:56:14 np0005464214 podman[289499]: 2025-10-01 13:56:14.578703893 +0000 UTC m=+0.156907801 container died 0c69c8a73f8d998871c3814ab41ba398fe3916c8335a8943e11f3d48e422d2ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_shamir, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct  1 09:56:14 np0005464214 systemd[1]: var-lib-containers-storage-overlay-98fc521704a34fe307a438a17994d3020c72e92bc601fe93e2b477720a62dc77-merged.mount: Deactivated successfully.
Oct  1 09:56:14 np0005464214 podman[289499]: 2025-10-01 13:56:14.616448945 +0000 UTC m=+0.194652823 container remove 0c69c8a73f8d998871c3814ab41ba398fe3916c8335a8943e11f3d48e422d2ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_shamir, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct  1 09:56:14 np0005464214 systemd[1]: libpod-conmon-0c69c8a73f8d998871c3814ab41ba398fe3916c8335a8943e11f3d48e422d2ba.scope: Deactivated successfully.
Oct  1 09:56:14 np0005464214 podman[289539]: 2025-10-01 13:56:14.796398558 +0000 UTC m=+0.046441500 container create fafd01b41c8e03bf12a6af873de2ccf9cef090775f9b3ea42f319eaaaeebcf23 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_wright, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 09:56:14 np0005464214 systemd[1]: Started libpod-conmon-fafd01b41c8e03bf12a6af873de2ccf9cef090775f9b3ea42f319eaaaeebcf23.scope.
Oct  1 09:56:14 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:56:14 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf59ad53f056f1dc5e38f37ef4580502aef4cad41956036629ea11f884ef6d8a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 09:56:14 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf59ad53f056f1dc5e38f37ef4580502aef4cad41956036629ea11f884ef6d8a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 09:56:14 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf59ad53f056f1dc5e38f37ef4580502aef4cad41956036629ea11f884ef6d8a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 09:56:14 np0005464214 podman[289539]: 2025-10-01 13:56:14.77699794 +0000 UTC m=+0.027040862 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 09:56:14 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf59ad53f056f1dc5e38f37ef4580502aef4cad41956036629ea11f884ef6d8a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 09:56:14 np0005464214 podman[289539]: 2025-10-01 13:56:14.888651988 +0000 UTC m=+0.138694960 container init fafd01b41c8e03bf12a6af873de2ccf9cef090775f9b3ea42f319eaaaeebcf23 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_wright, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 09:56:14 np0005464214 podman[289539]: 2025-10-01 13:56:14.899462212 +0000 UTC m=+0.149505114 container start fafd01b41c8e03bf12a6af873de2ccf9cef090775f9b3ea42f319eaaaeebcf23 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_wright, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 09:56:14 np0005464214 podman[289539]: 2025-10-01 13:56:14.902760117 +0000 UTC m=+0.152803179 container attach fafd01b41c8e03bf12a6af873de2ccf9cef090775f9b3ea42f319eaaaeebcf23 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_wright, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 09:56:15 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1600: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:56:15 np0005464214 eloquent_wright[289557]: {
Oct  1 09:56:15 np0005464214 eloquent_wright[289557]:    "0": [
Oct  1 09:56:15 np0005464214 eloquent_wright[289557]:        {
Oct  1 09:56:15 np0005464214 eloquent_wright[289557]:            "devices": [
Oct  1 09:56:15 np0005464214 eloquent_wright[289557]:                "/dev/loop3"
Oct  1 09:56:15 np0005464214 eloquent_wright[289557]:            ],
Oct  1 09:56:15 np0005464214 eloquent_wright[289557]:            "lv_name": "ceph_lv0",
Oct  1 09:56:15 np0005464214 eloquent_wright[289557]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  1 09:56:15 np0005464214 eloquent_wright[289557]:            "lv_size": "21470642176",
Oct  1 09:56:15 np0005464214 eloquent_wright[289557]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 09:56:15 np0005464214 eloquent_wright[289557]:            "lv_uuid": "wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS",
Oct  1 09:56:15 np0005464214 eloquent_wright[289557]:            "name": "ceph_lv0",
Oct  1 09:56:15 np0005464214 eloquent_wright[289557]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  1 09:56:15 np0005464214 eloquent_wright[289557]:            "tags": {
Oct  1 09:56:15 np0005464214 eloquent_wright[289557]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  1 09:56:15 np0005464214 eloquent_wright[289557]:                "ceph.block_uuid": "wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS",
Oct  1 09:56:15 np0005464214 eloquent_wright[289557]:                "ceph.cephx_lockbox_secret": "",
Oct  1 09:56:15 np0005464214 eloquent_wright[289557]:                "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 09:56:15 np0005464214 eloquent_wright[289557]:                "ceph.cluster_name": "ceph",
Oct  1 09:56:15 np0005464214 eloquent_wright[289557]:                "ceph.crush_device_class": "",
Oct  1 09:56:15 np0005464214 eloquent_wright[289557]:                "ceph.encrypted": "0",
Oct  1 09:56:15 np0005464214 eloquent_wright[289557]:                "ceph.osd_fsid": "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982",
Oct  1 09:56:15 np0005464214 eloquent_wright[289557]:                "ceph.osd_id": "0",
Oct  1 09:56:15 np0005464214 eloquent_wright[289557]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 09:56:15 np0005464214 eloquent_wright[289557]:                "ceph.type": "block",
Oct  1 09:56:15 np0005464214 eloquent_wright[289557]:                "ceph.vdo": "0"
Oct  1 09:56:15 np0005464214 eloquent_wright[289557]:            },
Oct  1 09:56:15 np0005464214 eloquent_wright[289557]:            "type": "block",
Oct  1 09:56:15 np0005464214 eloquent_wright[289557]:            "vg_name": "ceph_vg0"
Oct  1 09:56:15 np0005464214 eloquent_wright[289557]:        }
Oct  1 09:56:15 np0005464214 eloquent_wright[289557]:    ],
Oct  1 09:56:15 np0005464214 eloquent_wright[289557]:    "1": [
Oct  1 09:56:15 np0005464214 eloquent_wright[289557]:        {
Oct  1 09:56:15 np0005464214 eloquent_wright[289557]:            "devices": [
Oct  1 09:56:15 np0005464214 eloquent_wright[289557]:                "/dev/loop4"
Oct  1 09:56:15 np0005464214 eloquent_wright[289557]:            ],
Oct  1 09:56:15 np0005464214 eloquent_wright[289557]:            "lv_name": "ceph_lv1",
Oct  1 09:56:15 np0005464214 eloquent_wright[289557]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  1 09:56:15 np0005464214 eloquent_wright[289557]:            "lv_size": "21470642176",
Oct  1 09:56:15 np0005464214 eloquent_wright[289557]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f5852bc7-e830-489a-b8a9-42dfbbe71426,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 09:56:15 np0005464214 eloquent_wright[289557]:            "lv_uuid": "BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY",
Oct  1 09:56:15 np0005464214 eloquent_wright[289557]:            "name": "ceph_lv1",
Oct  1 09:56:15 np0005464214 eloquent_wright[289557]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  1 09:56:15 np0005464214 eloquent_wright[289557]:            "tags": {
Oct  1 09:56:15 np0005464214 eloquent_wright[289557]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  1 09:56:15 np0005464214 eloquent_wright[289557]:                "ceph.block_uuid": "BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY",
Oct  1 09:56:15 np0005464214 eloquent_wright[289557]:                "ceph.cephx_lockbox_secret": "",
Oct  1 09:56:15 np0005464214 eloquent_wright[289557]:                "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 09:56:15 np0005464214 eloquent_wright[289557]:                "ceph.cluster_name": "ceph",
Oct  1 09:56:15 np0005464214 eloquent_wright[289557]:                "ceph.crush_device_class": "",
Oct  1 09:56:15 np0005464214 eloquent_wright[289557]:                "ceph.encrypted": "0",
Oct  1 09:56:15 np0005464214 eloquent_wright[289557]:                "ceph.osd_fsid": "f5852bc7-e830-489a-b8a9-42dfbbe71426",
Oct  1 09:56:15 np0005464214 eloquent_wright[289557]:                "ceph.osd_id": "1",
Oct  1 09:56:15 np0005464214 eloquent_wright[289557]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 09:56:15 np0005464214 eloquent_wright[289557]:                "ceph.type": "block",
Oct  1 09:56:15 np0005464214 eloquent_wright[289557]:                "ceph.vdo": "0"
Oct  1 09:56:15 np0005464214 eloquent_wright[289557]:            },
Oct  1 09:56:15 np0005464214 eloquent_wright[289557]:            "type": "block",
Oct  1 09:56:15 np0005464214 eloquent_wright[289557]:            "vg_name": "ceph_vg1"
Oct  1 09:56:15 np0005464214 eloquent_wright[289557]:        }
Oct  1 09:56:15 np0005464214 eloquent_wright[289557]:    ],
Oct  1 09:56:15 np0005464214 eloquent_wright[289557]:    "2": [
Oct  1 09:56:15 np0005464214 eloquent_wright[289557]:        {
Oct  1 09:56:15 np0005464214 eloquent_wright[289557]:            "devices": [
Oct  1 09:56:15 np0005464214 eloquent_wright[289557]:                "/dev/loop5"
Oct  1 09:56:15 np0005464214 eloquent_wright[289557]:            ],
Oct  1 09:56:15 np0005464214 eloquent_wright[289557]:            "lv_name": "ceph_lv2",
Oct  1 09:56:15 np0005464214 eloquent_wright[289557]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  1 09:56:15 np0005464214 eloquent_wright[289557]:            "lv_size": "21470642176",
Oct  1 09:56:15 np0005464214 eloquent_wright[289557]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c4c937e2-a8a8-47c3-af37-fdedb6fff1f9,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 09:56:15 np0005464214 eloquent_wright[289557]:            "lv_uuid": "mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK",
Oct  1 09:56:15 np0005464214 eloquent_wright[289557]:            "name": "ceph_lv2",
Oct  1 09:56:15 np0005464214 eloquent_wright[289557]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  1 09:56:15 np0005464214 eloquent_wright[289557]:            "tags": {
Oct  1 09:56:15 np0005464214 eloquent_wright[289557]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  1 09:56:15 np0005464214 eloquent_wright[289557]:                "ceph.block_uuid": "mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK",
Oct  1 09:56:15 np0005464214 eloquent_wright[289557]:                "ceph.cephx_lockbox_secret": "",
Oct  1 09:56:15 np0005464214 eloquent_wright[289557]:                "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 09:56:15 np0005464214 eloquent_wright[289557]:                "ceph.cluster_name": "ceph",
Oct  1 09:56:15 np0005464214 eloquent_wright[289557]:                "ceph.crush_device_class": "",
Oct  1 09:56:15 np0005464214 eloquent_wright[289557]:                "ceph.encrypted": "0",
Oct  1 09:56:15 np0005464214 eloquent_wright[289557]:                "ceph.osd_fsid": "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9",
Oct  1 09:56:15 np0005464214 eloquent_wright[289557]:                "ceph.osd_id": "2",
Oct  1 09:56:15 np0005464214 eloquent_wright[289557]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 09:56:15 np0005464214 eloquent_wright[289557]:                "ceph.type": "block",
Oct  1 09:56:15 np0005464214 eloquent_wright[289557]:                "ceph.vdo": "0"
Oct  1 09:56:15 np0005464214 eloquent_wright[289557]:            },
Oct  1 09:56:15 np0005464214 eloquent_wright[289557]:            "type": "block",
Oct  1 09:56:15 np0005464214 eloquent_wright[289557]:            "vg_name": "ceph_vg2"
Oct  1 09:56:15 np0005464214 eloquent_wright[289557]:        }
Oct  1 09:56:15 np0005464214 eloquent_wright[289557]:    ]
Oct  1 09:56:15 np0005464214 eloquent_wright[289557]: }
Oct  1 09:56:15 np0005464214 systemd[1]: libpod-fafd01b41c8e03bf12a6af873de2ccf9cef090775f9b3ea42f319eaaaeebcf23.scope: Deactivated successfully.
Oct  1 09:56:15 np0005464214 podman[289539]: 2025-10-01 13:56:15.635292717 +0000 UTC m=+0.885335649 container died fafd01b41c8e03bf12a6af873de2ccf9cef090775f9b3ea42f319eaaaeebcf23 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_wright, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 09:56:15 np0005464214 systemd[1]: var-lib-containers-storage-overlay-cf59ad53f056f1dc5e38f37ef4580502aef4cad41956036629ea11f884ef6d8a-merged.mount: Deactivated successfully.
Oct  1 09:56:15 np0005464214 podman[289539]: 2025-10-01 13:56:15.707416085 +0000 UTC m=+0.957458987 container remove fafd01b41c8e03bf12a6af873de2ccf9cef090775f9b3ea42f319eaaaeebcf23 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_wright, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 09:56:15 np0005464214 systemd[1]: libpod-conmon-fafd01b41c8e03bf12a6af873de2ccf9cef090775f9b3ea42f319eaaaeebcf23.scope: Deactivated successfully.
Oct  1 09:56:16 np0005464214 nova_compute[260022]: 2025-10-01 13:56:16.344 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 09:56:16 np0005464214 podman[289717]: 2025-10-01 13:56:16.447228806 +0000 UTC m=+0.063086730 container create c90d75a053e5f1bbcc2489dadfadd7f77925928bcc185fdf30c183366b6f79d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_kapitsa, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 09:56:16 np0005464214 systemd[1]: Started libpod-conmon-c90d75a053e5f1bbcc2489dadfadd7f77925928bcc185fdf30c183366b6f79d5.scope.
Oct  1 09:56:16 np0005464214 podman[289717]: 2025-10-01 13:56:16.422318843 +0000 UTC m=+0.038176777 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 09:56:16 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:56:16 np0005464214 podman[289717]: 2025-10-01 13:56:16.548899316 +0000 UTC m=+0.164757260 container init c90d75a053e5f1bbcc2489dadfadd7f77925928bcc185fdf30c183366b6f79d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_kapitsa, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 09:56:16 np0005464214 podman[289717]: 2025-10-01 13:56:16.562387736 +0000 UTC m=+0.178245660 container start c90d75a053e5f1bbcc2489dadfadd7f77925928bcc185fdf30c183366b6f79d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_kapitsa, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Oct  1 09:56:16 np0005464214 determined_kapitsa[289733]: 167 167
Oct  1 09:56:16 np0005464214 systemd[1]: libpod-c90d75a053e5f1bbcc2489dadfadd7f77925928bcc185fdf30c183366b6f79d5.scope: Deactivated successfully.
Oct  1 09:56:16 np0005464214 podman[289717]: 2025-10-01 13:56:16.568548252 +0000 UTC m=+0.184406156 container attach c90d75a053e5f1bbcc2489dadfadd7f77925928bcc185fdf30c183366b6f79d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_kapitsa, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Oct  1 09:56:16 np0005464214 podman[289717]: 2025-10-01 13:56:16.568986486 +0000 UTC m=+0.184844400 container died c90d75a053e5f1bbcc2489dadfadd7f77925928bcc185fdf30c183366b6f79d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_kapitsa, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct  1 09:56:16 np0005464214 systemd[1]: var-lib-containers-storage-overlay-0f5617bbb45820dabec968bc5812e3d4ef1dbf753a0b23d0daf2402084fb9e7f-merged.mount: Deactivated successfully.
Oct  1 09:56:16 np0005464214 podman[289717]: 2025-10-01 13:56:16.620073774 +0000 UTC m=+0.235931708 container remove c90d75a053e5f1bbcc2489dadfadd7f77925928bcc185fdf30c183366b6f79d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_kapitsa, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Oct  1 09:56:16 np0005464214 systemd[1]: libpod-conmon-c90d75a053e5f1bbcc2489dadfadd7f77925928bcc185fdf30c183366b6f79d5.scope: Deactivated successfully.
Oct  1 09:56:16 np0005464214 podman[289757]: 2025-10-01 13:56:16.87762224 +0000 UTC m=+0.067400148 container create 83fd72ee3457aee744d722946f5172e4733bee444c2eed8899e8c0227fb37e50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_kapitsa, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 09:56:16 np0005464214 systemd[1]: Started libpod-conmon-83fd72ee3457aee744d722946f5172e4733bee444c2eed8899e8c0227fb37e50.scope.
Oct  1 09:56:16 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:56:16 np0005464214 podman[289757]: 2025-10-01 13:56:16.850437983 +0000 UTC m=+0.040215931 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 09:56:16 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba1aefebef6ca789265be85ae1dc1f86179be2c83ceb40a7e4f45a64b5584d98/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 09:56:16 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba1aefebef6ca789265be85ae1dc1f86179be2c83ceb40a7e4f45a64b5584d98/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 09:56:16 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba1aefebef6ca789265be85ae1dc1f86179be2c83ceb40a7e4f45a64b5584d98/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 09:56:16 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba1aefebef6ca789265be85ae1dc1f86179be2c83ceb40a7e4f45a64b5584d98/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 09:56:16 np0005464214 podman[289757]: 2025-10-01 13:56:16.970293023 +0000 UTC m=+0.160070971 container init 83fd72ee3457aee744d722946f5172e4733bee444c2eed8899e8c0227fb37e50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_kapitsa, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 09:56:16 np0005464214 podman[289757]: 2025-10-01 13:56:16.984310639 +0000 UTC m=+0.174088547 container start 83fd72ee3457aee744d722946f5172e4733bee444c2eed8899e8c0227fb37e50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_kapitsa, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct  1 09:56:16 np0005464214 podman[289757]: 2025-10-01 13:56:16.988716299 +0000 UTC m=+0.178494207 container attach 83fd72ee3457aee744d722946f5172e4733bee444c2eed8899e8c0227fb37e50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_kapitsa, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 09:56:17 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1601: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:56:17 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 09:56:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 09:56:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 09:56:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 09:56:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 09:56:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 09:56:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 09:56:17 np0005464214 sharp_kapitsa[289773]: {
Oct  1 09:56:17 np0005464214 sharp_kapitsa[289773]:    "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982": {
Oct  1 09:56:17 np0005464214 sharp_kapitsa[289773]:        "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 09:56:17 np0005464214 sharp_kapitsa[289773]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  1 09:56:17 np0005464214 sharp_kapitsa[289773]:        "osd_id": 0,
Oct  1 09:56:17 np0005464214 sharp_kapitsa[289773]:        "osd_uuid": "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982",
Oct  1 09:56:17 np0005464214 sharp_kapitsa[289773]:        "type": "bluestore"
Oct  1 09:56:17 np0005464214 sharp_kapitsa[289773]:    },
Oct  1 09:56:17 np0005464214 sharp_kapitsa[289773]:    "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9": {
Oct  1 09:56:17 np0005464214 sharp_kapitsa[289773]:        "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 09:56:17 np0005464214 sharp_kapitsa[289773]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  1 09:56:17 np0005464214 sharp_kapitsa[289773]:        "osd_id": 2,
Oct  1 09:56:17 np0005464214 sharp_kapitsa[289773]:        "osd_uuid": "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9",
Oct  1 09:56:17 np0005464214 sharp_kapitsa[289773]:        "type": "bluestore"
Oct  1 09:56:17 np0005464214 sharp_kapitsa[289773]:    },
Oct  1 09:56:17 np0005464214 sharp_kapitsa[289773]:    "f5852bc7-e830-489a-b8a9-42dfbbe71426": {
Oct  1 09:56:17 np0005464214 sharp_kapitsa[289773]:        "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 09:56:17 np0005464214 sharp_kapitsa[289773]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  1 09:56:17 np0005464214 sharp_kapitsa[289773]:        "osd_id": 1,
Oct  1 09:56:17 np0005464214 sharp_kapitsa[289773]:        "osd_uuid": "f5852bc7-e830-489a-b8a9-42dfbbe71426",
Oct  1 09:56:17 np0005464214 sharp_kapitsa[289773]:        "type": "bluestore"
Oct  1 09:56:17 np0005464214 sharp_kapitsa[289773]:    }
Oct  1 09:56:17 np0005464214 sharp_kapitsa[289773]: }
Oct  1 09:56:17 np0005464214 systemd[1]: libpod-83fd72ee3457aee744d722946f5172e4733bee444c2eed8899e8c0227fb37e50.scope: Deactivated successfully.
Oct  1 09:56:17 np0005464214 podman[289757]: 2025-10-01 13:56:17.981338145 +0000 UTC m=+1.171116093 container died 83fd72ee3457aee744d722946f5172e4733bee444c2eed8899e8c0227fb37e50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_kapitsa, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 09:56:17 np0005464214 systemd[1]: libpod-83fd72ee3457aee744d722946f5172e4733bee444c2eed8899e8c0227fb37e50.scope: Consumed 1.009s CPU time.
Oct  1 09:56:18 np0005464214 systemd[1]: var-lib-containers-storage-overlay-ba1aefebef6ca789265be85ae1dc1f86179be2c83ceb40a7e4f45a64b5584d98-merged.mount: Deactivated successfully.
Oct  1 09:56:18 np0005464214 podman[289757]: 2025-10-01 13:56:18.056420349 +0000 UTC m=+1.246198257 container remove 83fd72ee3457aee744d722946f5172e4733bee444c2eed8899e8c0227fb37e50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_kapitsa, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 09:56:18 np0005464214 systemd[1]: libpod-conmon-83fd72ee3457aee744d722946f5172e4733bee444c2eed8899e8c0227fb37e50.scope: Deactivated successfully.
Oct  1 09:56:18 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  1 09:56:18 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:56:18 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  1 09:56:18 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:56:18 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev 9d9d4284-f04b-4a8a-904c-6dcd5f4259d8 does not exist
Oct  1 09:56:18 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev db0eae49-eca7-4b64-a018-2e549bbdc49c does not exist
Oct  1 09:56:18 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:56:18 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:56:18 np0005464214 nova_compute[260022]: 2025-10-01 13:56:18.902 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 09:56:18 np0005464214 nova_compute[260022]: 2025-10-01 13:56:18.904 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 09:56:18 np0005464214 nova_compute[260022]: 2025-10-01 13:56:18.904 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 09:56:18 np0005464214 nova_compute[260022]: 2025-10-01 13:56:18.904 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  1 09:56:18 np0005464214 nova_compute[260022]: 2025-10-01 13:56:18.905 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 09:56:19 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1602: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:56:19 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  1 09:56:19 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/8267394' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  1 09:56:19 np0005464214 nova_compute[260022]: 2025-10-01 13:56:19.317 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.412s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 09:56:19 np0005464214 nova_compute[260022]: 2025-10-01 13:56:19.473 2 WARNING nova.virt.libvirt.driver [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  1 09:56:19 np0005464214 nova_compute[260022]: 2025-10-01 13:56:19.475 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5046MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  1 09:56:19 np0005464214 nova_compute[260022]: 2025-10-01 13:56:19.476 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 09:56:19 np0005464214 nova_compute[260022]: 2025-10-01 13:56:19.477 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 09:56:19 np0005464214 nova_compute[260022]: 2025-10-01 13:56:19.583 2 INFO nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Instance 3d91db2b-812b-47ea-a0f8-0384b4c68597 has allocations against this compute host but is not found in the database.#033[00m
Oct  1 09:56:19 np0005464214 nova_compute[260022]: 2025-10-01 13:56:19.602 2 INFO nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Instance 84424f30-80d3-425e-b60f-86809ad3c076 has allocations against this compute host but is not found in the database.#033[00m
Oct  1 09:56:19 np0005464214 nova_compute[260022]: 2025-10-01 13:56:19.603 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  1 09:56:19 np0005464214 nova_compute[260022]: 2025-10-01 13:56:19.603 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  1 09:56:19 np0005464214 nova_compute[260022]: 2025-10-01 13:56:19.762 2 DEBUG nova.scheduler.client.report [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Refreshing inventories for resource provider c1b9017d-7e6f-44ea-9ee2-bc19313d736f _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Oct  1 09:56:19 np0005464214 nova_compute[260022]: 2025-10-01 13:56:19.847 2 DEBUG nova.scheduler.client.report [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Updating ProviderTree inventory for provider c1b9017d-7e6f-44ea-9ee2-bc19313d736f from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Oct  1 09:56:19 np0005464214 nova_compute[260022]: 2025-10-01 13:56:19.848 2 DEBUG nova.compute.provider_tree [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Updating inventory in ProviderTree for provider c1b9017d-7e6f-44ea-9ee2-bc19313d736f with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Oct  1 09:56:19 np0005464214 nova_compute[260022]: 2025-10-01 13:56:19.863 2 DEBUG nova.scheduler.client.report [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Refreshing aggregate associations for resource provider c1b9017d-7e6f-44ea-9ee2-bc19313d736f, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Oct  1 09:56:19 np0005464214 nova_compute[260022]: 2025-10-01 13:56:19.881 2 DEBUG nova.scheduler.client.report [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Refreshing trait associations for resource provider c1b9017d-7e6f-44ea-9ee2-bc19313d736f, traits: HW_CPU_X86_SSE42,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_BMI,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_SSE2,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_BMI2,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_RESCUE_BFV,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NODE,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_F16C,HW_CPU_X86_MMX,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_SSE41,COMPUTE_DEVICE_TAGGING,COMPUTE_VOLUME_EXTEND,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_AVX,HW_CPU_X86_ABM,HW_CPU_X86_CLMUL,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_SVM,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_SECURITY_TPM_2_0,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_STORAGE_BUS_FDC,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_AESNI,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_ACCELERATORS,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_AMD_SVM,HW_CPU_X86_AVX2,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_FMA3,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_SSE,HW_CPU_X86_SHA,HW_CPU_X86_SSSE3,HW_CPU_X86_SSE4A _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Oct  1 09:56:19 np0005464214 nova_compute[260022]: 2025-10-01 13:56:19.930 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 09:56:20 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  1 09:56:20 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2866529483' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  1 09:56:20 np0005464214 nova_compute[260022]: 2025-10-01 13:56:20.353 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.423s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 09:56:20 np0005464214 nova_compute[260022]: 2025-10-01 13:56:20.358 2 DEBUG nova.compute.provider_tree [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Inventory has not changed in ProviderTree for provider: c1b9017d-7e6f-44ea-9ee2-bc19313d736f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  1 09:56:20 np0005464214 nova_compute[260022]: 2025-10-01 13:56:20.408 2 DEBUG nova.scheduler.client.report [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Inventory has not changed for provider c1b9017d-7e6f-44ea-9ee2-bc19313d736f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  1 09:56:20 np0005464214 nova_compute[260022]: 2025-10-01 13:56:20.411 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  1 09:56:20 np0005464214 nova_compute[260022]: 2025-10-01 13:56:20.411 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.935s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 09:56:21 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1603: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:56:22 np0005464214 nova_compute[260022]: 2025-10-01 13:56:22.414 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 09:56:22 np0005464214 nova_compute[260022]: 2025-10-01 13:56:22.414 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 09:56:22 np0005464214 nova_compute[260022]: 2025-10-01 13:56:22.415 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  1 09:56:22 np0005464214 nova_compute[260022]: 2025-10-01 13:56:22.415 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  1 09:56:22 np0005464214 nova_compute[260022]: 2025-10-01 13:56:22.439 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  1 09:56:22 np0005464214 nova_compute[260022]: 2025-10-01 13:56:22.439 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 09:56:22 np0005464214 nova_compute[260022]: 2025-10-01 13:56:22.439 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 09:56:22 np0005464214 nova_compute[260022]: 2025-10-01 13:56:22.440 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 09:56:22 np0005464214 nova_compute[260022]: 2025-10-01 13:56:22.440 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  1 09:56:22 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 09:56:23 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1604: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:56:25 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1605: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:56:25 np0005464214 nova_compute[260022]: 2025-10-01 13:56:25.346 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 09:56:25 np0005464214 nova_compute[260022]: 2025-10-01 13:56:25.346 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 09:56:27 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1606: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:56:27 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 09:56:29 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1607: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:56:29 np0005464214 nova_compute[260022]: 2025-10-01 13:56:29.340 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 09:56:31 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1608: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:56:32 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 09:56:33 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1609: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:56:35 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1610: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:56:37 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1611: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:56:37 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 09:56:39 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1612: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:56:41 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1613: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:56:42 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 09:56:43 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1614: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:56:43 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:56:43.949 161890 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=18, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'd2:60:33', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '86:05:a5:2a:6f:f1'}, ipsec=False) old=SB_Global(nb_cfg=17) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  1 09:56:43 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:56:43.951 161890 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  1 09:56:44 np0005464214 ceph-mon[74802]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #69. Immutable memtables: 0.
Oct  1 09:56:44 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:56:44.169476) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  1 09:56:44 np0005464214 ceph-mon[74802]: rocksdb: [db/flush_job.cc:856] [default] [JOB 37] Flushing memtable with next log file: 69
Oct  1 09:56:44 np0005464214 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759327004169586, "job": 37, "event": "flush_started", "num_memtables": 1, "num_entries": 2108, "num_deletes": 258, "total_data_size": 3448727, "memory_usage": 3505120, "flush_reason": "Manual Compaction"}
Oct  1 09:56:44 np0005464214 ceph-mon[74802]: rocksdb: [db/flush_job.cc:885] [default] [JOB 37] Level-0 flush table #70: started
Oct  1 09:56:44 np0005464214 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759327004200067, "cf_name": "default", "job": 37, "event": "table_file_creation", "file_number": 70, "file_size": 3380389, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 30458, "largest_seqno": 32565, "table_properties": {"data_size": 3370684, "index_size": 6199, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2437, "raw_key_size": 19568, "raw_average_key_size": 20, "raw_value_size": 3351418, "raw_average_value_size": 3505, "num_data_blocks": 274, "num_entries": 956, "num_filter_entries": 956, "num_deletions": 258, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759326787, "oldest_key_time": 1759326787, "file_creation_time": 1759327004, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e1fb0bd7-c6bf-40f2-8bfb-ffd039289f42", "db_session_id": "NJZTWL88H5HSB4Q4NEC9", "orig_file_number": 70, "seqno_to_time_mapping": "N/A"}}
Oct  1 09:56:44 np0005464214 ceph-mon[74802]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 37] Flush lasted 30667 microseconds, and 9462 cpu microseconds.
Oct  1 09:56:44 np0005464214 ceph-mon[74802]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  1 09:56:44 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:56:44.200156) [db/flush_job.cc:967] [default] [JOB 37] Level-0 flush table #70: 3380389 bytes OK
Oct  1 09:56:44 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:56:44.200182) [db/memtable_list.cc:519] [default] Level-0 commit table #70 started
Oct  1 09:56:44 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:56:44.201823) [db/memtable_list.cc:722] [default] Level-0 commit table #70: memtable #1 done
Oct  1 09:56:44 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:56:44.201840) EVENT_LOG_v1 {"time_micros": 1759327004201835, "job": 37, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  1 09:56:44 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:56:44.201863) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  1 09:56:44 np0005464214 ceph-mon[74802]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 37] Try to delete WAL files size 3439869, prev total WAL file size 3439869, number of live WAL files 2.
Oct  1 09:56:44 np0005464214 ceph-mon[74802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000066.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  1 09:56:44 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:56:44.202990) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032373631' seq:72057594037927935, type:22 .. '7061786F730033303133' seq:0, type:0; will stop at (end)
Oct  1 09:56:44 np0005464214 ceph-mon[74802]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 38] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  1 09:56:44 np0005464214 ceph-mon[74802]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 37 Base level 0, inputs: [70(3301KB)], [68(7446KB)]
Oct  1 09:56:44 np0005464214 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759327004203050, "job": 38, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [70], "files_L6": [68], "score": -1, "input_data_size": 11005967, "oldest_snapshot_seqno": -1}
Oct  1 09:56:44 np0005464214 ceph-mon[74802]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 38] Generated table #71: 5488 keys, 9255872 bytes, temperature: kUnknown
Oct  1 09:56:44 np0005464214 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759327004272831, "cf_name": "default", "job": 38, "event": "table_file_creation", "file_number": 71, "file_size": 9255872, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9217310, "index_size": 23732, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13765, "raw_key_size": 137595, "raw_average_key_size": 25, "raw_value_size": 9116280, "raw_average_value_size": 1661, "num_data_blocks": 973, "num_entries": 5488, "num_filter_entries": 5488, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759324077, "oldest_key_time": 0, "file_creation_time": 1759327004, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e1fb0bd7-c6bf-40f2-8bfb-ffd039289f42", "db_session_id": "NJZTWL88H5HSB4Q4NEC9", "orig_file_number": 71, "seqno_to_time_mapping": "N/A"}}
Oct  1 09:56:44 np0005464214 ceph-mon[74802]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  1 09:56:44 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:56:44.273207) [db/compaction/compaction_job.cc:1663] [default] [JOB 38] Compacted 1@0 + 1@6 files to L6 => 9255872 bytes
Oct  1 09:56:44 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:56:44.274509) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 157.5 rd, 132.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.2, 7.3 +0.0 blob) out(8.8 +0.0 blob), read-write-amplify(6.0) write-amplify(2.7) OK, records in: 6018, records dropped: 530 output_compression: NoCompression
Oct  1 09:56:44 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:56:44.274541) EVENT_LOG_v1 {"time_micros": 1759327004274524, "job": 38, "event": "compaction_finished", "compaction_time_micros": 69877, "compaction_time_cpu_micros": 41478, "output_level": 6, "num_output_files": 1, "total_output_size": 9255872, "num_input_records": 6018, "num_output_records": 5488, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  1 09:56:44 np0005464214 ceph-mon[74802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000070.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  1 09:56:44 np0005464214 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759327004276048, "job": 38, "event": "table_file_deletion", "file_number": 70}
Oct  1 09:56:44 np0005464214 ceph-mon[74802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000068.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  1 09:56:44 np0005464214 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759327004279087, "job": 38, "event": "table_file_deletion", "file_number": 68}
Oct  1 09:56:44 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:56:44.202890) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 09:56:44 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:56:44.279342) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 09:56:44 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:56:44.279352) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 09:56:44 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:56:44.279355) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 09:56:44 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:56:44.279359) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 09:56:44 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:56:44.279362) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 09:56:44 np0005464214 podman[289918]: 2025-10-01 13:56:44.525672763 +0000 UTC m=+0.067134150 container health_status dfa509645741d50db256c392f2c7e50ef8a1653f296eb853aefbe3d2ba4746c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20250923, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  1 09:56:44 np0005464214 podman[289917]: 2025-10-01 13:56:44.528990549 +0000 UTC m=+0.072586344 container health_status c13c85560319b246b9406aed1b6b3015b2c424d3ffff37d0301b6634f5976b0d (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250923, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, container_name=iscsid, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true, config_id=iscsid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Oct  1 09:56:44 np0005464214 podman[289916]: 2025-10-01 13:56:44.532667215 +0000 UTC m=+0.080464214 container health_status a1dc94033b3cdaf0c1939938a446bc6911aa0e2bf0fa0c22c3e618390e8d96e1 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.build-date=20250923, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=36bccb96575468ec919301205d8daa2c, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct  1 09:56:44 np0005464214 podman[289915]: 2025-10-01 13:56:44.561415081 +0000 UTC m=+0.108896380 container health_status 583cde33fa111e3f81ff9a7adf51b7bee43cd123d54d60383b2d474b3b5066ad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250923, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct  1 09:56:45 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1615: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:56:47 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1616: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:56:47 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 09:56:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 09:56:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 09:56:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 09:56:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 09:56:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 09:56:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 09:56:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] Optimize plan auto_2025-10-01_13:56:47
Oct  1 09:56:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  1 09:56:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] do_upmap
Oct  1 09:56:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] pools ['.rgw.root', '.mgr', 'vms', 'default.rgw.control', 'cephfs.cephfs.meta', 'default.rgw.log', 'images', 'default.rgw.meta', 'backups', 'volumes', 'cephfs.cephfs.data']
Oct  1 09:56:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] prepared 0/10 changes
Oct  1 09:56:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  1 09:56:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  1 09:56:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  1 09:56:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  1 09:56:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  1 09:56:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  1 09:56:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  1 09:56:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  1 09:56:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  1 09:56:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  1 09:56:49 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1617: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:56:51 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1618: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:56:51 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:56:51.953 161890 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=7280030e-2ba6-406c-9fae-f8284a927c47, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '18'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  1 09:56:52 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 09:56:53 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1619: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:56:55 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1620: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:56:55 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  1 09:56:55 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1765015405' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  1 09:56:55 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  1 09:56:55 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1765015405' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  1 09:56:57 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1621: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:56:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] _maybe_adjust
Oct  1 09:56:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:56:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  1 09:56:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:56:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 09:56:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:56:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 09:56:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:56:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 09:56:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:56:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Oct  1 09:56:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:56:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct  1 09:56:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:56:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 09:56:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:56:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  1 09:56:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:56:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  1 09:56:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:56:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 09:56:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:56:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  1 09:56:57 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 09:56:59 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1622: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:57:01 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1623: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:57:02 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 09:57:03 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1624: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:57:05 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1625: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:57:07 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1626: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:57:07 np0005464214 nova_compute[260022]: 2025-10-01 13:57:07.345 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 09:57:07 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 09:57:09 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1627: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:57:11 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1628: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:57:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:57:12.324 161890 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 09:57:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:57:12.324 161890 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 09:57:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:57:12.325 161890 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 09:57:12 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 09:57:13 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1629: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:57:15 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1630: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:57:15 np0005464214 podman[290000]: 2025-10-01 13:57:15.512674933 +0000 UTC m=+0.067196202 container health_status a1dc94033b3cdaf0c1939938a446bc6911aa0e2bf0fa0c22c3e618390e8d96e1 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, config_id=multipathd, org.label-schema.build-date=20250923, container_name=multipathd, managed_by=edpm_ansible, tcib_build_tag=36bccb96575468ec919301205d8daa2c, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct  1 09:57:15 np0005464214 podman[290010]: 2025-10-01 13:57:15.512933421 +0000 UTC m=+0.055758207 container health_status dfa509645741d50db256c392f2c7e50ef8a1653f296eb853aefbe3d2ba4746c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20250923, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=36bccb96575468ec919301205d8daa2c)
Oct  1 09:57:15 np0005464214 podman[290005]: 2025-10-01 13:57:15.527751884 +0000 UTC m=+0.073642058 container health_status c13c85560319b246b9406aed1b6b3015b2c424d3ffff37d0301b6634f5976b0d (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250923, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_id=iscsid, io.buildah.version=1.41.3)
Oct  1 09:57:15 np0005464214 podman[289999]: 2025-10-01 13:57:15.551281334 +0000 UTC m=+0.112023160 container health_status 583cde33fa111e3f81ff9a7adf51b7bee43cd123d54d60383b2d474b3b5066ad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250923, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct  1 09:57:17 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1631: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:57:17 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 09:57:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 09:57:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 09:57:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 09:57:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 09:57:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 09:57:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 09:57:18 np0005464214 nova_compute[260022]: 2025-10-01 13:57:18.345 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 09:57:18 np0005464214 nova_compute[260022]: 2025-10-01 13:57:18.345 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  1 09:57:18 np0005464214 nova_compute[260022]: 2025-10-01 13:57:18.346 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 09:57:18 np0005464214 nova_compute[260022]: 2025-10-01 13:57:18.376 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 09:57:18 np0005464214 nova_compute[260022]: 2025-10-01 13:57:18.376 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 09:57:18 np0005464214 nova_compute[260022]: 2025-10-01 13:57:18.377 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 09:57:18 np0005464214 nova_compute[260022]: 2025-10-01 13:57:18.377 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  1 09:57:18 np0005464214 nova_compute[260022]: 2025-10-01 13:57:18.378 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 09:57:18 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  1 09:57:18 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3521539777' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  1 09:57:18 np0005464214 nova_compute[260022]: 2025-10-01 13:57:18.846 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.468s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 09:57:19 np0005464214 nova_compute[260022]: 2025-10-01 13:57:19.054 2 WARNING nova.virt.libvirt.driver [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  1 09:57:19 np0005464214 nova_compute[260022]: 2025-10-01 13:57:19.056 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5095MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  1 09:57:19 np0005464214 nova_compute[260022]: 2025-10-01 13:57:19.056 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 09:57:19 np0005464214 nova_compute[260022]: 2025-10-01 13:57:19.057 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 09:57:19 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1632: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:57:19 np0005464214 nova_compute[260022]: 2025-10-01 13:57:19.160 2 INFO nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Instance 3d91db2b-812b-47ea-a0f8-0384b4c68597 has allocations against this compute host but is not found in the database.#033[00m
Oct  1 09:57:19 np0005464214 nova_compute[260022]: 2025-10-01 13:57:19.198 2 INFO nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Instance 84424f30-80d3-425e-b60f-86809ad3c076 has allocations against this compute host but is not found in the database.#033[00m
Oct  1 09:57:19 np0005464214 nova_compute[260022]: 2025-10-01 13:57:19.199 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  1 09:57:19 np0005464214 nova_compute[260022]: 2025-10-01 13:57:19.199 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  1 09:57:19 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  1 09:57:19 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  1 09:57:19 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  1 09:57:19 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  1 09:57:19 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  1 09:57:19 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:57:19 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev 39828c43-0274-40f7-b3a5-861716ded07a does not exist
Oct  1 09:57:19 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev 9ce96aef-56b0-4bbc-b489-098a8c508f0d does not exist
Oct  1 09:57:19 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev 31b0e9d3-ff67-4751-b1f2-67b8596c3c9a does not exist
Oct  1 09:57:19 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  1 09:57:19 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  1 09:57:19 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  1 09:57:19 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  1 09:57:19 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  1 09:57:19 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  1 09:57:19 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  1 09:57:19 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:57:19 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  1 09:57:19 np0005464214 nova_compute[260022]: 2025-10-01 13:57:19.279 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 09:57:19 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  1 09:57:19 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2966520485' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  1 09:57:19 np0005464214 nova_compute[260022]: 2025-10-01 13:57:19.744 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 09:57:19 np0005464214 nova_compute[260022]: 2025-10-01 13:57:19.752 2 DEBUG nova.compute.provider_tree [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Inventory has not changed in ProviderTree for provider: c1b9017d-7e6f-44ea-9ee2-bc19313d736f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  1 09:57:19 np0005464214 nova_compute[260022]: 2025-10-01 13:57:19.771 2 DEBUG nova.scheduler.client.report [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Inventory has not changed for provider c1b9017d-7e6f-44ea-9ee2-bc19313d736f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  1 09:57:19 np0005464214 nova_compute[260022]: 2025-10-01 13:57:19.774 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  1 09:57:19 np0005464214 nova_compute[260022]: 2025-10-01 13:57:19.775 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.718s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 09:57:19 np0005464214 podman[290395]: 2025-10-01 13:57:19.938979553 +0000 UTC m=+0.058306349 container create abe32fc63a0990c960b30f02ab9f1819e2583762784923471a4ff9b7163a54c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_sutherland, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 09:57:19 np0005464214 systemd[1]: Started libpod-conmon-abe32fc63a0990c960b30f02ab9f1819e2583762784923471a4ff9b7163a54c0.scope.
Oct  1 09:57:20 np0005464214 podman[290395]: 2025-10-01 13:57:19.908696058 +0000 UTC m=+0.028022904 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 09:57:20 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:57:20 np0005464214 podman[290395]: 2025-10-01 13:57:20.034706083 +0000 UTC m=+0.154032889 container init abe32fc63a0990c960b30f02ab9f1819e2583762784923471a4ff9b7163a54c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_sutherland, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 09:57:20 np0005464214 podman[290395]: 2025-10-01 13:57:20.044199606 +0000 UTC m=+0.163526392 container start abe32fc63a0990c960b30f02ab9f1819e2583762784923471a4ff9b7163a54c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_sutherland, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct  1 09:57:20 np0005464214 podman[290395]: 2025-10-01 13:57:20.047891873 +0000 UTC m=+0.167218659 container attach abe32fc63a0990c960b30f02ab9f1819e2583762784923471a4ff9b7163a54c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_sutherland, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Oct  1 09:57:20 np0005464214 amazing_sutherland[290411]: 167 167
Oct  1 09:57:20 np0005464214 systemd[1]: libpod-abe32fc63a0990c960b30f02ab9f1819e2583762784923471a4ff9b7163a54c0.scope: Deactivated successfully.
Oct  1 09:57:20 np0005464214 podman[290395]: 2025-10-01 13:57:20.053541893 +0000 UTC m=+0.172868649 container died abe32fc63a0990c960b30f02ab9f1819e2583762784923471a4ff9b7163a54c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_sutherland, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 09:57:20 np0005464214 systemd[1]: var-lib-containers-storage-overlay-b9e4521c2fc648582d9843f9109315ceddc7132b64be57cdc56edc5f941369ef-merged.mount: Deactivated successfully.
Oct  1 09:57:20 np0005464214 podman[290395]: 2025-10-01 13:57:20.098140435 +0000 UTC m=+0.217467211 container remove abe32fc63a0990c960b30f02ab9f1819e2583762784923471a4ff9b7163a54c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_sutherland, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 09:57:20 np0005464214 systemd[1]: libpod-conmon-abe32fc63a0990c960b30f02ab9f1819e2583762784923471a4ff9b7163a54c0.scope: Deactivated successfully.
Oct  1 09:57:20 np0005464214 podman[290437]: 2025-10-01 13:57:20.296710101 +0000 UTC m=+0.061327796 container create f88f29e5843d11d2d0430858015bb1632287138678f70171286eed806de2de94 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_yonath, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct  1 09:57:20 np0005464214 systemd[1]: Started libpod-conmon-f88f29e5843d11d2d0430858015bb1632287138678f70171286eed806de2de94.scope.
Oct  1 09:57:20 np0005464214 podman[290437]: 2025-10-01 13:57:20.269047089 +0000 UTC m=+0.033664864 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 09:57:20 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:57:20 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/433e2ffb1532a7a294ef288d4500fe24762df58fa650f2d282a8d845a52fc128/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 09:57:20 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/433e2ffb1532a7a294ef288d4500fe24762df58fa650f2d282a8d845a52fc128/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 09:57:20 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/433e2ffb1532a7a294ef288d4500fe24762df58fa650f2d282a8d845a52fc128/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 09:57:20 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/433e2ffb1532a7a294ef288d4500fe24762df58fa650f2d282a8d845a52fc128/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 09:57:20 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/433e2ffb1532a7a294ef288d4500fe24762df58fa650f2d282a8d845a52fc128/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  1 09:57:20 np0005464214 podman[290437]: 2025-10-01 13:57:20.402709648 +0000 UTC m=+0.167327433 container init f88f29e5843d11d2d0430858015bb1632287138678f70171286eed806de2de94 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_yonath, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct  1 09:57:20 np0005464214 podman[290437]: 2025-10-01 13:57:20.41751669 +0000 UTC m=+0.182134415 container start f88f29e5843d11d2d0430858015bb1632287138678f70171286eed806de2de94 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_yonath, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 09:57:20 np0005464214 podman[290437]: 2025-10-01 13:57:20.421680703 +0000 UTC m=+0.186298428 container attach f88f29e5843d11d2d0430858015bb1632287138678f70171286eed806de2de94 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_yonath, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3)
Oct  1 09:57:20 np0005464214 nova_compute[260022]: 2025-10-01 13:57:20.776 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 09:57:21 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1633: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:57:21 np0005464214 nova_compute[260022]: 2025-10-01 13:57:21.342 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 09:57:21 np0005464214 crazy_yonath[290454]: --> passed data devices: 0 physical, 3 LVM
Oct  1 09:57:21 np0005464214 crazy_yonath[290454]: --> relative data size: 1.0
Oct  1 09:57:21 np0005464214 crazy_yonath[290454]: --> All data devices are unavailable
Oct  1 09:57:21 np0005464214 systemd[1]: libpod-f88f29e5843d11d2d0430858015bb1632287138678f70171286eed806de2de94.scope: Deactivated successfully.
Oct  1 09:57:21 np0005464214 podman[290437]: 2025-10-01 13:57:21.590490432 +0000 UTC m=+1.355108137 container died f88f29e5843d11d2d0430858015bb1632287138678f70171286eed806de2de94 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_yonath, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct  1 09:57:21 np0005464214 systemd[1]: libpod-f88f29e5843d11d2d0430858015bb1632287138678f70171286eed806de2de94.scope: Consumed 1.117s CPU time.
Oct  1 09:57:21 np0005464214 systemd[1]: var-lib-containers-storage-overlay-433e2ffb1532a7a294ef288d4500fe24762df58fa650f2d282a8d845a52fc128-merged.mount: Deactivated successfully.
Oct  1 09:57:21 np0005464214 podman[290437]: 2025-10-01 13:57:21.64406544 +0000 UTC m=+1.408683145 container remove f88f29e5843d11d2d0430858015bb1632287138678f70171286eed806de2de94 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_yonath, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 09:57:21 np0005464214 systemd[1]: libpod-conmon-f88f29e5843d11d2d0430858015bb1632287138678f70171286eed806de2de94.scope: Deactivated successfully.
Oct  1 09:57:22 np0005464214 nova_compute[260022]: 2025-10-01 13:57:22.344 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 09:57:22 np0005464214 nova_compute[260022]: 2025-10-01 13:57:22.345 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  1 09:57:22 np0005464214 nova_compute[260022]: 2025-10-01 13:57:22.345 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  1 09:57:22 np0005464214 nova_compute[260022]: 2025-10-01 13:57:22.375 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  1 09:57:22 np0005464214 nova_compute[260022]: 2025-10-01 13:57:22.376 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 09:57:22 np0005464214 podman[290633]: 2025-10-01 13:57:22.407116482 +0000 UTC m=+0.060795729 container create 0f2cdc206862dfcf2eb556b30aa25be79b11e15ab9be0a8c2f4e0d2ea4631bbc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_hypatia, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct  1 09:57:22 np0005464214 systemd[1]: Started libpod-conmon-0f2cdc206862dfcf2eb556b30aa25be79b11e15ab9be0a8c2f4e0d2ea4631bbc.scope.
Oct  1 09:57:22 np0005464214 podman[290633]: 2025-10-01 13:57:22.385655168 +0000 UTC m=+0.039334435 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 09:57:22 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:57:22 np0005464214 podman[290633]: 2025-10-01 13:57:22.514671209 +0000 UTC m=+0.168350526 container init 0f2cdc206862dfcf2eb556b30aa25be79b11e15ab9be0a8c2f4e0d2ea4631bbc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_hypatia, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct  1 09:57:22 np0005464214 podman[290633]: 2025-10-01 13:57:22.527112055 +0000 UTC m=+0.180791302 container start 0f2cdc206862dfcf2eb556b30aa25be79b11e15ab9be0a8c2f4e0d2ea4631bbc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_hypatia, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 09:57:22 np0005464214 podman[290633]: 2025-10-01 13:57:22.531599158 +0000 UTC m=+0.185278475 container attach 0f2cdc206862dfcf2eb556b30aa25be79b11e15ab9be0a8c2f4e0d2ea4631bbc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_hypatia, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 09:57:22 np0005464214 optimistic_hypatia[290650]: 167 167
Oct  1 09:57:22 np0005464214 systemd[1]: libpod-0f2cdc206862dfcf2eb556b30aa25be79b11e15ab9be0a8c2f4e0d2ea4631bbc.scope: Deactivated successfully.
Oct  1 09:57:22 np0005464214 podman[290633]: 2025-10-01 13:57:22.535400509 +0000 UTC m=+0.189079756 container died 0f2cdc206862dfcf2eb556b30aa25be79b11e15ab9be0a8c2f4e0d2ea4631bbc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_hypatia, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef)
Oct  1 09:57:22 np0005464214 systemd[1]: var-lib-containers-storage-overlay-f461b5eaa30642aba0951546593080a2882abd27572bde96ac3c593450e6cebe-merged.mount: Deactivated successfully.
Oct  1 09:57:22 np0005464214 podman[290633]: 2025-10-01 13:57:22.586793897 +0000 UTC m=+0.240473154 container remove 0f2cdc206862dfcf2eb556b30aa25be79b11e15ab9be0a8c2f4e0d2ea4631bbc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_hypatia, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 09:57:22 np0005464214 systemd[1]: libpod-conmon-0f2cdc206862dfcf2eb556b30aa25be79b11e15ab9be0a8c2f4e0d2ea4631bbc.scope: Deactivated successfully.
Oct  1 09:57:22 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 09:57:22 np0005464214 podman[290674]: 2025-10-01 13:57:22.816931169 +0000 UTC m=+0.070026982 container create 180258ed3669c94e906dc14f45ed8d114b9a0d594ad9c28d093d0c55f7349da7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_villani, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 09:57:22 np0005464214 systemd[1]: Started libpod-conmon-180258ed3669c94e906dc14f45ed8d114b9a0d594ad9c28d093d0c55f7349da7.scope.
Oct  1 09:57:22 np0005464214 podman[290674]: 2025-10-01 13:57:22.788762161 +0000 UTC m=+0.041858064 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 09:57:22 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:57:22 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c1e1f2149c1c7a736cc7fa095597859a2ca1d9c874f64e8cdb23bb91d87ba62/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 09:57:22 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c1e1f2149c1c7a736cc7fa095597859a2ca1d9c874f64e8cdb23bb91d87ba62/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 09:57:22 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c1e1f2149c1c7a736cc7fa095597859a2ca1d9c874f64e8cdb23bb91d87ba62/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 09:57:22 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c1e1f2149c1c7a736cc7fa095597859a2ca1d9c874f64e8cdb23bb91d87ba62/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 09:57:22 np0005464214 podman[290674]: 2025-10-01 13:57:22.928128163 +0000 UTC m=+0.181224066 container init 180258ed3669c94e906dc14f45ed8d114b9a0d594ad9c28d093d0c55f7349da7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_villani, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct  1 09:57:22 np0005464214 podman[290674]: 2025-10-01 13:57:22.94247219 +0000 UTC m=+0.195568013 container start 180258ed3669c94e906dc14f45ed8d114b9a0d594ad9c28d093d0c55f7349da7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_villani, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Oct  1 09:57:22 np0005464214 podman[290674]: 2025-10-01 13:57:22.945826006 +0000 UTC m=+0.198921829 container attach 180258ed3669c94e906dc14f45ed8d114b9a0d594ad9c28d093d0c55f7349da7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_villani, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Oct  1 09:57:23 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1634: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:57:23 np0005464214 youthful_villani[290690]: {
Oct  1 09:57:23 np0005464214 youthful_villani[290690]:    "0": [
Oct  1 09:57:23 np0005464214 youthful_villani[290690]:        {
Oct  1 09:57:23 np0005464214 youthful_villani[290690]:            "devices": [
Oct  1 09:57:23 np0005464214 youthful_villani[290690]:                "/dev/loop3"
Oct  1 09:57:23 np0005464214 youthful_villani[290690]:            ],
Oct  1 09:57:23 np0005464214 youthful_villani[290690]:            "lv_name": "ceph_lv0",
Oct  1 09:57:23 np0005464214 youthful_villani[290690]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  1 09:57:23 np0005464214 youthful_villani[290690]:            "lv_size": "21470642176",
Oct  1 09:57:23 np0005464214 youthful_villani[290690]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 09:57:23 np0005464214 youthful_villani[290690]:            "lv_uuid": "wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS",
Oct  1 09:57:23 np0005464214 youthful_villani[290690]:            "name": "ceph_lv0",
Oct  1 09:57:23 np0005464214 youthful_villani[290690]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  1 09:57:23 np0005464214 youthful_villani[290690]:            "tags": {
Oct  1 09:57:23 np0005464214 youthful_villani[290690]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  1 09:57:23 np0005464214 youthful_villani[290690]:                "ceph.block_uuid": "wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS",
Oct  1 09:57:23 np0005464214 youthful_villani[290690]:                "ceph.cephx_lockbox_secret": "",
Oct  1 09:57:23 np0005464214 youthful_villani[290690]:                "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 09:57:23 np0005464214 youthful_villani[290690]:                "ceph.cluster_name": "ceph",
Oct  1 09:57:23 np0005464214 youthful_villani[290690]:                "ceph.crush_device_class": "",
Oct  1 09:57:23 np0005464214 youthful_villani[290690]:                "ceph.encrypted": "0",
Oct  1 09:57:23 np0005464214 youthful_villani[290690]:                "ceph.osd_fsid": "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982",
Oct  1 09:57:23 np0005464214 youthful_villani[290690]:                "ceph.osd_id": "0",
Oct  1 09:57:23 np0005464214 youthful_villani[290690]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 09:57:23 np0005464214 youthful_villani[290690]:                "ceph.type": "block",
Oct  1 09:57:23 np0005464214 youthful_villani[290690]:                "ceph.vdo": "0"
Oct  1 09:57:23 np0005464214 youthful_villani[290690]:            },
Oct  1 09:57:23 np0005464214 youthful_villani[290690]:            "type": "block",
Oct  1 09:57:23 np0005464214 youthful_villani[290690]:            "vg_name": "ceph_vg0"
Oct  1 09:57:23 np0005464214 youthful_villani[290690]:        }
Oct  1 09:57:23 np0005464214 youthful_villani[290690]:    ],
Oct  1 09:57:23 np0005464214 youthful_villani[290690]:    "1": [
Oct  1 09:57:23 np0005464214 youthful_villani[290690]:        {
Oct  1 09:57:23 np0005464214 youthful_villani[290690]:            "devices": [
Oct  1 09:57:23 np0005464214 youthful_villani[290690]:                "/dev/loop4"
Oct  1 09:57:23 np0005464214 youthful_villani[290690]:            ],
Oct  1 09:57:23 np0005464214 youthful_villani[290690]:            "lv_name": "ceph_lv1",
Oct  1 09:57:23 np0005464214 youthful_villani[290690]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  1 09:57:23 np0005464214 youthful_villani[290690]:            "lv_size": "21470642176",
Oct  1 09:57:23 np0005464214 youthful_villani[290690]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f5852bc7-e830-489a-b8a9-42dfbbe71426,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 09:57:23 np0005464214 youthful_villani[290690]:            "lv_uuid": "BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY",
Oct  1 09:57:23 np0005464214 youthful_villani[290690]:            "name": "ceph_lv1",
Oct  1 09:57:23 np0005464214 youthful_villani[290690]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  1 09:57:23 np0005464214 youthful_villani[290690]:            "tags": {
Oct  1 09:57:23 np0005464214 youthful_villani[290690]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  1 09:57:23 np0005464214 youthful_villani[290690]:                "ceph.block_uuid": "BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY",
Oct  1 09:57:23 np0005464214 youthful_villani[290690]:                "ceph.cephx_lockbox_secret": "",
Oct  1 09:57:23 np0005464214 youthful_villani[290690]:                "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 09:57:23 np0005464214 youthful_villani[290690]:                "ceph.cluster_name": "ceph",
Oct  1 09:57:23 np0005464214 youthful_villani[290690]:                "ceph.crush_device_class": "",
Oct  1 09:57:23 np0005464214 youthful_villani[290690]:                "ceph.encrypted": "0",
Oct  1 09:57:23 np0005464214 youthful_villani[290690]:                "ceph.osd_fsid": "f5852bc7-e830-489a-b8a9-42dfbbe71426",
Oct  1 09:57:23 np0005464214 youthful_villani[290690]:                "ceph.osd_id": "1",
Oct  1 09:57:23 np0005464214 youthful_villani[290690]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 09:57:23 np0005464214 youthful_villani[290690]:                "ceph.type": "block",
Oct  1 09:57:23 np0005464214 youthful_villani[290690]:                "ceph.vdo": "0"
Oct  1 09:57:23 np0005464214 youthful_villani[290690]:            },
Oct  1 09:57:23 np0005464214 youthful_villani[290690]:            "type": "block",
Oct  1 09:57:23 np0005464214 youthful_villani[290690]:            "vg_name": "ceph_vg1"
Oct  1 09:57:23 np0005464214 youthful_villani[290690]:        }
Oct  1 09:57:23 np0005464214 youthful_villani[290690]:    ],
Oct  1 09:57:23 np0005464214 youthful_villani[290690]:    "2": [
Oct  1 09:57:23 np0005464214 youthful_villani[290690]:        {
Oct  1 09:57:23 np0005464214 youthful_villani[290690]:            "devices": [
Oct  1 09:57:23 np0005464214 youthful_villani[290690]:                "/dev/loop5"
Oct  1 09:57:23 np0005464214 youthful_villani[290690]:            ],
Oct  1 09:57:23 np0005464214 youthful_villani[290690]:            "lv_name": "ceph_lv2",
Oct  1 09:57:23 np0005464214 youthful_villani[290690]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  1 09:57:23 np0005464214 youthful_villani[290690]:            "lv_size": "21470642176",
Oct  1 09:57:23 np0005464214 youthful_villani[290690]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c4c937e2-a8a8-47c3-af37-fdedb6fff1f9,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 09:57:23 np0005464214 youthful_villani[290690]:            "lv_uuid": "mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK",
Oct  1 09:57:23 np0005464214 youthful_villani[290690]:            "name": "ceph_lv2",
Oct  1 09:57:23 np0005464214 youthful_villani[290690]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  1 09:57:23 np0005464214 youthful_villani[290690]:            "tags": {
Oct  1 09:57:23 np0005464214 youthful_villani[290690]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  1 09:57:23 np0005464214 youthful_villani[290690]:                "ceph.block_uuid": "mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK",
Oct  1 09:57:23 np0005464214 youthful_villani[290690]:                "ceph.cephx_lockbox_secret": "",
Oct  1 09:57:23 np0005464214 youthful_villani[290690]:                "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 09:57:23 np0005464214 youthful_villani[290690]:                "ceph.cluster_name": "ceph",
Oct  1 09:57:23 np0005464214 youthful_villani[290690]:                "ceph.crush_device_class": "",
Oct  1 09:57:23 np0005464214 youthful_villani[290690]:                "ceph.encrypted": "0",
Oct  1 09:57:23 np0005464214 youthful_villani[290690]:                "ceph.osd_fsid": "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9",
Oct  1 09:57:23 np0005464214 youthful_villani[290690]:                "ceph.osd_id": "2",
Oct  1 09:57:23 np0005464214 youthful_villani[290690]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 09:57:23 np0005464214 youthful_villani[290690]:                "ceph.type": "block",
Oct  1 09:57:23 np0005464214 youthful_villani[290690]:                "ceph.vdo": "0"
Oct  1 09:57:23 np0005464214 youthful_villani[290690]:            },
Oct  1 09:57:23 np0005464214 youthful_villani[290690]:            "type": "block",
Oct  1 09:57:23 np0005464214 youthful_villani[290690]:            "vg_name": "ceph_vg2"
Oct  1 09:57:23 np0005464214 youthful_villani[290690]:        }
Oct  1 09:57:23 np0005464214 youthful_villani[290690]:    ]
Oct  1 09:57:23 np0005464214 youthful_villani[290690]: }
Oct  1 09:57:23 np0005464214 systemd[1]: libpod-180258ed3669c94e906dc14f45ed8d114b9a0d594ad9c28d093d0c55f7349da7.scope: Deactivated successfully.
Oct  1 09:57:23 np0005464214 podman[290674]: 2025-10-01 13:57:23.736462477 +0000 UTC m=+0.989558330 container died 180258ed3669c94e906dc14f45ed8d114b9a0d594ad9c28d093d0c55f7349da7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_villani, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Oct  1 09:57:23 np0005464214 systemd[1]: var-lib-containers-storage-overlay-2c1e1f2149c1c7a736cc7fa095597859a2ca1d9c874f64e8cdb23bb91d87ba62-merged.mount: Deactivated successfully.
Oct  1 09:57:23 np0005464214 podman[290674]: 2025-10-01 13:57:23.820855366 +0000 UTC m=+1.073951209 container remove 180258ed3669c94e906dc14f45ed8d114b9a0d594ad9c28d093d0c55f7349da7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_villani, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Oct  1 09:57:23 np0005464214 systemd[1]: libpod-conmon-180258ed3669c94e906dc14f45ed8d114b9a0d594ad9c28d093d0c55f7349da7.scope: Deactivated successfully.
Oct  1 09:57:24 np0005464214 podman[290856]: 2025-10-01 13:57:24.703666513 +0000 UTC m=+0.066224680 container create b661595991a65bdc0baccf9642bc2955ee2177e91a6087ed835675371e10fc07 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_neumann, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Oct  1 09:57:24 np0005464214 systemd[1]: Started libpod-conmon-b661595991a65bdc0baccf9642bc2955ee2177e91a6087ed835675371e10fc07.scope.
Oct  1 09:57:24 np0005464214 podman[290856]: 2025-10-01 13:57:24.678002927 +0000 UTC m=+0.040561134 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 09:57:24 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:57:24 np0005464214 podman[290856]: 2025-10-01 13:57:24.809563578 +0000 UTC m=+0.172121795 container init b661595991a65bdc0baccf9642bc2955ee2177e91a6087ed835675371e10fc07 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_neumann, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef)
Oct  1 09:57:24 np0005464214 podman[290856]: 2025-10-01 13:57:24.82280398 +0000 UTC m=+0.185362147 container start b661595991a65bdc0baccf9642bc2955ee2177e91a6087ed835675371e10fc07 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_neumann, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Oct  1 09:57:24 np0005464214 podman[290856]: 2025-10-01 13:57:24.826633092 +0000 UTC m=+0.189191259 container attach b661595991a65bdc0baccf9642bc2955ee2177e91a6087ed835675371e10fc07 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_neumann, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 09:57:24 np0005464214 objective_neumann[290872]: 167 167
Oct  1 09:57:24 np0005464214 systemd[1]: libpod-b661595991a65bdc0baccf9642bc2955ee2177e91a6087ed835675371e10fc07.scope: Deactivated successfully.
Oct  1 09:57:24 np0005464214 podman[290856]: 2025-10-01 13:57:24.830386761 +0000 UTC m=+0.192944918 container died b661595991a65bdc0baccf9642bc2955ee2177e91a6087ed835675371e10fc07 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_neumann, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Oct  1 09:57:24 np0005464214 systemd[1]: var-lib-containers-storage-overlay-e80a799d31ff9091e4616775c973b80eb161432dbcf0e0d9f9d8acbbfb19949f-merged.mount: Deactivated successfully.
Oct  1 09:57:24 np0005464214 podman[290856]: 2025-10-01 13:57:24.887979117 +0000 UTC m=+0.250537284 container remove b661595991a65bdc0baccf9642bc2955ee2177e91a6087ed835675371e10fc07 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_neumann, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct  1 09:57:24 np0005464214 systemd[1]: libpod-conmon-b661595991a65bdc0baccf9642bc2955ee2177e91a6087ed835675371e10fc07.scope: Deactivated successfully.
Oct  1 09:57:25 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1635: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:57:25 np0005464214 podman[290896]: 2025-10-01 13:57:25.15485162 +0000 UTC m=+0.076464727 container create 66c76a536defcf40402e1678b26d6e8eda05427566c652578471eb9b7e0f1784 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_euclid, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 09:57:25 np0005464214 systemd[1]: Started libpod-conmon-66c76a536defcf40402e1678b26d6e8eda05427566c652578471eb9b7e0f1784.scope.
Oct  1 09:57:25 np0005464214 podman[290896]: 2025-10-01 13:57:25.126439924 +0000 UTC m=+0.048053071 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 09:57:25 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:57:25 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b992bb7f54d42e97e4fc991d2b3f9d551d3d5bd389de72f898b934318cdd354f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 09:57:25 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b992bb7f54d42e97e4fc991d2b3f9d551d3d5bd389de72f898b934318cdd354f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 09:57:25 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b992bb7f54d42e97e4fc991d2b3f9d551d3d5bd389de72f898b934318cdd354f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 09:57:25 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b992bb7f54d42e97e4fc991d2b3f9d551d3d5bd389de72f898b934318cdd354f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 09:57:25 np0005464214 podman[290896]: 2025-10-01 13:57:25.325347372 +0000 UTC m=+0.246960529 container init 66c76a536defcf40402e1678b26d6e8eda05427566c652578471eb9b7e0f1784 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_euclid, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct  1 09:57:25 np0005464214 podman[290896]: 2025-10-01 13:57:25.33750126 +0000 UTC m=+0.259114367 container start 66c76a536defcf40402e1678b26d6e8eda05427566c652578471eb9b7e0f1784 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_euclid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 09:57:25 np0005464214 nova_compute[260022]: 2025-10-01 13:57:25.345 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 09:57:25 np0005464214 podman[290896]: 2025-10-01 13:57:25.361495514 +0000 UTC m=+0.283108591 container attach 66c76a536defcf40402e1678b26d6e8eda05427566c652578471eb9b7e0f1784 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_euclid, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Oct  1 09:57:26 np0005464214 nova_compute[260022]: 2025-10-01 13:57:26.345 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 09:57:26 np0005464214 kind_euclid[290913]: {
Oct  1 09:57:26 np0005464214 kind_euclid[290913]:    "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982": {
Oct  1 09:57:26 np0005464214 kind_euclid[290913]:        "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 09:57:26 np0005464214 kind_euclid[290913]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  1 09:57:26 np0005464214 kind_euclid[290913]:        "osd_id": 0,
Oct  1 09:57:26 np0005464214 kind_euclid[290913]:        "osd_uuid": "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982",
Oct  1 09:57:26 np0005464214 kind_euclid[290913]:        "type": "bluestore"
Oct  1 09:57:26 np0005464214 kind_euclid[290913]:    },
Oct  1 09:57:26 np0005464214 kind_euclid[290913]:    "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9": {
Oct  1 09:57:26 np0005464214 kind_euclid[290913]:        "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 09:57:26 np0005464214 kind_euclid[290913]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  1 09:57:26 np0005464214 kind_euclid[290913]:        "osd_id": 2,
Oct  1 09:57:26 np0005464214 kind_euclid[290913]:        "osd_uuid": "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9",
Oct  1 09:57:26 np0005464214 kind_euclid[290913]:        "type": "bluestore"
Oct  1 09:57:26 np0005464214 kind_euclid[290913]:    },
Oct  1 09:57:26 np0005464214 kind_euclid[290913]:    "f5852bc7-e830-489a-b8a9-42dfbbe71426": {
Oct  1 09:57:26 np0005464214 kind_euclid[290913]:        "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 09:57:26 np0005464214 kind_euclid[290913]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  1 09:57:26 np0005464214 kind_euclid[290913]:        "osd_id": 1,
Oct  1 09:57:26 np0005464214 kind_euclid[290913]:        "osd_uuid": "f5852bc7-e830-489a-b8a9-42dfbbe71426",
Oct  1 09:57:26 np0005464214 kind_euclid[290913]:        "type": "bluestore"
Oct  1 09:57:26 np0005464214 kind_euclid[290913]:    }
Oct  1 09:57:26 np0005464214 kind_euclid[290913]: }
Oct  1 09:57:26 np0005464214 systemd[1]: libpod-66c76a536defcf40402e1678b26d6e8eda05427566c652578471eb9b7e0f1784.scope: Deactivated successfully.
Oct  1 09:57:26 np0005464214 podman[290896]: 2025-10-01 13:57:26.440775422 +0000 UTC m=+1.362388539 container died 66c76a536defcf40402e1678b26d6e8eda05427566c652578471eb9b7e0f1784 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_euclid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Oct  1 09:57:26 np0005464214 systemd[1]: libpod-66c76a536defcf40402e1678b26d6e8eda05427566c652578471eb9b7e0f1784.scope: Consumed 1.068s CPU time.
Oct  1 09:57:26 np0005464214 systemd[1]: var-lib-containers-storage-overlay-b992bb7f54d42e97e4fc991d2b3f9d551d3d5bd389de72f898b934318cdd354f-merged.mount: Deactivated successfully.
Oct  1 09:57:26 np0005464214 podman[290896]: 2025-10-01 13:57:26.518192369 +0000 UTC m=+1.439805476 container remove 66c76a536defcf40402e1678b26d6e8eda05427566c652578471eb9b7e0f1784 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_euclid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Oct  1 09:57:26 np0005464214 systemd[1]: libpod-conmon-66c76a536defcf40402e1678b26d6e8eda05427566c652578471eb9b7e0f1784.scope: Deactivated successfully.
Oct  1 09:57:26 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  1 09:57:26 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:57:26 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  1 09:57:26 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:57:26 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev 41ca8277-13ce-4734-9a4c-618c747dfdd7 does not exist
Oct  1 09:57:26 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev 755aa8c7-a409-40b2-a96a-bc7a6f13dd1b does not exist
Oct  1 09:57:27 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1636: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:57:27 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:57:27 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:57:27 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 09:57:29 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1637: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:57:29 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:57:29.258 161890 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:88:2a:05 10.100.0.2 2001:db8::f816:3eff:fe88:2a05'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fe88:2a05/64', 'neutron:device_id': 'ovnmeta-67be5091-48d6-486d-85f6-aba0fd30503c', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-67be5091-48d6-486d-85f6-aba0fd30503c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'caaa21e6a33148468bcc047eb7b8901f', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f5725087-c524-4c7a-9e75-2b25ff830453, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=9037b2e3-f1e6-4cf5-be59-84263673dd05) old=Port_Binding(mac=['fa:16:3e:88:2a:05 2001:db8::f816:3eff:fe88:2a05'], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fe88:2a05/64', 'neutron:device_id': 'ovnmeta-67be5091-48d6-486d-85f6-aba0fd30503c', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-67be5091-48d6-486d-85f6-aba0fd30503c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'caaa21e6a33148468bcc047eb7b8901f', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  1 09:57:29 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:57:29.262 161890 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 9037b2e3-f1e6-4cf5-be59-84263673dd05 in datapath 67be5091-48d6-486d-85f6-aba0fd30503c updated#033[00m
Oct  1 09:57:29 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:57:29.265 161890 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 67be5091-48d6-486d-85f6-aba0fd30503c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  1 09:57:29 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:57:29.267 161890 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.default', '--privsep_sock_path', '/tmp/tmpuzgco15p/privsep.sock']#033[00m
Oct  1 09:57:29 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:57:29.996 161890 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap#033[00m
Oct  1 09:57:29 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:57:29.997 161890 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpuzgco15p/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362#033[00m
Oct  1 09:57:29 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:57:29.861 291014 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m
Oct  1 09:57:29 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:57:29.868 291014 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m
Oct  1 09:57:29 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:57:29.871 291014 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/none#033[00m
Oct  1 09:57:29 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:57:29.872 291014 INFO oslo.privsep.daemon [-] privsep daemon running as pid 291014#033[00m
Oct  1 09:57:30 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:57:30.001 291014 DEBUG oslo.privsep.daemon [-] privsep: reply[39f10355-3784-47db-b3a1-6f949329d476]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 09:57:30 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:57:30.938 291014 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 09:57:30 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:57:30.938 291014 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 09:57:30 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:57:30.938 291014 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 09:57:31 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:57:31.040 291014 DEBUG oslo.privsep.daemon [-] privsep: reply[89c345c0-3604-44b8-bb69-cbef23c46725]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 09:57:31 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1638: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:57:32 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 09:57:32 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:57:32.722 161890 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:88:2a:05 2001:db8::f816:3eff:fe88:2a05'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fe88:2a05/64', 'neutron:device_id': 'ovnmeta-67be5091-48d6-486d-85f6-aba0fd30503c', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-67be5091-48d6-486d-85f6-aba0fd30503c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'caaa21e6a33148468bcc047eb7b8901f', 'neutron:revision_number': '6', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f5725087-c524-4c7a-9e75-2b25ff830453, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=9037b2e3-f1e6-4cf5-be59-84263673dd05) old=Port_Binding(mac=['fa:16:3e:88:2a:05 10.100.0.2 2001:db8::f816:3eff:fe88:2a05'], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fe88:2a05/64', 'neutron:device_id': 'ovnmeta-67be5091-48d6-486d-85f6-aba0fd30503c', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-67be5091-48d6-486d-85f6-aba0fd30503c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'caaa21e6a33148468bcc047eb7b8901f', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  1 09:57:32 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:57:32.724 161890 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 9037b2e3-f1e6-4cf5-be59-84263673dd05 in datapath 67be5091-48d6-486d-85f6-aba0fd30503c updated#033[00m
Oct  1 09:57:32 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:57:32.726 161890 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 67be5091-48d6-486d-85f6-aba0fd30503c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  1 09:57:32 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:57:32.728 291014 DEBUG oslo.privsep.daemon [-] privsep: reply[fea7e3c5-4a73-4f67-9145-cf348fc27918]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 09:57:33 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1639: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:57:35 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1640: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:57:35 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:57:35.277 161890 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:88:2a:05 10.100.0.2 2001:db8::f816:3eff:fe88:2a05'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fe88:2a05/64', 'neutron:device_id': 'ovnmeta-67be5091-48d6-486d-85f6-aba0fd30503c', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-67be5091-48d6-486d-85f6-aba0fd30503c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'caaa21e6a33148468bcc047eb7b8901f', 'neutron:revision_number': '7', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f5725087-c524-4c7a-9e75-2b25ff830453, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=9037b2e3-f1e6-4cf5-be59-84263673dd05) old=Port_Binding(mac=['fa:16:3e:88:2a:05 2001:db8::f816:3eff:fe88:2a05'], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fe88:2a05/64', 'neutron:device_id': 'ovnmeta-67be5091-48d6-486d-85f6-aba0fd30503c', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-67be5091-48d6-486d-85f6-aba0fd30503c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'caaa21e6a33148468bcc047eb7b8901f', 'neutron:revision_number': '6', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  1 09:57:35 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:57:35.278 161890 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 9037b2e3-f1e6-4cf5-be59-84263673dd05 in datapath 67be5091-48d6-486d-85f6-aba0fd30503c updated#033[00m
Oct  1 09:57:35 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:57:35.279 161890 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 67be5091-48d6-486d-85f6-aba0fd30503c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  1 09:57:35 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:57:35.280 291014 DEBUG oslo.privsep.daemon [-] privsep: reply[4326b8c0-1de9-4e6e-a722-5ddaef919420]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 09:57:37 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1641: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:57:37 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 09:57:39 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1642: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:57:40 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:57:40.170 161890 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:88:2a:05 10.100.0.2'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-67be5091-48d6-486d-85f6-aba0fd30503c', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-67be5091-48d6-486d-85f6-aba0fd30503c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'caaa21e6a33148468bcc047eb7b8901f', 'neutron:revision_number': '10', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f5725087-c524-4c7a-9e75-2b25ff830453, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=9037b2e3-f1e6-4cf5-be59-84263673dd05) old=Port_Binding(mac=['fa:16:3e:88:2a:05 10.100.0.2 2001:db8::f816:3eff:fe88:2a05'], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fe88:2a05/64', 'neutron:device_id': 'ovnmeta-67be5091-48d6-486d-85f6-aba0fd30503c', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-67be5091-48d6-486d-85f6-aba0fd30503c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'caaa21e6a33148468bcc047eb7b8901f', 'neutron:revision_number': '7', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  1 09:57:40 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:57:40.173 161890 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 9037b2e3-f1e6-4cf5-be59-84263673dd05 in datapath 67be5091-48d6-486d-85f6-aba0fd30503c updated#033[00m
Oct  1 09:57:40 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:57:40.175 161890 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 67be5091-48d6-486d-85f6-aba0fd30503c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  1 09:57:40 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:57:40.176 291014 DEBUG oslo.privsep.daemon [-] privsep: reply[5342a964-f8cd-48ee-916d-b3bdd3e5d0ae]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 09:57:41 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1643: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:57:42 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 09:57:43 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:57:43.025 161890 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:88:2a:05 10.100.0.2 2001:db8::f816:3eff:fe88:2a05'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fe88:2a05/64', 'neutron:device_id': 'ovnmeta-67be5091-48d6-486d-85f6-aba0fd30503c', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-67be5091-48d6-486d-85f6-aba0fd30503c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'caaa21e6a33148468bcc047eb7b8901f', 'neutron:revision_number': '11', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f5725087-c524-4c7a-9e75-2b25ff830453, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=9037b2e3-f1e6-4cf5-be59-84263673dd05) old=Port_Binding(mac=['fa:16:3e:88:2a:05 10.100.0.2'], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-67be5091-48d6-486d-85f6-aba0fd30503c', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-67be5091-48d6-486d-85f6-aba0fd30503c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'caaa21e6a33148468bcc047eb7b8901f', 'neutron:revision_number': '10', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  1 09:57:43 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:57:43.028 161890 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 9037b2e3-f1e6-4cf5-be59-84263673dd05 in datapath 67be5091-48d6-486d-85f6-aba0fd30503c updated#033[00m
Oct  1 09:57:43 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:57:43.031 161890 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 67be5091-48d6-486d-85f6-aba0fd30503c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  1 09:57:43 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:57:43.032 291014 DEBUG oslo.privsep.daemon [-] privsep: reply[17acc2a0-62db-4e3c-a7b8-d63652f64fdd]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 09:57:43 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1644: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:57:44 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:57:44.727 161890 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=19, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'd2:60:33', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '86:05:a5:2a:6f:f1'}, ipsec=False) old=SB_Global(nb_cfg=18) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  1 09:57:44 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:57:44.729 161890 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  1 09:57:44 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:57:44.731 161890 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=7280030e-2ba6-406c-9fae-f8284a927c47, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '19'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  1 09:57:45 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1645: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:57:46 np0005464214 podman[291022]: 2025-10-01 13:57:46.550260969 +0000 UTC m=+0.090667740 container health_status dfa509645741d50db256c392f2c7e50ef8a1653f296eb853aefbe3d2ba4746c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=36bccb96575468ec919301205d8daa2c, org.label-schema.build-date=20250923, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct  1 09:57:46 np0005464214 podman[291020]: 2025-10-01 13:57:46.568493999 +0000 UTC m=+0.109476889 container health_status a1dc94033b3cdaf0c1939938a446bc6911aa0e2bf0fa0c22c3e618390e8d96e1 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.build-date=20250923, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=36bccb96575468ec919301205d8daa2c)
Oct  1 09:57:46 np0005464214 podman[291021]: 2025-10-01 13:57:46.601486501 +0000 UTC m=+0.139685512 container health_status c13c85560319b246b9406aed1b6b3015b2c424d3ffff37d0301b6634f5976b0d (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20250923, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, managed_by=edpm_ansible, config_id=iscsid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3)
Oct  1 09:57:46 np0005464214 podman[291019]: 2025-10-01 13:57:46.62876158 +0000 UTC m=+0.169909414 container health_status 583cde33fa111e3f81ff9a7adf51b7bee43cd123d54d60383b2d474b3b5066ad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=36bccb96575468ec919301205d8daa2c, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20250923, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct  1 09:57:47 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1646: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:57:47 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:57:47.430 161890 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:88:2a:05 10.100.0.2'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-67be5091-48d6-486d-85f6-aba0fd30503c', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-67be5091-48d6-486d-85f6-aba0fd30503c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'caaa21e6a33148468bcc047eb7b8901f', 'neutron:revision_number': '14', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f5725087-c524-4c7a-9e75-2b25ff830453, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=9037b2e3-f1e6-4cf5-be59-84263673dd05) old=Port_Binding(mac=['fa:16:3e:88:2a:05 10.100.0.2 2001:db8::f816:3eff:fe88:2a05'], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fe88:2a05/64', 'neutron:device_id': 'ovnmeta-67be5091-48d6-486d-85f6-aba0fd30503c', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-67be5091-48d6-486d-85f6-aba0fd30503c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'caaa21e6a33148468bcc047eb7b8901f', 'neutron:revision_number': '11', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  1 09:57:47 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:57:47.432 161890 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 9037b2e3-f1e6-4cf5-be59-84263673dd05 in datapath 67be5091-48d6-486d-85f6-aba0fd30503c updated#033[00m
Oct  1 09:57:47 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:57:47.434 161890 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 67be5091-48d6-486d-85f6-aba0fd30503c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  1 09:57:47 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:57:47.435 291014 DEBUG oslo.privsep.daemon [-] privsep: reply[93d27265-026f-4b65-bbc1-4744c57602c2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 09:57:47 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 09:57:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 09:57:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 09:57:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 09:57:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 09:57:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 09:57:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 09:57:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] Optimize plan auto_2025-10-01_13:57:47
Oct  1 09:57:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  1 09:57:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] do_upmap
Oct  1 09:57:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] pools ['vms', 'default.rgw.log', 'default.rgw.control', 'images', 'default.rgw.meta', '.rgw.root', 'backups', 'volumes', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', '.mgr']
Oct  1 09:57:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] prepared 0/10 changes
Oct  1 09:57:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  1 09:57:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  1 09:57:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  1 09:57:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  1 09:57:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  1 09:57:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  1 09:57:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  1 09:57:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  1 09:57:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  1 09:57:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  1 09:57:49 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:57:49.024 161890 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:88:2a:05 10.100.0.2 2001:db8::f816:3eff:fe88:2a05'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fe88:2a05/64', 'neutron:device_id': 'ovnmeta-67be5091-48d6-486d-85f6-aba0fd30503c', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-67be5091-48d6-486d-85f6-aba0fd30503c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'caaa21e6a33148468bcc047eb7b8901f', 'neutron:revision_number': '15', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f5725087-c524-4c7a-9e75-2b25ff830453, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=9037b2e3-f1e6-4cf5-be59-84263673dd05) old=Port_Binding(mac=['fa:16:3e:88:2a:05 10.100.0.2'], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-67be5091-48d6-486d-85f6-aba0fd30503c', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-67be5091-48d6-486d-85f6-aba0fd30503c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'caaa21e6a33148468bcc047eb7b8901f', 'neutron:revision_number': '14', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  1 09:57:49 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:57:49.026 161890 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 9037b2e3-f1e6-4cf5-be59-84263673dd05 in datapath 67be5091-48d6-486d-85f6-aba0fd30503c updated#033[00m
Oct  1 09:57:49 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:57:49.028 161890 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 67be5091-48d6-486d-85f6-aba0fd30503c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  1 09:57:49 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:57:49.029 291014 DEBUG oslo.privsep.daemon [-] privsep: reply[a771a8ba-0383-48fc-8760-9cbfcd63a1b1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 09:57:49 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1647: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:57:51 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1648: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:57:52 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 09:57:53 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1649: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:57:55 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1650: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:57:55 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  1 09:57:55 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/981184135' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  1 09:57:55 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  1 09:57:55 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/981184135' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  1 09:57:55 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:57:55.908 161890 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:88:2a:05 2001:db8::f816:3eff:fe88:2a05'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fe88:2a05/64', 'neutron:device_id': 'ovnmeta-67be5091-48d6-486d-85f6-aba0fd30503c', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-67be5091-48d6-486d-85f6-aba0fd30503c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'caaa21e6a33148468bcc047eb7b8901f', 'neutron:revision_number': '18', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f5725087-c524-4c7a-9e75-2b25ff830453, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=9037b2e3-f1e6-4cf5-be59-84263673dd05) old=Port_Binding(mac=['fa:16:3e:88:2a:05 10.100.0.2 2001:db8::f816:3eff:fe88:2a05'], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fe88:2a05/64', 'neutron:device_id': 'ovnmeta-67be5091-48d6-486d-85f6-aba0fd30503c', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-67be5091-48d6-486d-85f6-aba0fd30503c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'caaa21e6a33148468bcc047eb7b8901f', 'neutron:revision_number': '15', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  1 09:57:55 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:57:55.910 161890 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 9037b2e3-f1e6-4cf5-be59-84263673dd05 in datapath 67be5091-48d6-486d-85f6-aba0fd30503c updated#033[00m
Oct  1 09:57:55 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:57:55.911 161890 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 67be5091-48d6-486d-85f6-aba0fd30503c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  1 09:57:55 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:57:55.912 291014 DEBUG oslo.privsep.daemon [-] privsep: reply[478d397f-8fa4-459d-8530-beab3304af25]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 09:57:57 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1651: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:57:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] _maybe_adjust
Oct  1 09:57:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:57:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  1 09:57:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:57:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 09:57:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:57:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 09:57:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:57:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 09:57:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:57:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Oct  1 09:57:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:57:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct  1 09:57:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:57:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 09:57:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:57:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  1 09:57:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:57:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  1 09:57:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:57:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 09:57:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:57:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  1 09:57:57 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 09:57:59 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1652: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:58:00 np0005464214 ceph-mon[74802]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  1 09:58:00 np0005464214 ceph-mon[74802]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 3000.0 total, 600.0 interval#012Cumulative writes: 7359 writes, 33K keys, 7359 commit groups, 1.0 writes per commit group, ingest: 0.05 GB, 0.02 MB/s#012Cumulative WAL: 7359 writes, 7359 syncs, 1.00 writes per sync, written: 0.05 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1396 writes, 6303 keys, 1396 commit groups, 1.0 writes per commit group, ingest: 8.98 MB, 0.01 MB/s#012Interval WAL: 1396 writes, 1396 syncs, 1.00 writes per sync, written: 0.01 GB, 0.01 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     17.6      2.21              0.16        19    0.116       0      0       0.0       0.0#012  L6      1/0    8.83 MB   0.0      0.2     0.0      0.1       0.1      0.0       0.0   3.5     45.8     37.4      3.58              0.50        18    0.199     87K    10K       0.0       0.0#012 Sum      1/0    8.83 MB   0.0      0.2     0.0      0.1       0.2      0.0       0.0   4.5     28.3     29.9      5.79              0.66        37    0.156     87K    10K       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   4.5     78.0     80.8      0.51              0.16         8    0.064     23K   2557       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.2     0.0      0.1       0.1      0.0       0.0   0.0     45.8     37.4      3.58              0.50        18    0.199     87K    10K       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     17.6      2.20              0.16        18    0.122       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      4.6      0.01              0.00         1    0.011       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 3000.0 total, 600.0 interval#012Flush(GB): cumulative 0.038, interval 0.009#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.17 GB write, 0.06 MB/s write, 0.16 GB read, 0.05 MB/s read, 5.8 seconds#012Interval compaction: 0.04 GB write, 0.07 MB/s write, 0.04 GB read, 0.07 MB/s read, 0.5 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55daa55431f0#2 capacity: 304.00 MB usage: 20.20 MB table_size: 0 occupancy: 18446744073709551615 collections: 6 last_copies: 0 last_secs: 0.00016 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(1299,19.52 MB,6.42065%) FilterBlock(38,249.17 KB,0.0800434%) IndexBlock(38,449.00 KB,0.144236%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Oct  1 09:58:01 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1653: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:58:02 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 09:58:03 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1654: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:58:05 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1655: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:58:07 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1656: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:58:07 np0005464214 nova_compute[260022]: 2025-10-01 13:58:07.345 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 09:58:07 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 09:58:09 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1657: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:58:11 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1658: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:58:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:58:12.325 161890 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 09:58:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:58:12.325 161890 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 09:58:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:58:12.325 161890 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 09:58:12 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 09:58:13 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1659: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:58:15 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1660: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:58:17 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1661: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:58:17 np0005464214 podman[291110]: 2025-10-01 13:58:17.5524183 +0000 UTC m=+0.081515598 container health_status dfa509645741d50db256c392f2c7e50ef8a1653f296eb853aefbe3d2ba4746c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c, managed_by=edpm_ansible, container_name=ovn_metadata_agent, org.label-schema.build-date=20250923)
Oct  1 09:58:17 np0005464214 podman[291103]: 2025-10-01 13:58:17.56432313 +0000 UTC m=+0.100844534 container health_status a1dc94033b3cdaf0c1939938a446bc6911aa0e2bf0fa0c22c3e618390e8d96e1 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20250923, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3)
Oct  1 09:58:17 np0005464214 podman[291102]: 2025-10-01 13:58:17.574216574 +0000 UTC m=+0.127995859 container health_status 583cde33fa111e3f81ff9a7adf51b7bee43cd123d54d60383b2d474b3b5066ad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, managed_by=edpm_ansible, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=36bccb96575468ec919301205d8daa2c, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20250923, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 09:58:17 np0005464214 podman[291104]: 2025-10-01 13:58:17.582680934 +0000 UTC m=+0.123229837 container health_status c13c85560319b246b9406aed1b6b3015b2c424d3ffff37d0301b6634f5976b0d (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_id=iscsid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=iscsid, managed_by=edpm_ansible, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250923)
Oct  1 09:58:17 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 09:58:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 09:58:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 09:58:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 09:58:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 09:58:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 09:58:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 09:58:18 np0005464214 nova_compute[260022]: 2025-10-01 13:58:18.345 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 09:58:18 np0005464214 nova_compute[260022]: 2025-10-01 13:58:18.345 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  1 09:58:18 np0005464214 nova_compute[260022]: 2025-10-01 13:58:18.346 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 09:58:18 np0005464214 nova_compute[260022]: 2025-10-01 13:58:18.377 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 09:58:18 np0005464214 nova_compute[260022]: 2025-10-01 13:58:18.377 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 09:58:18 np0005464214 nova_compute[260022]: 2025-10-01 13:58:18.377 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 09:58:18 np0005464214 nova_compute[260022]: 2025-10-01 13:58:18.378 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  1 09:58:18 np0005464214 nova_compute[260022]: 2025-10-01 13:58:18.378 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 09:58:18 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  1 09:58:18 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1859132558' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  1 09:58:18 np0005464214 nova_compute[260022]: 2025-10-01 13:58:18.791 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.412s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 09:58:18 np0005464214 nova_compute[260022]: 2025-10-01 13:58:18.983 2 WARNING nova.virt.libvirt.driver [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  1 09:58:18 np0005464214 nova_compute[260022]: 2025-10-01 13:58:18.985 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5058MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  1 09:58:18 np0005464214 nova_compute[260022]: 2025-10-01 13:58:18.986 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 09:58:18 np0005464214 nova_compute[260022]: 2025-10-01 13:58:18.986 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 09:58:19 np0005464214 nova_compute[260022]: 2025-10-01 13:58:19.082 2 INFO nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Instance 3d91db2b-812b-47ea-a0f8-0384b4c68597 has allocations against this compute host but is not found in the database.#033[00m
Oct  1 09:58:19 np0005464214 nova_compute[260022]: 2025-10-01 13:58:19.097 2 INFO nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Instance 84424f30-80d3-425e-b60f-86809ad3c076 has allocations against this compute host but is not found in the database.#033[00m
Oct  1 09:58:19 np0005464214 nova_compute[260022]: 2025-10-01 13:58:19.098 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  1 09:58:19 np0005464214 nova_compute[260022]: 2025-10-01 13:58:19.098 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  1 09:58:19 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1662: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:58:19 np0005464214 nova_compute[260022]: 2025-10-01 13:58:19.151 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 09:58:19 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  1 09:58:19 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1055574916' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  1 09:58:19 np0005464214 nova_compute[260022]: 2025-10-01 13:58:19.571 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.420s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 09:58:19 np0005464214 nova_compute[260022]: 2025-10-01 13:58:19.579 2 DEBUG nova.compute.provider_tree [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Inventory has not changed in ProviderTree for provider: c1b9017d-7e6f-44ea-9ee2-bc19313d736f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  1 09:58:19 np0005464214 nova_compute[260022]: 2025-10-01 13:58:19.610 2 DEBUG nova.scheduler.client.report [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Inventory has not changed for provider c1b9017d-7e6f-44ea-9ee2-bc19313d736f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  1 09:58:19 np0005464214 nova_compute[260022]: 2025-10-01 13:58:19.613 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  1 09:58:19 np0005464214 nova_compute[260022]: 2025-10-01 13:58:19.614 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.627s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 09:58:20 np0005464214 nova_compute[260022]: 2025-10-01 13:58:20.615 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 09:58:21 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1663: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:58:22 np0005464214 nova_compute[260022]: 2025-10-01 13:58:22.341 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 09:58:22 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 09:58:23 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1664: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:58:23 np0005464214 nova_compute[260022]: 2025-10-01 13:58:23.345 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 09:58:23 np0005464214 nova_compute[260022]: 2025-10-01 13:58:23.345 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  1 09:58:23 np0005464214 nova_compute[260022]: 2025-10-01 13:58:23.346 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  1 09:58:23 np0005464214 nova_compute[260022]: 2025-10-01 13:58:23.365 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  1 09:58:24 np0005464214 nova_compute[260022]: 2025-10-01 13:58:24.344 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 09:58:25 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1665: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:58:25 np0005464214 nova_compute[260022]: 2025-10-01 13:58:25.345 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 09:58:26 np0005464214 nova_compute[260022]: 2025-10-01 13:58:26.344 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 09:58:27 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1666: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:58:27 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 09:58:27 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  1 09:58:27 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  1 09:58:27 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  1 09:58:27 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  1 09:58:27 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  1 09:58:27 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:58:27 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev 3bb15911-aab8-46cb-b4c9-4db6c8a9a10b does not exist
Oct  1 09:58:27 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev 9438680a-64ef-4095-bcca-8f95563050f3 does not exist
Oct  1 09:58:27 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev 6218f70a-1777-4de1-ada3-29b367836da1 does not exist
Oct  1 09:58:27 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  1 09:58:27 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  1 09:58:27 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  1 09:58:27 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  1 09:58:27 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  1 09:58:27 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  1 09:58:28 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  1 09:58:28 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:58:28 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  1 09:58:28 np0005464214 podman[291499]: 2025-10-01 13:58:28.693809624 +0000 UTC m=+0.071684165 container create 475a66fa5892fdf338cf50d7bd7f184fe9f19d85a1e2260814c12d6d2e9c910d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_lumiere, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 09:58:28 np0005464214 systemd[1]: Started libpod-conmon-475a66fa5892fdf338cf50d7bd7f184fe9f19d85a1e2260814c12d6d2e9c910d.scope.
Oct  1 09:58:28 np0005464214 podman[291499]: 2025-10-01 13:58:28.661772453 +0000 UTC m=+0.039646974 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 09:58:28 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:58:28 np0005464214 podman[291499]: 2025-10-01 13:58:28.817252887 +0000 UTC m=+0.195127468 container init 475a66fa5892fdf338cf50d7bd7f184fe9f19d85a1e2260814c12d6d2e9c910d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_lumiere, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Oct  1 09:58:28 np0005464214 podman[291499]: 2025-10-01 13:58:28.829250709 +0000 UTC m=+0.207125240 container start 475a66fa5892fdf338cf50d7bd7f184fe9f19d85a1e2260814c12d6d2e9c910d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_lumiere, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct  1 09:58:28 np0005464214 podman[291499]: 2025-10-01 13:58:28.833076362 +0000 UTC m=+0.210950913 container attach 475a66fa5892fdf338cf50d7bd7f184fe9f19d85a1e2260814c12d6d2e9c910d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_lumiere, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Oct  1 09:58:28 np0005464214 busy_lumiere[291515]: 167 167
Oct  1 09:58:28 np0005464214 systemd[1]: libpod-475a66fa5892fdf338cf50d7bd7f184fe9f19d85a1e2260814c12d6d2e9c910d.scope: Deactivated successfully.
Oct  1 09:58:28 np0005464214 podman[291499]: 2025-10-01 13:58:28.838790454 +0000 UTC m=+0.216664985 container died 475a66fa5892fdf338cf50d7bd7f184fe9f19d85a1e2260814c12d6d2e9c910d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_lumiere, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef)
Oct  1 09:58:28 np0005464214 systemd[1]: var-lib-containers-storage-overlay-682fac5bccea113c7f46852d65cd4eb2d1853ad6bd23db8e8fa89a9483e336bf-merged.mount: Deactivated successfully.
Oct  1 09:58:28 np0005464214 podman[291499]: 2025-10-01 13:58:28.891390639 +0000 UTC m=+0.269265170 container remove 475a66fa5892fdf338cf50d7bd7f184fe9f19d85a1e2260814c12d6d2e9c910d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_lumiere, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct  1 09:58:28 np0005464214 systemd[1]: libpod-conmon-475a66fa5892fdf338cf50d7bd7f184fe9f19d85a1e2260814c12d6d2e9c910d.scope: Deactivated successfully.
Oct  1 09:58:29 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1667: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:58:29 np0005464214 podman[291537]: 2025-10-01 13:58:29.150473644 +0000 UTC m=+0.075616280 container create 94e7d4a9041d1627d09dc727a8548bfda035ac75332a92ecc9ac41928a8e19a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_jones, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2)
Oct  1 09:58:29 np0005464214 systemd[1]: Started libpod-conmon-94e7d4a9041d1627d09dc727a8548bfda035ac75332a92ecc9ac41928a8e19a1.scope.
Oct  1 09:58:29 np0005464214 podman[291537]: 2025-10-01 13:58:29.119773796 +0000 UTC m=+0.044916482 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 09:58:29 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:58:29 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b0184a0dfa7e2b78d693753b394e83d5b536baba72c38addd7964a8525a6460/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 09:58:29 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b0184a0dfa7e2b78d693753b394e83d5b536baba72c38addd7964a8525a6460/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 09:58:29 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b0184a0dfa7e2b78d693753b394e83d5b536baba72c38addd7964a8525a6460/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 09:58:29 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b0184a0dfa7e2b78d693753b394e83d5b536baba72c38addd7964a8525a6460/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 09:58:29 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b0184a0dfa7e2b78d693753b394e83d5b536baba72c38addd7964a8525a6460/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  1 09:58:29 np0005464214 podman[291537]: 2025-10-01 13:58:29.278960318 +0000 UTC m=+0.204102954 container init 94e7d4a9041d1627d09dc727a8548bfda035ac75332a92ecc9ac41928a8e19a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_jones, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 09:58:29 np0005464214 podman[291537]: 2025-10-01 13:58:29.295808344 +0000 UTC m=+0.220950970 container start 94e7d4a9041d1627d09dc727a8548bfda035ac75332a92ecc9ac41928a8e19a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_jones, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 09:58:29 np0005464214 podman[291537]: 2025-10-01 13:58:29.301138024 +0000 UTC m=+0.226280700 container attach 94e7d4a9041d1627d09dc727a8548bfda035ac75332a92ecc9ac41928a8e19a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_jones, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 09:58:30 np0005464214 silly_jones[291554]: --> passed data devices: 0 physical, 3 LVM
Oct  1 09:58:30 np0005464214 silly_jones[291554]: --> relative data size: 1.0
Oct  1 09:58:30 np0005464214 silly_jones[291554]: --> All data devices are unavailable
Oct  1 09:58:30 np0005464214 systemd[1]: libpod-94e7d4a9041d1627d09dc727a8548bfda035ac75332a92ecc9ac41928a8e19a1.scope: Deactivated successfully.
Oct  1 09:58:30 np0005464214 systemd[1]: libpod-94e7d4a9041d1627d09dc727a8548bfda035ac75332a92ecc9ac41928a8e19a1.scope: Consumed 1.135s CPU time.
Oct  1 09:58:30 np0005464214 podman[291537]: 2025-10-01 13:58:30.467075063 +0000 UTC m=+1.392217659 container died 94e7d4a9041d1627d09dc727a8548bfda035ac75332a92ecc9ac41928a8e19a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_jones, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 09:58:30 np0005464214 systemd[1]: var-lib-containers-storage-overlay-0b0184a0dfa7e2b78d693753b394e83d5b536baba72c38addd7964a8525a6460-merged.mount: Deactivated successfully.
Oct  1 09:58:30 np0005464214 podman[291537]: 2025-10-01 13:58:30.530868026 +0000 UTC m=+1.456010642 container remove 94e7d4a9041d1627d09dc727a8548bfda035ac75332a92ecc9ac41928a8e19a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_jones, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct  1 09:58:30 np0005464214 systemd[1]: libpod-conmon-94e7d4a9041d1627d09dc727a8548bfda035ac75332a92ecc9ac41928a8e19a1.scope: Deactivated successfully.
Oct  1 09:58:31 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1668: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:58:31 np0005464214 podman[291739]: 2025-10-01 13:58:31.312851031 +0000 UTC m=+0.025932527 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 09:58:31 np0005464214 podman[291739]: 2025-10-01 13:58:31.441357466 +0000 UTC m=+0.154438902 container create 040ff8e2704be16e87ff38d2e54d10d1b3a0b6652c9d643b93c2205ff404af6a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_noyce, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 09:58:31 np0005464214 systemd[1]: Started libpod-conmon-040ff8e2704be16e87ff38d2e54d10d1b3a0b6652c9d643b93c2205ff404af6a.scope.
Oct  1 09:58:31 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:58:31 np0005464214 podman[291739]: 2025-10-01 13:58:31.651504001 +0000 UTC m=+0.364585477 container init 040ff8e2704be16e87ff38d2e54d10d1b3a0b6652c9d643b93c2205ff404af6a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_noyce, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 09:58:31 np0005464214 podman[291739]: 2025-10-01 13:58:31.663650209 +0000 UTC m=+0.376731655 container start 040ff8e2704be16e87ff38d2e54d10d1b3a0b6652c9d643b93c2205ff404af6a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_noyce, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 09:58:31 np0005464214 exciting_noyce[291755]: 167 167
Oct  1 09:58:31 np0005464214 systemd[1]: libpod-040ff8e2704be16e87ff38d2e54d10d1b3a0b6652c9d643b93c2205ff404af6a.scope: Deactivated successfully.
Oct  1 09:58:31 np0005464214 podman[291739]: 2025-10-01 13:58:31.718322 +0000 UTC m=+0.431403496 container attach 040ff8e2704be16e87ff38d2e54d10d1b3a0b6652c9d643b93c2205ff404af6a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_noyce, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 09:58:31 np0005464214 podman[291739]: 2025-10-01 13:58:31.719463907 +0000 UTC m=+0.432545373 container died 040ff8e2704be16e87ff38d2e54d10d1b3a0b6652c9d643b93c2205ff404af6a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_noyce, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 09:58:31 np0005464214 systemd[1]: var-lib-containers-storage-overlay-8eb4abcab61d3af1dd9f7f0449fc6b09f5e40007dc2a7a01ca4c839f817ab3df-merged.mount: Deactivated successfully.
Oct  1 09:58:31 np0005464214 podman[291739]: 2025-10-01 13:58:31.973624815 +0000 UTC m=+0.686706261 container remove 040ff8e2704be16e87ff38d2e54d10d1b3a0b6652c9d643b93c2205ff404af6a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_noyce, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS)
Oct  1 09:58:31 np0005464214 systemd[1]: libpod-conmon-040ff8e2704be16e87ff38d2e54d10d1b3a0b6652c9d643b93c2205ff404af6a.scope: Deactivated successfully.
Oct  1 09:58:32 np0005464214 podman[291779]: 2025-10-01 13:58:32.258958815 +0000 UTC m=+0.124149526 container create c4d44415f71b4a475783bd0e4f9751645526bb559115cfe7edbdfef78969c5bf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_shaw, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 09:58:32 np0005464214 podman[291779]: 2025-10-01 13:58:32.174154974 +0000 UTC m=+0.039345755 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 09:58:32 np0005464214 systemd[1]: Started libpod-conmon-c4d44415f71b4a475783bd0e4f9751645526bb559115cfe7edbdfef78969c5bf.scope.
Oct  1 09:58:32 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:58:32 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a58d39e5c3387de8b35931b899ef238380eb8fbb87580cff7143c9a00974e4fb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 09:58:32 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a58d39e5c3387de8b35931b899ef238380eb8fbb87580cff7143c9a00974e4fb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 09:58:32 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a58d39e5c3387de8b35931b899ef238380eb8fbb87580cff7143c9a00974e4fb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 09:58:32 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a58d39e5c3387de8b35931b899ef238380eb8fbb87580cff7143c9a00974e4fb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 09:58:32 np0005464214 podman[291779]: 2025-10-01 13:58:32.404060689 +0000 UTC m=+0.269251450 container init c4d44415f71b4a475783bd0e4f9751645526bb559115cfe7edbdfef78969c5bf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_shaw, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 09:58:32 np0005464214 podman[291779]: 2025-10-01 13:58:32.419240613 +0000 UTC m=+0.284431314 container start c4d44415f71b4a475783bd0e4f9751645526bb559115cfe7edbdfef78969c5bf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_shaw, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 09:58:32 np0005464214 podman[291779]: 2025-10-01 13:58:32.423090715 +0000 UTC m=+0.288281426 container attach c4d44415f71b4a475783bd0e4f9751645526bb559115cfe7edbdfef78969c5bf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_shaw, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 09:58:32 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 09:58:33 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1669: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:58:33 np0005464214 sweet_shaw[291795]: {
Oct  1 09:58:33 np0005464214 sweet_shaw[291795]:    "0": [
Oct  1 09:58:33 np0005464214 sweet_shaw[291795]:        {
Oct  1 09:58:33 np0005464214 sweet_shaw[291795]:            "devices": [
Oct  1 09:58:33 np0005464214 sweet_shaw[291795]:                "/dev/loop3"
Oct  1 09:58:33 np0005464214 sweet_shaw[291795]:            ],
Oct  1 09:58:33 np0005464214 sweet_shaw[291795]:            "lv_name": "ceph_lv0",
Oct  1 09:58:33 np0005464214 sweet_shaw[291795]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  1 09:58:33 np0005464214 sweet_shaw[291795]:            "lv_size": "21470642176",
Oct  1 09:58:33 np0005464214 sweet_shaw[291795]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 09:58:33 np0005464214 sweet_shaw[291795]:            "lv_uuid": "wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS",
Oct  1 09:58:33 np0005464214 sweet_shaw[291795]:            "name": "ceph_lv0",
Oct  1 09:58:33 np0005464214 sweet_shaw[291795]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  1 09:58:33 np0005464214 sweet_shaw[291795]:            "tags": {
Oct  1 09:58:33 np0005464214 sweet_shaw[291795]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  1 09:58:33 np0005464214 sweet_shaw[291795]:                "ceph.block_uuid": "wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS",
Oct  1 09:58:33 np0005464214 sweet_shaw[291795]:                "ceph.cephx_lockbox_secret": "",
Oct  1 09:58:33 np0005464214 sweet_shaw[291795]:                "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 09:58:33 np0005464214 sweet_shaw[291795]:                "ceph.cluster_name": "ceph",
Oct  1 09:58:33 np0005464214 sweet_shaw[291795]:                "ceph.crush_device_class": "",
Oct  1 09:58:33 np0005464214 sweet_shaw[291795]:                "ceph.encrypted": "0",
Oct  1 09:58:33 np0005464214 sweet_shaw[291795]:                "ceph.osd_fsid": "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982",
Oct  1 09:58:33 np0005464214 sweet_shaw[291795]:                "ceph.osd_id": "0",
Oct  1 09:58:33 np0005464214 sweet_shaw[291795]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 09:58:33 np0005464214 sweet_shaw[291795]:                "ceph.type": "block",
Oct  1 09:58:33 np0005464214 sweet_shaw[291795]:                "ceph.vdo": "0"
Oct  1 09:58:33 np0005464214 sweet_shaw[291795]:            },
Oct  1 09:58:33 np0005464214 sweet_shaw[291795]:            "type": "block",
Oct  1 09:58:33 np0005464214 sweet_shaw[291795]:            "vg_name": "ceph_vg0"
Oct  1 09:58:33 np0005464214 sweet_shaw[291795]:        }
Oct  1 09:58:33 np0005464214 sweet_shaw[291795]:    ],
Oct  1 09:58:33 np0005464214 sweet_shaw[291795]:    "1": [
Oct  1 09:58:33 np0005464214 sweet_shaw[291795]:        {
Oct  1 09:58:33 np0005464214 sweet_shaw[291795]:            "devices": [
Oct  1 09:58:33 np0005464214 sweet_shaw[291795]:                "/dev/loop4"
Oct  1 09:58:33 np0005464214 sweet_shaw[291795]:            ],
Oct  1 09:58:33 np0005464214 sweet_shaw[291795]:            "lv_name": "ceph_lv1",
Oct  1 09:58:33 np0005464214 sweet_shaw[291795]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  1 09:58:33 np0005464214 sweet_shaw[291795]:            "lv_size": "21470642176",
Oct  1 09:58:33 np0005464214 sweet_shaw[291795]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f5852bc7-e830-489a-b8a9-42dfbbe71426,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 09:58:33 np0005464214 sweet_shaw[291795]:            "lv_uuid": "BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY",
Oct  1 09:58:33 np0005464214 sweet_shaw[291795]:            "name": "ceph_lv1",
Oct  1 09:58:33 np0005464214 sweet_shaw[291795]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  1 09:58:33 np0005464214 sweet_shaw[291795]:            "tags": {
Oct  1 09:58:33 np0005464214 sweet_shaw[291795]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  1 09:58:33 np0005464214 sweet_shaw[291795]:                "ceph.block_uuid": "BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY",
Oct  1 09:58:33 np0005464214 sweet_shaw[291795]:                "ceph.cephx_lockbox_secret": "",
Oct  1 09:58:33 np0005464214 sweet_shaw[291795]:                "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 09:58:33 np0005464214 sweet_shaw[291795]:                "ceph.cluster_name": "ceph",
Oct  1 09:58:33 np0005464214 sweet_shaw[291795]:                "ceph.crush_device_class": "",
Oct  1 09:58:33 np0005464214 sweet_shaw[291795]:                "ceph.encrypted": "0",
Oct  1 09:58:33 np0005464214 sweet_shaw[291795]:                "ceph.osd_fsid": "f5852bc7-e830-489a-b8a9-42dfbbe71426",
Oct  1 09:58:33 np0005464214 sweet_shaw[291795]:                "ceph.osd_id": "1",
Oct  1 09:58:33 np0005464214 sweet_shaw[291795]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 09:58:33 np0005464214 sweet_shaw[291795]:                "ceph.type": "block",
Oct  1 09:58:33 np0005464214 sweet_shaw[291795]:                "ceph.vdo": "0"
Oct  1 09:58:33 np0005464214 sweet_shaw[291795]:            },
Oct  1 09:58:33 np0005464214 sweet_shaw[291795]:            "type": "block",
Oct  1 09:58:33 np0005464214 sweet_shaw[291795]:            "vg_name": "ceph_vg1"
Oct  1 09:58:33 np0005464214 sweet_shaw[291795]:        }
Oct  1 09:58:33 np0005464214 sweet_shaw[291795]:    ],
Oct  1 09:58:33 np0005464214 sweet_shaw[291795]:    "2": [
Oct  1 09:58:33 np0005464214 sweet_shaw[291795]:        {
Oct  1 09:58:33 np0005464214 sweet_shaw[291795]:            "devices": [
Oct  1 09:58:33 np0005464214 sweet_shaw[291795]:                "/dev/loop5"
Oct  1 09:58:33 np0005464214 sweet_shaw[291795]:            ],
Oct  1 09:58:33 np0005464214 sweet_shaw[291795]:            "lv_name": "ceph_lv2",
Oct  1 09:58:33 np0005464214 sweet_shaw[291795]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  1 09:58:33 np0005464214 sweet_shaw[291795]:            "lv_size": "21470642176",
Oct  1 09:58:33 np0005464214 sweet_shaw[291795]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c4c937e2-a8a8-47c3-af37-fdedb6fff1f9,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 09:58:33 np0005464214 sweet_shaw[291795]:            "lv_uuid": "mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK",
Oct  1 09:58:33 np0005464214 sweet_shaw[291795]:            "name": "ceph_lv2",
Oct  1 09:58:33 np0005464214 sweet_shaw[291795]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  1 09:58:33 np0005464214 sweet_shaw[291795]:            "tags": {
Oct  1 09:58:33 np0005464214 sweet_shaw[291795]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  1 09:58:33 np0005464214 sweet_shaw[291795]:                "ceph.block_uuid": "mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK",
Oct  1 09:58:33 np0005464214 sweet_shaw[291795]:                "ceph.cephx_lockbox_secret": "",
Oct  1 09:58:33 np0005464214 sweet_shaw[291795]:                "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 09:58:33 np0005464214 sweet_shaw[291795]:                "ceph.cluster_name": "ceph",
Oct  1 09:58:33 np0005464214 sweet_shaw[291795]:                "ceph.crush_device_class": "",
Oct  1 09:58:33 np0005464214 sweet_shaw[291795]:                "ceph.encrypted": "0",
Oct  1 09:58:33 np0005464214 sweet_shaw[291795]:                "ceph.osd_fsid": "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9",
Oct  1 09:58:33 np0005464214 sweet_shaw[291795]:                "ceph.osd_id": "2",
Oct  1 09:58:33 np0005464214 sweet_shaw[291795]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 09:58:33 np0005464214 sweet_shaw[291795]:                "ceph.type": "block",
Oct  1 09:58:33 np0005464214 sweet_shaw[291795]:                "ceph.vdo": "0"
Oct  1 09:58:33 np0005464214 sweet_shaw[291795]:            },
Oct  1 09:58:33 np0005464214 sweet_shaw[291795]:            "type": "block",
Oct  1 09:58:33 np0005464214 sweet_shaw[291795]:            "vg_name": "ceph_vg2"
Oct  1 09:58:33 np0005464214 sweet_shaw[291795]:        }
Oct  1 09:58:33 np0005464214 sweet_shaw[291795]:    ]
Oct  1 09:58:33 np0005464214 sweet_shaw[291795]: }
Oct  1 09:58:33 np0005464214 systemd[1]: libpod-c4d44415f71b4a475783bd0e4f9751645526bb559115cfe7edbdfef78969c5bf.scope: Deactivated successfully.
Oct  1 09:58:33 np0005464214 podman[291779]: 2025-10-01 13:58:33.227003039 +0000 UTC m=+1.092193770 container died c4d44415f71b4a475783bd0e4f9751645526bb559115cfe7edbdfef78969c5bf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_shaw, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 09:58:33 np0005464214 systemd[1]: var-lib-containers-storage-overlay-a58d39e5c3387de8b35931b899ef238380eb8fbb87580cff7143c9a00974e4fb-merged.mount: Deactivated successfully.
Oct  1 09:58:33 np0005464214 podman[291779]: 2025-10-01 13:58:33.278863352 +0000 UTC m=+1.144054043 container remove c4d44415f71b4a475783bd0e4f9751645526bb559115cfe7edbdfef78969c5bf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_shaw, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct  1 09:58:33 np0005464214 systemd[1]: libpod-conmon-c4d44415f71b4a475783bd0e4f9751645526bb559115cfe7edbdfef78969c5bf.scope: Deactivated successfully.
Oct  1 09:58:33 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:58:33.320 161890 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:88:2a:05 2001:db8:0:1:f816:3eff:fe88:2a05'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:1:f816:3eff:fe88:2a05/64', 'neutron:device_id': 'ovnmeta-67be5091-48d6-486d-85f6-aba0fd30503c', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-67be5091-48d6-486d-85f6-aba0fd30503c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'caaa21e6a33148468bcc047eb7b8901f', 'neutron:revision_number': '30', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f5725087-c524-4c7a-9e75-2b25ff830453, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=9037b2e3-f1e6-4cf5-be59-84263673dd05) old=Port_Binding(mac=['fa:16:3e:88:2a:05 2001:db8::f816:3eff:fe88:2a05'], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fe88:2a05/64', 'neutron:device_id': 'ovnmeta-67be5091-48d6-486d-85f6-aba0fd30503c', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-67be5091-48d6-486d-85f6-aba0fd30503c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'caaa21e6a33148468bcc047eb7b8901f', 'neutron:revision_number': '28', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  1 09:58:33 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:58:33.322 161890 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 9037b2e3-f1e6-4cf5-be59-84263673dd05 in datapath 67be5091-48d6-486d-85f6-aba0fd30503c updated#033[00m
Oct  1 09:58:33 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:58:33.323 161890 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 67be5091-48d6-486d-85f6-aba0fd30503c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  1 09:58:33 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:58:33.324 291014 DEBUG oslo.privsep.daemon [-] privsep: reply[930f0193-2a13-4674-b7e4-96bd426cfad1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 09:58:33 np0005464214 nova_compute[260022]: 2025-10-01 13:58:33.341 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 09:58:34 np0005464214 podman[291955]: 2025-10-01 13:58:34.086790583 +0000 UTC m=+0.067140870 container create cd44832708f1aad82ecec95ce9c0c779d0fff8c1f28c4606d86a1a6f839c531b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_ishizaka, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 09:58:34 np0005464214 systemd[1]: Started libpod-conmon-cd44832708f1aad82ecec95ce9c0c779d0fff8c1f28c4606d86a1a6f839c531b.scope.
Oct  1 09:58:34 np0005464214 podman[291955]: 2025-10-01 13:58:34.057535271 +0000 UTC m=+0.037885608 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 09:58:34 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:58:34 np0005464214 podman[291955]: 2025-10-01 13:58:34.17803576 +0000 UTC m=+0.158386087 container init cd44832708f1aad82ecec95ce9c0c779d0fff8c1f28c4606d86a1a6f839c531b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_ishizaka, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True)
Oct  1 09:58:34 np0005464214 podman[291955]: 2025-10-01 13:58:34.186482229 +0000 UTC m=+0.166832476 container start cd44832708f1aad82ecec95ce9c0c779d0fff8c1f28c4606d86a1a6f839c531b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_ishizaka, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Oct  1 09:58:34 np0005464214 podman[291955]: 2025-10-01 13:58:34.190823858 +0000 UTC m=+0.171174195 container attach cd44832708f1aad82ecec95ce9c0c779d0fff8c1f28c4606d86a1a6f839c531b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_ishizaka, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 09:58:34 np0005464214 musing_ishizaka[291972]: 167 167
Oct  1 09:58:34 np0005464214 systemd[1]: libpod-cd44832708f1aad82ecec95ce9c0c779d0fff8c1f28c4606d86a1a6f839c531b.scope: Deactivated successfully.
Oct  1 09:58:34 np0005464214 podman[291955]: 2025-10-01 13:58:34.194187196 +0000 UTC m=+0.174537483 container died cd44832708f1aad82ecec95ce9c0c779d0fff8c1f28c4606d86a1a6f839c531b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_ishizaka, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct  1 09:58:34 np0005464214 systemd[1]: var-lib-containers-storage-overlay-dc212568b06175986d5b012a4fd8b5f3f6b395a28753ad35fe2fc5927e59944a-merged.mount: Deactivated successfully.
Oct  1 09:58:34 np0005464214 podman[291955]: 2025-10-01 13:58:34.248460844 +0000 UTC m=+0.228811121 container remove cd44832708f1aad82ecec95ce9c0c779d0fff8c1f28c4606d86a1a6f839c531b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_ishizaka, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct  1 09:58:34 np0005464214 systemd[1]: libpod-conmon-cd44832708f1aad82ecec95ce9c0c779d0fff8c1f28c4606d86a1a6f839c531b.scope: Deactivated successfully.
Oct  1 09:58:34 np0005464214 podman[291996]: 2025-10-01 13:58:34.516248607 +0000 UTC m=+0.077450109 container create 72149a6b4738823c85dde9d7d0388f5c86065a903afb40bbd8c8a9cea1945135 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_tesla, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 09:58:34 np0005464214 systemd[1]: Started libpod-conmon-72149a6b4738823c85dde9d7d0388f5c86065a903afb40bbd8c8a9cea1945135.scope.
Oct  1 09:58:34 np0005464214 podman[291996]: 2025-10-01 13:58:34.480715864 +0000 UTC m=+0.041917416 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 09:58:34 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:58:34 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66cb6796dd3a20a92a46b643fc65b081e88eb98b29434969fff4d47909773dd1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 09:58:34 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66cb6796dd3a20a92a46b643fc65b081e88eb98b29434969fff4d47909773dd1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 09:58:34 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66cb6796dd3a20a92a46b643fc65b081e88eb98b29434969fff4d47909773dd1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 09:58:34 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66cb6796dd3a20a92a46b643fc65b081e88eb98b29434969fff4d47909773dd1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 09:58:34 np0005464214 podman[291996]: 2025-10-01 13:58:34.61521622 +0000 UTC m=+0.176417732 container init 72149a6b4738823c85dde9d7d0388f5c86065a903afb40bbd8c8a9cea1945135 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_tesla, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 09:58:34 np0005464214 podman[291996]: 2025-10-01 13:58:34.629910368 +0000 UTC m=+0.191111880 container start 72149a6b4738823c85dde9d7d0388f5c86065a903afb40bbd8c8a9cea1945135 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_tesla, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Oct  1 09:58:34 np0005464214 podman[291996]: 2025-10-01 13:58:34.634474953 +0000 UTC m=+0.195676515 container attach 72149a6b4738823c85dde9d7d0388f5c86065a903afb40bbd8c8a9cea1945135 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_tesla, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 09:58:35 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1670: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:58:35 np0005464214 sharp_tesla[292012]: {
Oct  1 09:58:35 np0005464214 sharp_tesla[292012]:    "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982": {
Oct  1 09:58:35 np0005464214 sharp_tesla[292012]:        "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 09:58:35 np0005464214 sharp_tesla[292012]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  1 09:58:35 np0005464214 sharp_tesla[292012]:        "osd_id": 0,
Oct  1 09:58:35 np0005464214 sharp_tesla[292012]:        "osd_uuid": "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982",
Oct  1 09:58:35 np0005464214 sharp_tesla[292012]:        "type": "bluestore"
Oct  1 09:58:35 np0005464214 sharp_tesla[292012]:    },
Oct  1 09:58:35 np0005464214 sharp_tesla[292012]:    "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9": {
Oct  1 09:58:35 np0005464214 sharp_tesla[292012]:        "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 09:58:35 np0005464214 sharp_tesla[292012]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  1 09:58:35 np0005464214 sharp_tesla[292012]:        "osd_id": 2,
Oct  1 09:58:35 np0005464214 sharp_tesla[292012]:        "osd_uuid": "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9",
Oct  1 09:58:35 np0005464214 sharp_tesla[292012]:        "type": "bluestore"
Oct  1 09:58:35 np0005464214 sharp_tesla[292012]:    },
Oct  1 09:58:35 np0005464214 sharp_tesla[292012]:    "f5852bc7-e830-489a-b8a9-42dfbbe71426": {
Oct  1 09:58:35 np0005464214 sharp_tesla[292012]:        "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 09:58:35 np0005464214 sharp_tesla[292012]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  1 09:58:35 np0005464214 sharp_tesla[292012]:        "osd_id": 1,
Oct  1 09:58:35 np0005464214 sharp_tesla[292012]:        "osd_uuid": "f5852bc7-e830-489a-b8a9-42dfbbe71426",
Oct  1 09:58:35 np0005464214 sharp_tesla[292012]:        "type": "bluestore"
Oct  1 09:58:35 np0005464214 sharp_tesla[292012]:    }
Oct  1 09:58:35 np0005464214 sharp_tesla[292012]: }
Oct  1 09:58:35 np0005464214 systemd[1]: libpod-72149a6b4738823c85dde9d7d0388f5c86065a903afb40bbd8c8a9cea1945135.scope: Deactivated successfully.
Oct  1 09:58:35 np0005464214 systemd[1]: libpod-72149a6b4738823c85dde9d7d0388f5c86065a903afb40bbd8c8a9cea1945135.scope: Consumed 1.029s CPU time.
Oct  1 09:58:35 np0005464214 podman[292045]: 2025-10-01 13:58:35.687436852 +0000 UTC m=+0.023788548 container died 72149a6b4738823c85dde9d7d0388f5c86065a903afb40bbd8c8a9cea1945135 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_tesla, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Oct  1 09:58:35 np0005464214 systemd[1]: var-lib-containers-storage-overlay-66cb6796dd3a20a92a46b643fc65b081e88eb98b29434969fff4d47909773dd1-merged.mount: Deactivated successfully.
Oct  1 09:58:35 np0005464214 podman[292045]: 2025-10-01 13:58:35.757772413 +0000 UTC m=+0.094124079 container remove 72149a6b4738823c85dde9d7d0388f5c86065a903afb40bbd8c8a9cea1945135 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_tesla, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Oct  1 09:58:35 np0005464214 systemd[1]: libpod-conmon-72149a6b4738823c85dde9d7d0388f5c86065a903afb40bbd8c8a9cea1945135.scope: Deactivated successfully.
Oct  1 09:58:35 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  1 09:58:35 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:58:35 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  1 09:58:35 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:58:35 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev 8a15530c-400d-4007-a8a8-c863b935dfad does not exist
Oct  1 09:58:35 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev d6cf6ac1-d7ab-4eb0-aa65-e79d885c4899 does not exist
Oct  1 09:58:36 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:58:36 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:58:37 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1671: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:58:37 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 09:58:39 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1672: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:58:41 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1673: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:58:42 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 09:58:43 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1674: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:58:44 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:58:44.926 161890 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=20, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'd2:60:33', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '86:05:a5:2a:6f:f1'}, ipsec=False) old=SB_Global(nb_cfg=19) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  1 09:58:44 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:58:44.932 161890 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  1 09:58:45 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1675: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:58:47 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1676: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:58:47 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 09:58:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 09:58:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 09:58:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 09:58:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 09:58:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 09:58:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 09:58:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] Optimize plan auto_2025-10-01_13:58:47
Oct  1 09:58:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  1 09:58:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] do_upmap
Oct  1 09:58:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] pools ['volumes', '.rgw.root', 'backups', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', '.mgr', 'images', 'vms', 'default.rgw.meta', 'default.rgw.control', 'default.rgw.log']
Oct  1 09:58:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] prepared 0/10 changes
Oct  1 09:58:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  1 09:58:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  1 09:58:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  1 09:58:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  1 09:58:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  1 09:58:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  1 09:58:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  1 09:58:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  1 09:58:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  1 09:58:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  1 09:58:48 np0005464214 podman[292111]: 2025-10-01 13:58:48.571724056 +0000 UTC m=+0.116826793 container health_status a1dc94033b3cdaf0c1939938a446bc6911aa0e2bf0fa0c22c3e618390e8d96e1 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250923, config_id=multipathd, container_name=multipathd)
Oct  1 09:58:48 np0005464214 podman[292112]: 2025-10-01 13:58:48.580408173 +0000 UTC m=+0.120735998 container health_status c13c85560319b246b9406aed1b6b3015b2c424d3ffff37d0301b6634f5976b0d (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=iscsid, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20250923, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 09:58:48 np0005464214 podman[292113]: 2025-10-01 13:58:48.582398156 +0000 UTC m=+0.116989568 container health_status dfa509645741d50db256c392f2c7e50ef8a1653f296eb853aefbe3d2ba4746c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.build-date=20250923, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct  1 09:58:48 np0005464214 podman[292110]: 2025-10-01 13:58:48.585172835 +0000 UTC m=+0.137256485 container health_status 583cde33fa111e3f81ff9a7adf51b7bee43cd123d54d60383b2d474b3b5066ad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250923, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true)
Oct  1 09:58:49 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1677: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:58:51 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1678: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:58:52 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 09:58:53 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1679: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:58:53 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:58:53.935 161890 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=7280030e-2ba6-406c-9fae-f8284a927c47, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '20'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  1 09:58:55 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1680: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:58:55 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  1 09:58:55 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1288620281' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  1 09:58:55 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  1 09:58:55 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1288620281' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  1 09:58:57 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1681: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:58:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] _maybe_adjust
Oct  1 09:58:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:58:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  1 09:58:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:58:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 09:58:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:58:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 09:58:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:58:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 09:58:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:58:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Oct  1 09:58:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:58:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct  1 09:58:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:58:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 09:58:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:58:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  1 09:58:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:58:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  1 09:58:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:58:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 09:58:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:58:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  1 09:58:57 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 09:58:59 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1682: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:59:01 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1683: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:59:02 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 09:59:03 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1684: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:59:05 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1685: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:59:07 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1686: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:59:07 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 09:59:09 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1687: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:59:09 np0005464214 nova_compute[260022]: 2025-10-01 13:59:09.344 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 09:59:11 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1688: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:59:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:59:12.326 161890 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 09:59:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:59:12.326 161890 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 09:59:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:59:12.326 161890 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 09:59:12 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 09:59:13 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1689: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:59:15 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1690: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:59:17 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1691: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:59:17 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 09:59:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 09:59:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 09:59:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 09:59:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 09:59:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 09:59:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 09:59:18 np0005464214 nova_compute[260022]: 2025-10-01 13:59:18.345 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 09:59:18 np0005464214 nova_compute[260022]: 2025-10-01 13:59:18.345 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  1 09:59:18 np0005464214 nova_compute[260022]: 2025-10-01 13:59:18.345 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 09:59:18 np0005464214 nova_compute[260022]: 2025-10-01 13:59:18.376 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 09:59:18 np0005464214 nova_compute[260022]: 2025-10-01 13:59:18.377 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 09:59:18 np0005464214 nova_compute[260022]: 2025-10-01 13:59:18.377 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 09:59:18 np0005464214 nova_compute[260022]: 2025-10-01 13:59:18.377 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  1 09:59:18 np0005464214 nova_compute[260022]: 2025-10-01 13:59:18.378 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 09:59:18 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  1 09:59:18 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3867861185' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  1 09:59:18 np0005464214 nova_compute[260022]: 2025-10-01 13:59:18.837 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 09:59:19 np0005464214 nova_compute[260022]: 2025-10-01 13:59:19.037 2 WARNING nova.virt.libvirt.driver [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  1 09:59:19 np0005464214 nova_compute[260022]: 2025-10-01 13:59:19.039 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5069MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  1 09:59:19 np0005464214 nova_compute[260022]: 2025-10-01 13:59:19.039 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 09:59:19 np0005464214 nova_compute[260022]: 2025-10-01 13:59:19.039 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 09:59:19 np0005464214 nova_compute[260022]: 2025-10-01 13:59:19.114 2 INFO nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Instance 3d91db2b-812b-47ea-a0f8-0384b4c68597 has allocations against this compute host but is not found in the database.#033[00m
Oct  1 09:59:19 np0005464214 nova_compute[260022]: 2025-10-01 13:59:19.130 2 INFO nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Instance 84424f30-80d3-425e-b60f-86809ad3c076 has allocations against this compute host but is not found in the database.#033[00m
Oct  1 09:59:19 np0005464214 nova_compute[260022]: 2025-10-01 13:59:19.131 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  1 09:59:19 np0005464214 nova_compute[260022]: 2025-10-01 13:59:19.131 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  1 09:59:19 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1692: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:59:19 np0005464214 nova_compute[260022]: 2025-10-01 13:59:19.190 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 09:59:19 np0005464214 podman[292233]: 2025-10-01 13:59:19.542367428 +0000 UTC m=+0.085760784 container health_status c13c85560319b246b9406aed1b6b3015b2c424d3ffff37d0301b6634f5976b0d (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250923, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true)
Oct  1 09:59:19 np0005464214 podman[292232]: 2025-10-01 13:59:19.5518826 +0000 UTC m=+0.095312547 container health_status a1dc94033b3cdaf0c1939938a446bc6911aa0e2bf0fa0c22c3e618390e8d96e1 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250923, org.label-schema.schema-version=1.0)
Oct  1 09:59:19 np0005464214 podman[292231]: 2025-10-01 13:59:19.559001926 +0000 UTC m=+0.110049115 container health_status 583cde33fa111e3f81ff9a7adf51b7bee43cd123d54d60383b2d474b3b5066ad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20250923, tcib_build_tag=36bccb96575468ec919301205d8daa2c, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller)
Oct  1 09:59:19 np0005464214 podman[292234]: 2025-10-01 13:59:19.569659215 +0000 UTC m=+0.097579890 container health_status dfa509645741d50db256c392f2c7e50ef8a1653f296eb853aefbe3d2ba4746c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, 
config_id=ovn_metadata_agent, org.label-schema.build-date=20250923, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  1 09:59:19 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  1 09:59:19 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/633742378' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  1 09:59:19 np0005464214 nova_compute[260022]: 2025-10-01 13:59:19.761 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.571s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 09:59:19 np0005464214 nova_compute[260022]: 2025-10-01 13:59:19.769 2 DEBUG nova.compute.provider_tree [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Inventory has not changed in ProviderTree for provider: c1b9017d-7e6f-44ea-9ee2-bc19313d736f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  1 09:59:19 np0005464214 nova_compute[260022]: 2025-10-01 13:59:19.796 2 DEBUG nova.scheduler.client.report [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Inventory has not changed for provider c1b9017d-7e6f-44ea-9ee2-bc19313d736f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  1 09:59:19 np0005464214 nova_compute[260022]: 2025-10-01 13:59:19.799 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  1 09:59:19 np0005464214 nova_compute[260022]: 2025-10-01 13:59:19.799 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.760s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 09:59:21 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1693: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:59:21 np0005464214 nova_compute[260022]: 2025-10-01 13:59:21.800 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 09:59:22 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 09:59:23 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1694: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:59:24 np0005464214 nova_compute[260022]: 2025-10-01 13:59:24.340 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 09:59:24 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:59:24.835 161890 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e3:0b:33 10.100.0.18 10.100.0.2'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.18/28 10.100.0.2/28', 'neutron:device_id': 'ovnmeta-50a4e638-13aa-4e3b-9865-06961dbe3cce', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-50a4e638-13aa-4e3b-9865-06961dbe3cce', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6b660fd21a334b23965979eb62726b37', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=99bf9a35-dc20-46cb-b2ee-481ce616830d, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=2b082376-98fa-47be-a696-7bcedb47b129) old=Port_Binding(mac=['fa:16:3e:e3:0b:33 10.100.0.2'], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-50a4e638-13aa-4e3b-9865-06961dbe3cce', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-50a4e638-13aa-4e3b-9865-06961dbe3cce', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6b660fd21a334b23965979eb62726b37', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches 
/usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  1 09:59:24 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:59:24.837 161890 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 2b082376-98fa-47be-a696-7bcedb47b129 in datapath 50a4e638-13aa-4e3b-9865-06961dbe3cce updated#033[00m
Oct  1 09:59:24 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:59:24.838 161890 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 50a4e638-13aa-4e3b-9865-06961dbe3cce, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  1 09:59:24 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:59:24.839 291014 DEBUG oslo.privsep.daemon [-] privsep: reply[b9362add-a25b-41ac-a3ca-c76001360d55]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 09:59:25 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1695: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:59:25 np0005464214 nova_compute[260022]: 2025-10-01 13:59:25.345 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 09:59:25 np0005464214 nova_compute[260022]: 2025-10-01 13:59:25.346 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  1 09:59:25 np0005464214 nova_compute[260022]: 2025-10-01 13:59:25.347 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  1 09:59:25 np0005464214 nova_compute[260022]: 2025-10-01 13:59:25.362 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  1 09:59:25 np0005464214 nova_compute[260022]: 2025-10-01 13:59:25.362 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 09:59:26 np0005464214 nova_compute[260022]: 2025-10-01 13:59:26.344 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 09:59:27 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1696: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:59:27 np0005464214 nova_compute[260022]: 2025-10-01 13:59:27.345 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 09:59:27 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 09:59:27 np0005464214 ceph-mon[74802]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #72. Immutable memtables: 0.
Oct  1 09:59:27 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:59:27.718124) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  1 09:59:27 np0005464214 ceph-mon[74802]: rocksdb: [db/flush_job.cc:856] [default] [JOB 39] Flushing memtable with next log file: 72
Oct  1 09:59:27 np0005464214 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759327167718239, "job": 39, "event": "flush_started", "num_memtables": 1, "num_entries": 1500, "num_deletes": 256, "total_data_size": 2412740, "memory_usage": 2453504, "flush_reason": "Manual Compaction"}
Oct  1 09:59:27 np0005464214 ceph-mon[74802]: rocksdb: [db/flush_job.cc:885] [default] [JOB 39] Level-0 flush table #73: started
Oct  1 09:59:27 np0005464214 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759327167733778, "cf_name": "default", "job": 39, "event": "table_file_creation", "file_number": 73, "file_size": 2379310, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 32566, "largest_seqno": 34065, "table_properties": {"data_size": 2372191, "index_size": 4190, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1861, "raw_key_size": 14217, "raw_average_key_size": 19, "raw_value_size": 2358102, "raw_average_value_size": 3248, "num_data_blocks": 187, "num_entries": 726, "num_filter_entries": 726, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759327005, "oldest_key_time": 1759327005, "file_creation_time": 1759327167, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e1fb0bd7-c6bf-40f2-8bfb-ffd039289f42", "db_session_id": "NJZTWL88H5HSB4Q4NEC9", "orig_file_number": 73, "seqno_to_time_mapping": "N/A"}}
Oct  1 09:59:27 np0005464214 ceph-mon[74802]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 39] Flush lasted 15694 microseconds, and 6600 cpu microseconds.
Oct  1 09:59:27 np0005464214 ceph-mon[74802]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  1 09:59:27 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:59:27.733835) [db/flush_job.cc:967] [default] [JOB 39] Level-0 flush table #73: 2379310 bytes OK
Oct  1 09:59:27 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:59:27.733862) [db/memtable_list.cc:519] [default] Level-0 commit table #73 started
Oct  1 09:59:27 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:59:27.736018) [db/memtable_list.cc:722] [default] Level-0 commit table #73: memtable #1 done
Oct  1 09:59:27 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:59:27.736030) EVENT_LOG_v1 {"time_micros": 1759327167736026, "job": 39, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  1 09:59:27 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:59:27.736052) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  1 09:59:27 np0005464214 ceph-mon[74802]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 39] Try to delete WAL files size 2406160, prev total WAL file size 2406160, number of live WAL files 2.
Oct  1 09:59:27 np0005464214 ceph-mon[74802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000069.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  1 09:59:27 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:59:27.736972) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0031303130' seq:72057594037927935, type:22 .. '6C6F676D0031323632' seq:0, type:0; will stop at (end)
Oct  1 09:59:27 np0005464214 ceph-mon[74802]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 40] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  1 09:59:27 np0005464214 ceph-mon[74802]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 39 Base level 0, inputs: [73(2323KB)], [71(9038KB)]
Oct  1 09:59:27 np0005464214 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759327167737032, "job": 40, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [73], "files_L6": [71], "score": -1, "input_data_size": 11635182, "oldest_snapshot_seqno": -1}
Oct  1 09:59:27 np0005464214 ceph-mon[74802]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 40] Generated table #74: 5690 keys, 11528813 bytes, temperature: kUnknown
Oct  1 09:59:27 np0005464214 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759327167817157, "cf_name": "default", "job": 40, "event": "table_file_creation", "file_number": 74, "file_size": 11528813, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11485924, "index_size": 27547, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 14277, "raw_key_size": 142668, "raw_average_key_size": 25, "raw_value_size": 11378344, "raw_average_value_size": 1999, "num_data_blocks": 1136, "num_entries": 5690, "num_filter_entries": 5690, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759324077, "oldest_key_time": 0, "file_creation_time": 1759327167, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e1fb0bd7-c6bf-40f2-8bfb-ffd039289f42", "db_session_id": "NJZTWL88H5HSB4Q4NEC9", "orig_file_number": 74, "seqno_to_time_mapping": "N/A"}}
Oct  1 09:59:27 np0005464214 ceph-mon[74802]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  1 09:59:27 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:59:27.817683) [db/compaction/compaction_job.cc:1663] [default] [JOB 40] Compacted 1@0 + 1@6 files to L6 => 11528813 bytes
Oct  1 09:59:27 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:59:27.819296) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 145.1 rd, 143.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.3, 8.8 +0.0 blob) out(11.0 +0.0 blob), read-write-amplify(9.7) write-amplify(4.8) OK, records in: 6214, records dropped: 524 output_compression: NoCompression
Oct  1 09:59:27 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:59:27.819324) EVENT_LOG_v1 {"time_micros": 1759327167819308, "job": 40, "event": "compaction_finished", "compaction_time_micros": 80205, "compaction_time_cpu_micros": 44264, "output_level": 6, "num_output_files": 1, "total_output_size": 11528813, "num_input_records": 6214, "num_output_records": 5690, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  1 09:59:27 np0005464214 ceph-mon[74802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000073.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  1 09:59:27 np0005464214 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759327167820438, "job": 40, "event": "table_file_deletion", "file_number": 73}
Oct  1 09:59:27 np0005464214 ceph-mon[74802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000071.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  1 09:59:27 np0005464214 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759327167823779, "job": 40, "event": "table_file_deletion", "file_number": 71}
Oct  1 09:59:27 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:59:27.736834) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 09:59:27 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:59:27.823930) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 09:59:27 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:59:27.823940) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 09:59:27 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:59:27.823944) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 09:59:27 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:59:27.823948) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 09:59:27 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:59:27.823951) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 09:59:29 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1697: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:59:31 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1698: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:59:32 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 09:59:32 np0005464214 ceph-mon[74802]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #75. Immutable memtables: 0.
Oct  1 09:59:32 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:59:32.756486) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  1 09:59:32 np0005464214 ceph-mon[74802]: rocksdb: [db/flush_job.cc:856] [default] [JOB 41] Flushing memtable with next log file: 75
Oct  1 09:59:32 np0005464214 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759327172756530, "job": 41, "event": "flush_started", "num_memtables": 1, "num_entries": 290, "num_deletes": 250, "total_data_size": 70904, "memory_usage": 76200, "flush_reason": "Manual Compaction"}
Oct  1 09:59:32 np0005464214 ceph-mon[74802]: rocksdb: [db/flush_job.cc:885] [default] [JOB 41] Level-0 flush table #76: started
Oct  1 09:59:32 np0005464214 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759327172805611, "cf_name": "default", "job": 41, "event": "table_file_creation", "file_number": 76, "file_size": 69916, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 34066, "largest_seqno": 34355, "table_properties": {"data_size": 67987, "index_size": 157, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 709, "raw_key_size": 5471, "raw_average_key_size": 20, "raw_value_size": 64204, "raw_average_value_size": 236, "num_data_blocks": 7, "num_entries": 271, "num_filter_entries": 271, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759327168, "oldest_key_time": 1759327168, "file_creation_time": 1759327172, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e1fb0bd7-c6bf-40f2-8bfb-ffd039289f42", "db_session_id": "NJZTWL88H5HSB4Q4NEC9", "orig_file_number": 76, "seqno_to_time_mapping": "N/A"}}
Oct  1 09:59:32 np0005464214 ceph-mon[74802]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 41] Flush lasted 49173 microseconds, and 1254 cpu microseconds.
Oct  1 09:59:32 np0005464214 ceph-mon[74802]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  1 09:59:32 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:59:32.805659) [db/flush_job.cc:967] [default] [JOB 41] Level-0 flush table #76: 69916 bytes OK
Oct  1 09:59:32 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:59:32.805691) [db/memtable_list.cc:519] [default] Level-0 commit table #76 started
Oct  1 09:59:32 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:59:32.825793) [db/memtable_list.cc:722] [default] Level-0 commit table #76: memtable #1 done
Oct  1 09:59:32 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:59:32.825835) EVENT_LOG_v1 {"time_micros": 1759327172825825, "job": 41, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  1 09:59:32 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:59:32.825859) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  1 09:59:32 np0005464214 ceph-mon[74802]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 41] Try to delete WAL files size 68768, prev total WAL file size 68768, number of live WAL files 2.
Oct  1 09:59:32 np0005464214 ceph-mon[74802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000072.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  1 09:59:32 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:59:32.826373) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740031323532' seq:72057594037927935, type:22 .. '6D6772737461740031353033' seq:0, type:0; will stop at (end)
Oct  1 09:59:32 np0005464214 ceph-mon[74802]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 42] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  1 09:59:32 np0005464214 ceph-mon[74802]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 41 Base level 0, inputs: [76(68KB)], [74(10MB)]
Oct  1 09:59:32 np0005464214 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759327172826414, "job": 42, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [76], "files_L6": [74], "score": -1, "input_data_size": 11598729, "oldest_snapshot_seqno": -1}
Oct  1 09:59:32 np0005464214 ceph-mon[74802]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 42] Generated table #77: 5454 keys, 8306792 bytes, temperature: kUnknown
Oct  1 09:59:32 np0005464214 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759327172911544, "cf_name": "default", "job": 42, "event": "table_file_creation", "file_number": 77, "file_size": 8306792, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8270332, "index_size": 21694, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13701, "raw_key_size": 137954, "raw_average_key_size": 25, "raw_value_size": 8171667, "raw_average_value_size": 1498, "num_data_blocks": 891, "num_entries": 5454, "num_filter_entries": 5454, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759324077, "oldest_key_time": 0, "file_creation_time": 1759327172, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e1fb0bd7-c6bf-40f2-8bfb-ffd039289f42", "db_session_id": "NJZTWL88H5HSB4Q4NEC9", "orig_file_number": 77, "seqno_to_time_mapping": "N/A"}}
Oct  1 09:59:32 np0005464214 ceph-mon[74802]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  1 09:59:32 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:59:32.911965) [db/compaction/compaction_job.cc:1663] [default] [JOB 42] Compacted 1@0 + 1@6 files to L6 => 8306792 bytes
Oct  1 09:59:32 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:59:32.916966) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 136.1 rd, 97.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.1, 11.0 +0.0 blob) out(7.9 +0.0 blob), read-write-amplify(284.7) write-amplify(118.8) OK, records in: 5961, records dropped: 507 output_compression: NoCompression
Oct  1 09:59:32 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:59:32.917009) EVENT_LOG_v1 {"time_micros": 1759327172916991, "job": 42, "event": "compaction_finished", "compaction_time_micros": 85227, "compaction_time_cpu_micros": 38238, "output_level": 6, "num_output_files": 1, "total_output_size": 8306792, "num_input_records": 5961, "num_output_records": 5454, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  1 09:59:32 np0005464214 ceph-mon[74802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000076.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  1 09:59:32 np0005464214 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759327172917219, "job": 42, "event": "table_file_deletion", "file_number": 76}
Oct  1 09:59:32 np0005464214 ceph-mon[74802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000074.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  1 09:59:32 np0005464214 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759327172921808, "job": 42, "event": "table_file_deletion", "file_number": 74}
Oct  1 09:59:32 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:59:32.826281) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 09:59:32 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:59:32.921928) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 09:59:32 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:59:32.921937) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 09:59:32 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:59:32.921941) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 09:59:32 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:59:32.921946) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 09:59:32 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:59:32.921950) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 09:59:33 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1699: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:59:35 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1700: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:59:37 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  1 09:59:37 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  1 09:59:37 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  1 09:59:37 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  1 09:59:37 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  1 09:59:37 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:59:37 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1701: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:59:37 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev 83f0502a-cb51-4945-9d51-9a0ed244a7f4 does not exist
Oct  1 09:59:37 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev 981aa2d8-18dc-4178-98be-aa1a22ea9314 does not exist
Oct  1 09:59:37 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev 1240048c-ee5e-45f6-912e-801e57676533 does not exist
Oct  1 09:59:37 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  1 09:59:37 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  1 09:59:37 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  1 09:59:37 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  1 09:59:37 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  1 09:59:37 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  1 09:59:37 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  1 09:59:37 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:59:37 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  1 09:59:37 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 09:59:38 np0005464214 podman[292582]: 2025-10-01 13:59:37.97692739 +0000 UTC m=+0.026647257 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 09:59:38 np0005464214 podman[292582]: 2025-10-01 13:59:38.161902374 +0000 UTC m=+0.211622231 container create 0a60a4d52289f04e5ab2b0d7c1d1106085cff29a14fe32bd0c8d11087a3c3a3e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_volhard, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 09:59:38 np0005464214 systemd[1]: Started libpod-conmon-0a60a4d52289f04e5ab2b0d7c1d1106085cff29a14fe32bd0c8d11087a3c3a3e.scope.
Oct  1 09:59:38 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:59:38 np0005464214 podman[292582]: 2025-10-01 13:59:38.551528564 +0000 UTC m=+0.601248491 container init 0a60a4d52289f04e5ab2b0d7c1d1106085cff29a14fe32bd0c8d11087a3c3a3e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_volhard, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 09:59:38 np0005464214 podman[292582]: 2025-10-01 13:59:38.565198678 +0000 UTC m=+0.614918515 container start 0a60a4d52289f04e5ab2b0d7c1d1106085cff29a14fe32bd0c8d11087a3c3a3e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_volhard, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct  1 09:59:38 np0005464214 wizardly_volhard[292599]: 167 167
Oct  1 09:59:38 np0005464214 systemd[1]: libpod-0a60a4d52289f04e5ab2b0d7c1d1106085cff29a14fe32bd0c8d11087a3c3a3e.scope: Deactivated successfully.
Oct  1 09:59:38 np0005464214 podman[292582]: 2025-10-01 13:59:38.776450736 +0000 UTC m=+0.826170653 container attach 0a60a4d52289f04e5ab2b0d7c1d1106085cff29a14fe32bd0c8d11087a3c3a3e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_volhard, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  1 09:59:38 np0005464214 podman[292582]: 2025-10-01 13:59:38.778412438 +0000 UTC m=+0.828132295 container died 0a60a4d52289f04e5ab2b0d7c1d1106085cff29a14fe32bd0c8d11087a3c3a3e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_volhard, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 09:59:39 np0005464214 systemd[1]: var-lib-containers-storage-overlay-92fe14f8309538e9fffeb5ae5fba3703796c17dea40c92bf675d32665938767d-merged.mount: Deactivated successfully.
Oct  1 09:59:39 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1702: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:59:39 np0005464214 podman[292582]: 2025-10-01 13:59:39.546497097 +0000 UTC m=+1.596216964 container remove 0a60a4d52289f04e5ab2b0d7c1d1106085cff29a14fe32bd0c8d11087a3c3a3e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_volhard, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct  1 09:59:39 np0005464214 systemd[1]: libpod-conmon-0a60a4d52289f04e5ab2b0d7c1d1106085cff29a14fe32bd0c8d11087a3c3a3e.scope: Deactivated successfully.
Oct  1 09:59:39 np0005464214 podman[292623]: 2025-10-01 13:59:39.791614659 +0000 UTC m=+0.044736611 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 09:59:40 np0005464214 podman[292623]: 2025-10-01 13:59:40.024024238 +0000 UTC m=+0.277146151 container create 5083219da4973f6ec9039a290cd04ebe4a954298b69690ee9c5d772628afd8c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_yonath, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True)
Oct  1 09:59:40 np0005464214 systemd[1]: Started libpod-conmon-5083219da4973f6ec9039a290cd04ebe4a954298b69690ee9c5d772628afd8c8.scope.
Oct  1 09:59:40 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:59:40 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8580e49c3c1a13616f88defeab0b117110ba71f50925c1cb2e8723269da3976/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 09:59:40 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8580e49c3c1a13616f88defeab0b117110ba71f50925c1cb2e8723269da3976/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 09:59:40 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8580e49c3c1a13616f88defeab0b117110ba71f50925c1cb2e8723269da3976/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 09:59:40 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8580e49c3c1a13616f88defeab0b117110ba71f50925c1cb2e8723269da3976/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 09:59:40 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8580e49c3c1a13616f88defeab0b117110ba71f50925c1cb2e8723269da3976/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  1 09:59:40 np0005464214 podman[292623]: 2025-10-01 13:59:40.536430038 +0000 UTC m=+0.789551990 container init 5083219da4973f6ec9039a290cd04ebe4a954298b69690ee9c5d772628afd8c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_yonath, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct  1 09:59:40 np0005464214 podman[292623]: 2025-10-01 13:59:40.548523703 +0000 UTC m=+0.801645575 container start 5083219da4973f6ec9039a290cd04ebe4a954298b69690ee9c5d772628afd8c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_yonath, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct  1 09:59:40 np0005464214 podman[292623]: 2025-10-01 13:59:40.741469559 +0000 UTC m=+0.994591511 container attach 5083219da4973f6ec9039a290cd04ebe4a954298b69690ee9c5d772628afd8c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_yonath, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct  1 09:59:41 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1703: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:59:41 np0005464214 brave_yonath[292640]: --> passed data devices: 0 physical, 3 LVM
Oct  1 09:59:41 np0005464214 brave_yonath[292640]: --> relative data size: 1.0
Oct  1 09:59:41 np0005464214 brave_yonath[292640]: --> All data devices are unavailable
Oct  1 09:59:41 np0005464214 systemd[1]: libpod-5083219da4973f6ec9039a290cd04ebe4a954298b69690ee9c5d772628afd8c8.scope: Deactivated successfully.
Oct  1 09:59:41 np0005464214 systemd[1]: libpod-5083219da4973f6ec9039a290cd04ebe4a954298b69690ee9c5d772628afd8c8.scope: Consumed 1.219s CPU time.
Oct  1 09:59:41 np0005464214 podman[292623]: 2025-10-01 13:59:41.828828585 +0000 UTC m=+2.081950467 container died 5083219da4973f6ec9039a290cd04ebe4a954298b69690ee9c5d772628afd8c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_yonath, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Oct  1 09:59:42 np0005464214 systemd[1]: var-lib-containers-storage-overlay-c8580e49c3c1a13616f88defeab0b117110ba71f50925c1cb2e8723269da3976-merged.mount: Deactivated successfully.
Oct  1 09:59:42 np0005464214 podman[292623]: 2025-10-01 13:59:42.173485089 +0000 UTC m=+2.426606951 container remove 5083219da4973f6ec9039a290cd04ebe4a954298b69690ee9c5d772628afd8c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_yonath, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 09:59:42 np0005464214 systemd[1]: libpod-conmon-5083219da4973f6ec9039a290cd04ebe4a954298b69690ee9c5d772628afd8c8.scope: Deactivated successfully.
Oct  1 09:59:42 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 09:59:42 np0005464214 podman[292824]: 2025-10-01 13:59:42.856289849 +0000 UTC m=+0.040895890 container create c64b1caf1075c2b15fe501f758167f216c208a5c4b9e0dc566313091f8fec1c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_keldysh, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 09:59:42 np0005464214 systemd[1]: Started libpod-conmon-c64b1caf1075c2b15fe501f758167f216c208a5c4b9e0dc566313091f8fec1c9.scope.
Oct  1 09:59:42 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:59:42 np0005464214 podman[292824]: 2025-10-01 13:59:42.839862488 +0000 UTC m=+0.024468549 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 09:59:42 np0005464214 podman[292824]: 2025-10-01 13:59:42.942684072 +0000 UTC m=+0.127290133 container init c64b1caf1075c2b15fe501f758167f216c208a5c4b9e0dc566313091f8fec1c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_keldysh, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 09:59:42 np0005464214 podman[292824]: 2025-10-01 13:59:42.951024117 +0000 UTC m=+0.135630168 container start c64b1caf1075c2b15fe501f758167f216c208a5c4b9e0dc566313091f8fec1c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_keldysh, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 09:59:42 np0005464214 podman[292824]: 2025-10-01 13:59:42.954431445 +0000 UTC m=+0.139037506 container attach c64b1caf1075c2b15fe501f758167f216c208a5c4b9e0dc566313091f8fec1c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_keldysh, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 09:59:42 np0005464214 recursing_keldysh[292840]: 167 167
Oct  1 09:59:42 np0005464214 systemd[1]: libpod-c64b1caf1075c2b15fe501f758167f216c208a5c4b9e0dc566313091f8fec1c9.scope: Deactivated successfully.
Oct  1 09:59:42 np0005464214 podman[292824]: 2025-10-01 13:59:42.957219064 +0000 UTC m=+0.141825105 container died c64b1caf1075c2b15fe501f758167f216c208a5c4b9e0dc566313091f8fec1c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_keldysh, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 09:59:42 np0005464214 systemd[1]: var-lib-containers-storage-overlay-160e4a2195dca62c8084b12d337270584a799ca96eefc1b11a3b01f222305ce9-merged.mount: Deactivated successfully.
Oct  1 09:59:42 np0005464214 podman[292824]: 2025-10-01 13:59:42.992797193 +0000 UTC m=+0.177403254 container remove c64b1caf1075c2b15fe501f758167f216c208a5c4b9e0dc566313091f8fec1c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_keldysh, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 09:59:43 np0005464214 systemd[1]: libpod-conmon-c64b1caf1075c2b15fe501f758167f216c208a5c4b9e0dc566313091f8fec1c9.scope: Deactivated successfully.
Oct  1 09:59:43 np0005464214 podman[292863]: 2025-10-01 13:59:43.156873193 +0000 UTC m=+0.042969485 container create 0864d16d539eb211e9f4f74ab004899ebab92746c5a80955d03ef247730f126e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_sammet, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True)
Oct  1 09:59:43 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1704: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:59:43 np0005464214 systemd[1]: Started libpod-conmon-0864d16d539eb211e9f4f74ab004899ebab92746c5a80955d03ef247730f126e.scope.
Oct  1 09:59:43 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:59:43 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e9e5fb28e6fa3cc1598a14ab678440b060a3fc089c3dda2d149a262df564ca41/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 09:59:43 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e9e5fb28e6fa3cc1598a14ab678440b060a3fc089c3dda2d149a262df564ca41/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 09:59:43 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e9e5fb28e6fa3cc1598a14ab678440b060a3fc089c3dda2d149a262df564ca41/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 09:59:43 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e9e5fb28e6fa3cc1598a14ab678440b060a3fc089c3dda2d149a262df564ca41/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 09:59:43 np0005464214 podman[292863]: 2025-10-01 13:59:43.137059623 +0000 UTC m=+0.023155935 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 09:59:43 np0005464214 podman[292863]: 2025-10-01 13:59:43.244932499 +0000 UTC m=+0.131028871 container init 0864d16d539eb211e9f4f74ab004899ebab92746c5a80955d03ef247730f126e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_sammet, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 09:59:43 np0005464214 podman[292863]: 2025-10-01 13:59:43.255871816 +0000 UTC m=+0.141968108 container start 0864d16d539eb211e9f4f74ab004899ebab92746c5a80955d03ef247730f126e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_sammet, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct  1 09:59:43 np0005464214 podman[292863]: 2025-10-01 13:59:43.25912834 +0000 UTC m=+0.145224632 container attach 0864d16d539eb211e9f4f74ab004899ebab92746c5a80955d03ef247730f126e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_sammet, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct  1 09:59:43 np0005464214 focused_sammet[292879]: {
Oct  1 09:59:43 np0005464214 focused_sammet[292879]:    "0": [
Oct  1 09:59:43 np0005464214 focused_sammet[292879]:        {
Oct  1 09:59:43 np0005464214 focused_sammet[292879]:            "devices": [
Oct  1 09:59:43 np0005464214 focused_sammet[292879]:                "/dev/loop3"
Oct  1 09:59:43 np0005464214 focused_sammet[292879]:            ],
Oct  1 09:59:43 np0005464214 focused_sammet[292879]:            "lv_name": "ceph_lv0",
Oct  1 09:59:43 np0005464214 focused_sammet[292879]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  1 09:59:43 np0005464214 focused_sammet[292879]:            "lv_size": "21470642176",
Oct  1 09:59:43 np0005464214 focused_sammet[292879]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 09:59:43 np0005464214 focused_sammet[292879]:            "lv_uuid": "wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS",
Oct  1 09:59:43 np0005464214 focused_sammet[292879]:            "name": "ceph_lv0",
Oct  1 09:59:43 np0005464214 focused_sammet[292879]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  1 09:59:43 np0005464214 focused_sammet[292879]:            "tags": {
Oct  1 09:59:43 np0005464214 focused_sammet[292879]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  1 09:59:43 np0005464214 focused_sammet[292879]:                "ceph.block_uuid": "wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS",
Oct  1 09:59:43 np0005464214 focused_sammet[292879]:                "ceph.cephx_lockbox_secret": "",
Oct  1 09:59:43 np0005464214 focused_sammet[292879]:                "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 09:59:43 np0005464214 focused_sammet[292879]:                "ceph.cluster_name": "ceph",
Oct  1 09:59:43 np0005464214 focused_sammet[292879]:                "ceph.crush_device_class": "",
Oct  1 09:59:43 np0005464214 focused_sammet[292879]:                "ceph.encrypted": "0",
Oct  1 09:59:43 np0005464214 focused_sammet[292879]:                "ceph.osd_fsid": "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982",
Oct  1 09:59:43 np0005464214 focused_sammet[292879]:                "ceph.osd_id": "0",
Oct  1 09:59:43 np0005464214 focused_sammet[292879]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 09:59:43 np0005464214 focused_sammet[292879]:                "ceph.type": "block",
Oct  1 09:59:43 np0005464214 focused_sammet[292879]:                "ceph.vdo": "0"
Oct  1 09:59:43 np0005464214 focused_sammet[292879]:            },
Oct  1 09:59:43 np0005464214 focused_sammet[292879]:            "type": "block",
Oct  1 09:59:43 np0005464214 focused_sammet[292879]:            "vg_name": "ceph_vg0"
Oct  1 09:59:43 np0005464214 focused_sammet[292879]:        }
Oct  1 09:59:43 np0005464214 focused_sammet[292879]:    ],
Oct  1 09:59:43 np0005464214 focused_sammet[292879]:    "1": [
Oct  1 09:59:43 np0005464214 focused_sammet[292879]:        {
Oct  1 09:59:43 np0005464214 focused_sammet[292879]:            "devices": [
Oct  1 09:59:43 np0005464214 focused_sammet[292879]:                "/dev/loop4"
Oct  1 09:59:43 np0005464214 focused_sammet[292879]:            ],
Oct  1 09:59:43 np0005464214 focused_sammet[292879]:            "lv_name": "ceph_lv1",
Oct  1 09:59:43 np0005464214 focused_sammet[292879]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  1 09:59:43 np0005464214 focused_sammet[292879]:            "lv_size": "21470642176",
Oct  1 09:59:43 np0005464214 focused_sammet[292879]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f5852bc7-e830-489a-b8a9-42dfbbe71426,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 09:59:43 np0005464214 focused_sammet[292879]:            "lv_uuid": "BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY",
Oct  1 09:59:43 np0005464214 focused_sammet[292879]:            "name": "ceph_lv1",
Oct  1 09:59:43 np0005464214 focused_sammet[292879]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  1 09:59:43 np0005464214 focused_sammet[292879]:            "tags": {
Oct  1 09:59:43 np0005464214 focused_sammet[292879]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  1 09:59:43 np0005464214 focused_sammet[292879]:                "ceph.block_uuid": "BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY",
Oct  1 09:59:43 np0005464214 focused_sammet[292879]:                "ceph.cephx_lockbox_secret": "",
Oct  1 09:59:43 np0005464214 focused_sammet[292879]:                "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 09:59:43 np0005464214 focused_sammet[292879]:                "ceph.cluster_name": "ceph",
Oct  1 09:59:43 np0005464214 focused_sammet[292879]:                "ceph.crush_device_class": "",
Oct  1 09:59:43 np0005464214 focused_sammet[292879]:                "ceph.encrypted": "0",
Oct  1 09:59:43 np0005464214 focused_sammet[292879]:                "ceph.osd_fsid": "f5852bc7-e830-489a-b8a9-42dfbbe71426",
Oct  1 09:59:43 np0005464214 focused_sammet[292879]:                "ceph.osd_id": "1",
Oct  1 09:59:43 np0005464214 focused_sammet[292879]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 09:59:43 np0005464214 focused_sammet[292879]:                "ceph.type": "block",
Oct  1 09:59:43 np0005464214 focused_sammet[292879]:                "ceph.vdo": "0"
Oct  1 09:59:43 np0005464214 focused_sammet[292879]:            },
Oct  1 09:59:43 np0005464214 focused_sammet[292879]:            "type": "block",
Oct  1 09:59:43 np0005464214 focused_sammet[292879]:            "vg_name": "ceph_vg1"
Oct  1 09:59:43 np0005464214 focused_sammet[292879]:        }
Oct  1 09:59:43 np0005464214 focused_sammet[292879]:    ],
Oct  1 09:59:43 np0005464214 focused_sammet[292879]:    "2": [
Oct  1 09:59:43 np0005464214 focused_sammet[292879]:        {
Oct  1 09:59:43 np0005464214 focused_sammet[292879]:            "devices": [
Oct  1 09:59:43 np0005464214 focused_sammet[292879]:                "/dev/loop5"
Oct  1 09:59:43 np0005464214 focused_sammet[292879]:            ],
Oct  1 09:59:43 np0005464214 focused_sammet[292879]:            "lv_name": "ceph_lv2",
Oct  1 09:59:43 np0005464214 focused_sammet[292879]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  1 09:59:43 np0005464214 focused_sammet[292879]:            "lv_size": "21470642176",
Oct  1 09:59:43 np0005464214 focused_sammet[292879]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c4c937e2-a8a8-47c3-af37-fdedb6fff1f9,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 09:59:43 np0005464214 focused_sammet[292879]:            "lv_uuid": "mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK",
Oct  1 09:59:43 np0005464214 focused_sammet[292879]:            "name": "ceph_lv2",
Oct  1 09:59:43 np0005464214 focused_sammet[292879]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  1 09:59:43 np0005464214 focused_sammet[292879]:            "tags": {
Oct  1 09:59:43 np0005464214 focused_sammet[292879]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  1 09:59:43 np0005464214 focused_sammet[292879]:                "ceph.block_uuid": "mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK",
Oct  1 09:59:43 np0005464214 focused_sammet[292879]:                "ceph.cephx_lockbox_secret": "",
Oct  1 09:59:43 np0005464214 focused_sammet[292879]:                "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 09:59:43 np0005464214 focused_sammet[292879]:                "ceph.cluster_name": "ceph",
Oct  1 09:59:43 np0005464214 focused_sammet[292879]:                "ceph.crush_device_class": "",
Oct  1 09:59:43 np0005464214 focused_sammet[292879]:                "ceph.encrypted": "0",
Oct  1 09:59:43 np0005464214 focused_sammet[292879]:                "ceph.osd_fsid": "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9",
Oct  1 09:59:43 np0005464214 focused_sammet[292879]:                "ceph.osd_id": "2",
Oct  1 09:59:43 np0005464214 focused_sammet[292879]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 09:59:43 np0005464214 focused_sammet[292879]:                "ceph.type": "block",
Oct  1 09:59:43 np0005464214 focused_sammet[292879]:                "ceph.vdo": "0"
Oct  1 09:59:43 np0005464214 focused_sammet[292879]:            },
Oct  1 09:59:43 np0005464214 focused_sammet[292879]:            "type": "block",
Oct  1 09:59:43 np0005464214 focused_sammet[292879]:            "vg_name": "ceph_vg2"
Oct  1 09:59:43 np0005464214 focused_sammet[292879]:        }
Oct  1 09:59:43 np0005464214 focused_sammet[292879]:    ]
Oct  1 09:59:43 np0005464214 focused_sammet[292879]: }
Oct  1 09:59:43 np0005464214 systemd[1]: libpod-0864d16d539eb211e9f4f74ab004899ebab92746c5a80955d03ef247730f126e.scope: Deactivated successfully.
Oct  1 09:59:43 np0005464214 podman[292863]: 2025-10-01 13:59:43.99736558 +0000 UTC m=+0.883461892 container died 0864d16d539eb211e9f4f74ab004899ebab92746c5a80955d03ef247730f126e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_sammet, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Oct  1 09:59:44 np0005464214 systemd[1]: var-lib-containers-storage-overlay-e9e5fb28e6fa3cc1598a14ab678440b060a3fc089c3dda2d149a262df564ca41-merged.mount: Deactivated successfully.
Oct  1 09:59:44 np0005464214 podman[292863]: 2025-10-01 13:59:44.065899997 +0000 UTC m=+0.951996299 container remove 0864d16d539eb211e9f4f74ab004899ebab92746c5a80955d03ef247730f126e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_sammet, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Oct  1 09:59:44 np0005464214 systemd[1]: libpod-conmon-0864d16d539eb211e9f4f74ab004899ebab92746c5a80955d03ef247730f126e.scope: Deactivated successfully.
Oct  1 09:59:44 np0005464214 podman[293040]: 2025-10-01 13:59:44.718685583 +0000 UTC m=+0.059644084 container create 81f447db278275ccf5a54ea8774d05d4bfe55081e3321bd711bec89594735cfe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_ardinghelli, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Oct  1 09:59:44 np0005464214 systemd[1]: Started libpod-conmon-81f447db278275ccf5a54ea8774d05d4bfe55081e3321bd711bec89594735cfe.scope.
Oct  1 09:59:44 np0005464214 podman[293040]: 2025-10-01 13:59:44.688958189 +0000 UTC m=+0.029916740 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 09:59:44 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:59:44 np0005464214 podman[293040]: 2025-10-01 13:59:44.820588479 +0000 UTC m=+0.161546990 container init 81f447db278275ccf5a54ea8774d05d4bfe55081e3321bd711bec89594735cfe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_ardinghelli, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Oct  1 09:59:44 np0005464214 podman[293040]: 2025-10-01 13:59:44.832617161 +0000 UTC m=+0.173575622 container start 81f447db278275ccf5a54ea8774d05d4bfe55081e3321bd711bec89594735cfe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_ardinghelli, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 09:59:44 np0005464214 podman[293040]: 2025-10-01 13:59:44.836611417 +0000 UTC m=+0.177569928 container attach 81f447db278275ccf5a54ea8774d05d4bfe55081e3321bd711bec89594735cfe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_ardinghelli, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Oct  1 09:59:44 np0005464214 stupefied_ardinghelli[293057]: 167 167
Oct  1 09:59:44 np0005464214 systemd[1]: libpod-81f447db278275ccf5a54ea8774d05d4bfe55081e3321bd711bec89594735cfe.scope: Deactivated successfully.
Oct  1 09:59:44 np0005464214 podman[293040]: 2025-10-01 13:59:44.839994295 +0000 UTC m=+0.180952756 container died 81f447db278275ccf5a54ea8774d05d4bfe55081e3321bd711bec89594735cfe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_ardinghelli, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 09:59:44 np0005464214 systemd[1]: var-lib-containers-storage-overlay-700a99753e27ab8359371e7f3744e2068468381d724c8a1344479258bd9bb9bd-merged.mount: Deactivated successfully.
Oct  1 09:59:44 np0005464214 podman[293040]: 2025-10-01 13:59:44.884296002 +0000 UTC m=+0.225254463 container remove 81f447db278275ccf5a54ea8774d05d4bfe55081e3321bd711bec89594735cfe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_ardinghelli, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True)
Oct  1 09:59:44 np0005464214 systemd[1]: libpod-conmon-81f447db278275ccf5a54ea8774d05d4bfe55081e3321bd711bec89594735cfe.scope: Deactivated successfully.
Oct  1 09:59:45 np0005464214 podman[293081]: 2025-10-01 13:59:45.124887271 +0000 UTC m=+0.070369616 container create 043846ed5e22a629aef42ecb6fa1d49fa5310b9652eeedb2702421afe48ec4fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_hamilton, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Oct  1 09:59:45 np0005464214 systemd[1]: Started libpod-conmon-043846ed5e22a629aef42ecb6fa1d49fa5310b9652eeedb2702421afe48ec4fd.scope.
Oct  1 09:59:45 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1705: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:59:45 np0005464214 podman[293081]: 2025-10-01 13:59:45.096987435 +0000 UTC m=+0.042469840 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 09:59:45 np0005464214 systemd[1]: Started libcrun container.
Oct  1 09:59:45 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a438427056b4436d6077b2513b2af741d8c9fc6c9c9371f9c7e7b29f4cf34783/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 09:59:45 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a438427056b4436d6077b2513b2af741d8c9fc6c9c9371f9c7e7b29f4cf34783/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 09:59:45 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a438427056b4436d6077b2513b2af741d8c9fc6c9c9371f9c7e7b29f4cf34783/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 09:59:45 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a438427056b4436d6077b2513b2af741d8c9fc6c9c9371f9c7e7b29f4cf34783/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 09:59:45 np0005464214 podman[293081]: 2025-10-01 13:59:45.234546113 +0000 UTC m=+0.180028518 container init 043846ed5e22a629aef42ecb6fa1d49fa5310b9652eeedb2702421afe48ec4fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_hamilton, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Oct  1 09:59:45 np0005464214 podman[293081]: 2025-10-01 13:59:45.251515222 +0000 UTC m=+0.196997587 container start 043846ed5e22a629aef42ecb6fa1d49fa5310b9652eeedb2702421afe48ec4fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_hamilton, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0)
Oct  1 09:59:45 np0005464214 podman[293081]: 2025-10-01 13:59:45.264604137 +0000 UTC m=+0.210086502 container attach 043846ed5e22a629aef42ecb6fa1d49fa5310b9652eeedb2702421afe48ec4fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_hamilton, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct  1 09:59:45 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:59:45.317 161890 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=21, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'd2:60:33', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '86:05:a5:2a:6f:f1'}, ipsec=False) old=SB_Global(nb_cfg=20) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  1 09:59:45 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:59:45.322 161890 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  1 09:59:45 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:59:45.323 161890 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=7280030e-2ba6-406c-9fae-f8284a927c47, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '21'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  1 09:59:46 np0005464214 jovial_hamilton[293097]: {
Oct  1 09:59:46 np0005464214 jovial_hamilton[293097]:    "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982": {
Oct  1 09:59:46 np0005464214 jovial_hamilton[293097]:        "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 09:59:46 np0005464214 jovial_hamilton[293097]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  1 09:59:46 np0005464214 jovial_hamilton[293097]:        "osd_id": 0,
Oct  1 09:59:46 np0005464214 jovial_hamilton[293097]:        "osd_uuid": "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982",
Oct  1 09:59:46 np0005464214 jovial_hamilton[293097]:        "type": "bluestore"
Oct  1 09:59:46 np0005464214 jovial_hamilton[293097]:    },
Oct  1 09:59:46 np0005464214 jovial_hamilton[293097]:    "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9": {
Oct  1 09:59:46 np0005464214 jovial_hamilton[293097]:        "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 09:59:46 np0005464214 jovial_hamilton[293097]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  1 09:59:46 np0005464214 jovial_hamilton[293097]:        "osd_id": 2,
Oct  1 09:59:46 np0005464214 jovial_hamilton[293097]:        "osd_uuid": "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9",
Oct  1 09:59:46 np0005464214 jovial_hamilton[293097]:        "type": "bluestore"
Oct  1 09:59:46 np0005464214 jovial_hamilton[293097]:    },
Oct  1 09:59:46 np0005464214 jovial_hamilton[293097]:    "f5852bc7-e830-489a-b8a9-42dfbbe71426": {
Oct  1 09:59:46 np0005464214 jovial_hamilton[293097]:        "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 09:59:46 np0005464214 jovial_hamilton[293097]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  1 09:59:46 np0005464214 jovial_hamilton[293097]:        "osd_id": 1,
Oct  1 09:59:46 np0005464214 jovial_hamilton[293097]:        "osd_uuid": "f5852bc7-e830-489a-b8a9-42dfbbe71426",
Oct  1 09:59:46 np0005464214 jovial_hamilton[293097]:        "type": "bluestore"
Oct  1 09:59:46 np0005464214 jovial_hamilton[293097]:    }
Oct  1 09:59:46 np0005464214 jovial_hamilton[293097]: }
Oct  1 09:59:46 np0005464214 systemd[1]: libpod-043846ed5e22a629aef42ecb6fa1d49fa5310b9652eeedb2702421afe48ec4fd.scope: Deactivated successfully.
Oct  1 09:59:46 np0005464214 systemd[1]: libpod-043846ed5e22a629aef42ecb6fa1d49fa5310b9652eeedb2702421afe48ec4fd.scope: Consumed 1.167s CPU time.
Oct  1 09:59:46 np0005464214 podman[293081]: 2025-10-01 13:59:46.411007948 +0000 UTC m=+1.356490363 container died 043846ed5e22a629aef42ecb6fa1d49fa5310b9652eeedb2702421afe48ec4fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_hamilton, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 09:59:46 np0005464214 systemd[1]: var-lib-containers-storage-overlay-a438427056b4436d6077b2513b2af741d8c9fc6c9c9371f9c7e7b29f4cf34783-merged.mount: Deactivated successfully.
Oct  1 09:59:46 np0005464214 podman[293081]: 2025-10-01 13:59:46.526757453 +0000 UTC m=+1.472239788 container remove 043846ed5e22a629aef42ecb6fa1d49fa5310b9652eeedb2702421afe48ec4fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_hamilton, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Oct  1 09:59:46 np0005464214 systemd[1]: libpod-conmon-043846ed5e22a629aef42ecb6fa1d49fa5310b9652eeedb2702421afe48ec4fd.scope: Deactivated successfully.
Oct  1 09:59:46 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  1 09:59:46 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:59:46 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  1 09:59:46 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:59:46 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev 984596d4-e165-4175-8acf-8bad6e468dda does not exist
Oct  1 09:59:46 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev 693d68e3-9deb-4b5e-8861-f2615a8adb10 does not exist
Oct  1 09:59:47 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1706: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:59:47 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 09:59:47 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:59:47 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 09:59:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 09:59:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 09:59:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 09:59:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 09:59:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 09:59:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 09:59:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] Optimize plan auto_2025-10-01_13:59:47
Oct  1 09:59:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  1 09:59:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] do_upmap
Oct  1 09:59:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] pools ['images', '.rgw.root', 'default.rgw.control', 'volumes', 'cephfs.cephfs.data', 'vms', 'backups', 'cephfs.cephfs.meta', 'default.rgw.meta', 'default.rgw.log', '.mgr']
Oct  1 09:59:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] prepared 0/10 changes
Oct  1 09:59:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  1 09:59:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  1 09:59:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  1 09:59:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  1 09:59:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  1 09:59:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  1 09:59:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  1 09:59:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  1 09:59:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  1 09:59:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  1 09:59:49 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1707: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:59:50 np0005464214 podman[293197]: 2025-10-01 13:59:50.510597607 +0000 UTC m=+0.068003111 container health_status a1dc94033b3cdaf0c1939938a446bc6911aa0e2bf0fa0c22c3e618390e8d96e1 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20250923, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct  1 09:59:50 np0005464214 podman[293199]: 2025-10-01 13:59:50.528459334 +0000 UTC m=+0.072927167 container health_status dfa509645741d50db256c392f2c7e50ef8a1653f296eb853aefbe3d2ba4746c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20250923, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=36bccb96575468ec919301205d8daa2c, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct  1 09:59:50 np0005464214 podman[293196]: 2025-10-01 13:59:50.537498021 +0000 UTC m=+0.097228468 container health_status 583cde33fa111e3f81ff9a7adf51b7bee43cd123d54d60383b2d474b3b5066ad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20250923, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct  1 09:59:50 np0005464214 podman[293198]: 2025-10-01 13:59:50.538152932 +0000 UTC m=+0.095153183 container health_status c13c85560319b246b9406aed1b6b3015b2c424d3ffff37d0301b6634f5976b0d (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20250923, org.label-schema.schema-version=1.0, container_name=iscsid, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct  1 09:59:51 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1708: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:59:51 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:59:51.436 161890 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:fd:ba:2c 10.100.0.18 10.100.0.2'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.18/28 10.100.0.2/28', 'neutron:device_id': 'ovnmeta-d459f90f-6a0c-444c-a0eb-e01cde881120', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d459f90f-6a0c-444c-a0eb-e01cde881120', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6b660fd21a334b23965979eb62726b37', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7db4a1de-f9f9-4576-94fa-85c21b229e1a, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=4c794d15-edd5-4b11-8666-6aeef634f979) old=Port_Binding(mac=['fa:16:3e:fd:ba:2c 10.100.0.2'], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-d459f90f-6a0c-444c-a0eb-e01cde881120', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d459f90f-6a0c-444c-a0eb-e01cde881120', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6b660fd21a334b23965979eb62726b37', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  1 09:59:51 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:59:51.438 161890 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 4c794d15-edd5-4b11-8666-6aeef634f979 in datapath d459f90f-6a0c-444c-a0eb-e01cde881120 updated#033[00m
Oct  1 09:59:51 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:59:51.439 161890 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network d459f90f-6a0c-444c-a0eb-e01cde881120, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  1 09:59:51 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 13:59:51.441 291014 DEBUG oslo.privsep.daemon [-] privsep: reply[964f635c-abfb-40bd-a9d9-5b0fc4b6ef8c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 09:59:52 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 09:59:53 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1709: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:59:55 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1710: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:59:55 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  1 09:59:55 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3918538823' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  1 09:59:55 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  1 09:59:55 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3918538823' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  1 09:59:55 np0005464214 ceph-osd[88455]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  1 09:59:55 np0005464214 ceph-osd[88455]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 3000.0 total, 600.0 interval#012Cumulative writes: 7727 writes, 28K keys, 7727 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s#012Cumulative WAL: 7727 writes, 1851 syncs, 4.17 writes per sync, written: 0.02 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 973 writes, 2331 keys, 973 commit groups, 1.0 writes per commit group, ingest: 1.18 MB, 0.00 MB/s#012Interval WAL: 973 writes, 437 syncs, 2.23 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct  1 09:59:57 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1711: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 09:59:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] _maybe_adjust
Oct  1 09:59:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:59:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  1 09:59:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:59:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 09:59:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:59:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 09:59:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:59:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 09:59:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:59:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Oct  1 09:59:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:59:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct  1 09:59:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:59:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 09:59:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:59:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  1 09:59:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:59:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  1 09:59:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:59:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 09:59:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 09:59:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  1 09:59:57 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 09:59:59 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1712: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:00:00 np0005464214 ceph-osd[89484]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  1 10:00:00 np0005464214 ceph-osd[89484]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 3000.1 total, 600.0 interval#012Cumulative writes: 9156 writes, 34K keys, 9156 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s#012Cumulative WAL: 9156 writes, 2284 syncs, 4.01 writes per sync, written: 0.02 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1205 writes, 3436 keys, 1205 commit groups, 1.0 writes per commit group, ingest: 1.86 MB, 0.00 MB/s#012Interval WAL: 1205 writes, 535 syncs, 2.25 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct  1 10:00:01 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1713: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:00:02 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 14:00:02.688 161890 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:fd:ba:2c 10.100.0.18 10.100.0.2 10.100.0.34'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.18/28 10.100.0.2/28 10.100.0.34/28', 'neutron:device_id': 'ovnmeta-d459f90f-6a0c-444c-a0eb-e01cde881120', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d459f90f-6a0c-444c-a0eb-e01cde881120', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6b660fd21a334b23965979eb62726b37', 'neutron:revision_number': '6', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7db4a1de-f9f9-4576-94fa-85c21b229e1a, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=4c794d15-edd5-4b11-8666-6aeef634f979) old=Port_Binding(mac=['fa:16:3e:fd:ba:2c 10.100.0.18 10.100.0.2'], external_ids={'neutron:cidrs': '10.100.0.18/28 10.100.0.2/28', 'neutron:device_id': 'ovnmeta-d459f90f-6a0c-444c-a0eb-e01cde881120', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d459f90f-6a0c-444c-a0eb-e01cde881120', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6b660fd21a334b23965979eb62726b37', 'neutron:revision_number': '5', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  1 10:00:02 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 14:00:02.690 161890 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 4c794d15-edd5-4b11-8666-6aeef634f979 in datapath d459f90f-6a0c-444c-a0eb-e01cde881120 updated#033[00m
Oct  1 10:00:02 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 14:00:02.692 161890 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network d459f90f-6a0c-444c-a0eb-e01cde881120, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  1 10:00:02 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 14:00:02.693 291014 DEBUG oslo.privsep.daemon [-] privsep: reply[c1aee2cf-8110-41ab-ba8a-f71a33d666bc]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 10:00:02 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 10:00:03 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1714: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:00:05 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1715: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:00:05 np0005464214 ceph-osd[90500]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  1 10:00:05 np0005464214 ceph-osd[90500]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 3000.1 total, 600.0 interval#012Cumulative writes: 8168 writes, 30K keys, 8168 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s#012Cumulative WAL: 8168 writes, 2028 syncs, 4.03 writes per sync, written: 0.02 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1293 writes, 3208 keys, 1293 commit groups, 1.0 writes per commit group, ingest: 1.64 MB, 0.00 MB/s#012Interval WAL: 1293 writes, 587 syncs, 2.20 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct  1 10:00:07 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1716: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:00:07 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 10:00:07 np0005464214 ceph-mgr[75103]: [devicehealth INFO root] Check health
Oct  1 10:00:09 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1717: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:00:11 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1718: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:00:11 np0005464214 nova_compute[260022]: 2025-10-01 14:00:11.345 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 10:00:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 14:00:12.326 161890 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 10:00:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 14:00:12.327 161890 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 10:00:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 14:00:12.327 161890 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 10:00:12 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 10:00:13 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1719: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:00:15 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1720: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:00:17 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1721: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:00:17 np0005464214 nova_compute[260022]: 2025-10-01 14:00:17.345 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 10:00:17 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 10:00:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 10:00:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 10:00:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 10:00:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 10:00:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 10:00:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 10:00:18 np0005464214 nova_compute[260022]: 2025-10-01 14:00:18.375 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 10:00:18 np0005464214 nova_compute[260022]: 2025-10-01 14:00:18.376 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  1 10:00:18 np0005464214 nova_compute[260022]: 2025-10-01 14:00:18.376 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 10:00:18 np0005464214 nova_compute[260022]: 2025-10-01 14:00:18.376 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Oct  1 10:00:18 np0005464214 nova_compute[260022]: 2025-10-01 14:00:18.395 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Oct  1 10:00:19 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1722: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:00:19 np0005464214 nova_compute[260022]: 2025-10-01 14:00:19.364 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 10:00:19 np0005464214 nova_compute[260022]: 2025-10-01 14:00:19.406 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 10:00:19 np0005464214 nova_compute[260022]: 2025-10-01 14:00:19.407 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 10:00:19 np0005464214 nova_compute[260022]: 2025-10-01 14:00:19.407 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 10:00:19 np0005464214 nova_compute[260022]: 2025-10-01 14:00:19.408 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  1 10:00:19 np0005464214 nova_compute[260022]: 2025-10-01 14:00:19.408 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 10:00:19 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  1 10:00:19 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2337189917' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  1 10:00:19 np0005464214 nova_compute[260022]: 2025-10-01 14:00:19.866 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 10:00:20 np0005464214 nova_compute[260022]: 2025-10-01 14:00:20.066 2 WARNING nova.virt.libvirt.driver [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  1 10:00:20 np0005464214 nova_compute[260022]: 2025-10-01 14:00:20.067 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5060MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  1 10:00:20 np0005464214 nova_compute[260022]: 2025-10-01 14:00:20.067 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 10:00:20 np0005464214 nova_compute[260022]: 2025-10-01 14:00:20.068 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 10:00:20 np0005464214 nova_compute[260022]: 2025-10-01 14:00:20.149 2 INFO nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Instance 3d91db2b-812b-47ea-a0f8-0384b4c68597 has allocations against this compute host but is not found in the database.#033[00m
Oct  1 10:00:20 np0005464214 nova_compute[260022]: 2025-10-01 14:00:20.174 2 INFO nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Instance 84424f30-80d3-425e-b60f-86809ad3c076 has allocations against this compute host but is not found in the database.#033[00m
Oct  1 10:00:20 np0005464214 nova_compute[260022]: 2025-10-01 14:00:20.190 2 INFO nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Instance 1cecb2c6-69e6-4006-b96b-9e11a42c9cb1 has allocations against this compute host but is not found in the database.#033[00m
Oct  1 10:00:20 np0005464214 nova_compute[260022]: 2025-10-01 14:00:20.191 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  1 10:00:20 np0005464214 nova_compute[260022]: 2025-10-01 14:00:20.191 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  1 10:00:20 np0005464214 nova_compute[260022]: 2025-10-01 14:00:20.431 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 10:00:20 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  1 10:00:20 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/452765038' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  1 10:00:20 np0005464214 nova_compute[260022]: 2025-10-01 14:00:20.893 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.462s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 10:00:20 np0005464214 nova_compute[260022]: 2025-10-01 14:00:20.901 2 DEBUG nova.compute.provider_tree [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Inventory has not changed in ProviderTree for provider: c1b9017d-7e6f-44ea-9ee2-bc19313d736f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  1 10:00:20 np0005464214 nova_compute[260022]: 2025-10-01 14:00:20.920 2 DEBUG nova.scheduler.client.report [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Inventory has not changed for provider c1b9017d-7e6f-44ea-9ee2-bc19313d736f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  1 10:00:20 np0005464214 nova_compute[260022]: 2025-10-01 14:00:20.922 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  1 10:00:20 np0005464214 nova_compute[260022]: 2025-10-01 14:00:20.922 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.854s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 10:00:21 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1723: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:00:21 np0005464214 podman[293320]: 2025-10-01 14:00:21.539301815 +0000 UTC m=+0.077910165 container health_status dfa509645741d50db256c392f2c7e50ef8a1653f296eb853aefbe3d2ba4746c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20250923, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  1 10:00:21 np0005464214 podman[293318]: 2025-10-01 14:00:21.54384204 +0000 UTC m=+0.082796701 container health_status a1dc94033b3cdaf0c1939938a446bc6911aa0e2bf0fa0c22c3e618390e8d96e1 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, org.label-schema.build-date=20250923, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_build_tag=36bccb96575468ec919301205d8daa2c, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct  1 10:00:21 np0005464214 podman[293317]: 2025-10-01 14:00:21.569351109 +0000 UTC m=+0.113694121 container health_status 583cde33fa111e3f81ff9a7adf51b7bee43cd123d54d60383b2d474b3b5066ad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20250923, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Oct  1 10:00:21 np0005464214 podman[293319]: 2025-10-01 14:00:21.581550226 +0000 UTC m=+0.115524179 container health_status c13c85560319b246b9406aed1b6b3015b2c424d3ffff37d0301b6634f5976b0d (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250923, org.label-schema.license=GPLv2, config_id=iscsid, container_name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 10:00:21 np0005464214 nova_compute[260022]: 2025-10-01 14:00:21.903 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  1 10:00:22 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 10:00:23 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1724: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:00:25 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1725: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:00:25 np0005464214 nova_compute[260022]: 2025-10-01 14:00:25.344 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  1 10:00:25 np0005464214 nova_compute[260022]: 2025-10-01 14:00:25.345 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct  1 10:00:25 np0005464214 nova_compute[260022]: 2025-10-01 14:00:25.345 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct  1 10:00:25 np0005464214 nova_compute[260022]: 2025-10-01 14:00:25.364 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct  1 10:00:25 np0005464214 nova_compute[260022]: 2025-10-01 14:00:25.364 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  1 10:00:26 np0005464214 nova_compute[260022]: 2025-10-01 14:00:26.344 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  1 10:00:26 np0005464214 nova_compute[260022]: 2025-10-01 14:00:26.345 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  1 10:00:27 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1726: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:00:27 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 10:00:29 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1727: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:00:29 np0005464214 nova_compute[260022]: 2025-10-01 14:00:29.344 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  1 10:00:31 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1728: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:00:32 np0005464214 nova_compute[260022]: 2025-10-01 14:00:32.345 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  1 10:00:32 np0005464214 nova_compute[260022]: 2025-10-01 14:00:32.346 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Oct  1 10:00:32 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 10:00:33 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1729: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:00:35 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1730: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:00:36 np0005464214 ceph-mon[74802]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #78. Immutable memtables: 0.
Oct  1 10:00:36 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:00:36.282314) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  1 10:00:36 np0005464214 ceph-mon[74802]: rocksdb: [db/flush_job.cc:856] [default] [JOB 43] Flushing memtable with next log file: 78
Oct  1 10:00:36 np0005464214 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759327236282366, "job": 43, "event": "flush_started", "num_memtables": 1, "num_entries": 757, "num_deletes": 251, "total_data_size": 961585, "memory_usage": 974984, "flush_reason": "Manual Compaction"}
Oct  1 10:00:36 np0005464214 ceph-mon[74802]: rocksdb: [db/flush_job.cc:885] [default] [JOB 43] Level-0 flush table #79: started
Oct  1 10:00:36 np0005464214 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759327236290944, "cf_name": "default", "job": 43, "event": "table_file_creation", "file_number": 79, "file_size": 952583, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 34356, "largest_seqno": 35112, "table_properties": {"data_size": 948651, "index_size": 1712, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1157, "raw_key_size": 8738, "raw_average_key_size": 19, "raw_value_size": 940789, "raw_average_value_size": 2095, "num_data_blocks": 76, "num_entries": 449, "num_filter_entries": 449, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759327173, "oldest_key_time": 1759327173, "file_creation_time": 1759327236, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e1fb0bd7-c6bf-40f2-8bfb-ffd039289f42", "db_session_id": "NJZTWL88H5HSB4Q4NEC9", "orig_file_number": 79, "seqno_to_time_mapping": "N/A"}}
Oct  1 10:00:36 np0005464214 ceph-mon[74802]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 43] Flush lasted 8687 microseconds, and 5477 cpu microseconds.
Oct  1 10:00:36 np0005464214 ceph-mon[74802]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  1 10:00:36 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:00:36.290997) [db/flush_job.cc:967] [default] [JOB 43] Level-0 flush table #79: 952583 bytes OK
Oct  1 10:00:36 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:00:36.291029) [db/memtable_list.cc:519] [default] Level-0 commit table #79 started
Oct  1 10:00:36 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:00:36.292690) [db/memtable_list.cc:722] [default] Level-0 commit table #79: memtable #1 done
Oct  1 10:00:36 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:00:36.292712) EVENT_LOG_v1 {"time_micros": 1759327236292705, "job": 43, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  1 10:00:36 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:00:36.292762) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  1 10:00:36 np0005464214 ceph-mon[74802]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 43] Try to delete WAL files size 957739, prev total WAL file size 957739, number of live WAL files 2.
Oct  1 10:00:36 np0005464214 ceph-mon[74802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000075.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  1 10:00:36 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:00:36.293628) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730033303132' seq:72057594037927935, type:22 .. '7061786F730033323634' seq:0, type:0; will stop at (end)
Oct  1 10:00:36 np0005464214 ceph-mon[74802]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 44] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  1 10:00:36 np0005464214 ceph-mon[74802]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 43 Base level 0, inputs: [79(930KB)], [77(8112KB)]
Oct  1 10:00:36 np0005464214 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759327236293699, "job": 44, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [79], "files_L6": [77], "score": -1, "input_data_size": 9259375, "oldest_snapshot_seqno": -1}
Oct  1 10:00:36 np0005464214 ceph-mon[74802]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 44] Generated table #80: 5389 keys, 7498110 bytes, temperature: kUnknown
Oct  1 10:00:36 np0005464214 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759327236353001, "cf_name": "default", "job": 44, "event": "table_file_creation", "file_number": 80, "file_size": 7498110, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7463004, "index_size": 20532, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13509, "raw_key_size": 137265, "raw_average_key_size": 25, "raw_value_size": 7366295, "raw_average_value_size": 1366, "num_data_blocks": 835, "num_entries": 5389, "num_filter_entries": 5389, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759324077, "oldest_key_time": 0, "file_creation_time": 1759327236, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e1fb0bd7-c6bf-40f2-8bfb-ffd039289f42", "db_session_id": "NJZTWL88H5HSB4Q4NEC9", "orig_file_number": 80, "seqno_to_time_mapping": "N/A"}}
Oct  1 10:00:36 np0005464214 ceph-mon[74802]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  1 10:00:36 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:00:36.353300) [db/compaction/compaction_job.cc:1663] [default] [JOB 44] Compacted 1@0 + 1@6 files to L6 => 7498110 bytes
Oct  1 10:00:36 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:00:36.355751) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 155.9 rd, 126.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.9, 7.9 +0.0 blob) out(7.2 +0.0 blob), read-write-amplify(17.6) write-amplify(7.9) OK, records in: 5903, records dropped: 514 output_compression: NoCompression
Oct  1 10:00:36 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:00:36.355781) EVENT_LOG_v1 {"time_micros": 1759327236355767, "job": 44, "event": "compaction_finished", "compaction_time_micros": 59391, "compaction_time_cpu_micros": 34064, "output_level": 6, "num_output_files": 1, "total_output_size": 7498110, "num_input_records": 5903, "num_output_records": 5389, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  1 10:00:36 np0005464214 ceph-mon[74802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000079.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  1 10:00:36 np0005464214 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759327236356194, "job": 44, "event": "table_file_deletion", "file_number": 79}
Oct  1 10:00:36 np0005464214 ceph-mon[74802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000077.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  1 10:00:36 np0005464214 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759327236359146, "job": 44, "event": "table_file_deletion", "file_number": 77}
Oct  1 10:00:36 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:00:36.293525) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 10:00:36 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:00:36.359266) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 10:00:36 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:00:36.359276) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 10:00:36 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:00:36.359279) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 10:00:36 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:00:36.359282) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 10:00:36 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:00:36.359285) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 10:00:37 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1731: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:00:37 np0005464214 nova_compute[260022]: 2025-10-01 14:00:37.367 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  1 10:00:37 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 10:00:39 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1732: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:00:41 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1733: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:00:42 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 10:00:43 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1734: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:00:45 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1735: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:00:45 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 14:00:45.479 161890 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=22, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'd2:60:33', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '86:05:a5:2a:6f:f1'}, ipsec=False) old=SB_Global(nb_cfg=21) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct  1 10:00:45 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 14:00:45.481 161890 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct  1 10:00:46 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 14:00:46.483 161890 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=7280030e-2ba6-406c-9fae-f8284a927c47, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '22'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct  1 10:00:47 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1736: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:00:47 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 10:00:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 10:00:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 10:00:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 10:00:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 10:00:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 10:00:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 10:00:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] Optimize plan auto_2025-10-01_14:00:47
Oct  1 10:00:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  1 10:00:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] do_upmap
Oct  1 10:00:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] pools ['volumes', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'vms', 'images', 'default.rgw.meta', 'backups', 'default.rgw.control', 'default.rgw.log', '.mgr', '.rgw.root']
Oct  1 10:00:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] prepared 0/10 changes
Oct  1 10:00:48 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  1 10:00:48 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  1 10:00:48 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  1 10:00:48 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  1 10:00:48 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  1 10:00:48 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 10:00:48 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev a372d3fe-1915-4d22-b989-f4e718869959 does not exist
Oct  1 10:00:48 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev a6bc76c2-df8c-4df9-a224-5b4d6ac69046 does not exist
Oct  1 10:00:48 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev 0d1bdaae-986e-4371-bfb6-c07ebbe68dca does not exist
Oct  1 10:00:48 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  1 10:00:48 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  1 10:00:48 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  1 10:00:48 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  1 10:00:48 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  1 10:00:48 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  1 10:00:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  1 10:00:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  1 10:00:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  1 10:00:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  1 10:00:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  1 10:00:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  1 10:00:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  1 10:00:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  1 10:00:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  1 10:00:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  1 10:00:48 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  1 10:00:48 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 10:00:48 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  1 10:00:48 np0005464214 podman[293678]: 2025-10-01 14:00:48.87449189 +0000 UTC m=+0.069425235 container create 871d4148f403897828fac9d0c2c5c31b76b4d29879b366c4596a4b5ae7e9be4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_mahavira, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 10:00:48 np0005464214 systemd[1]: Started libpod-conmon-871d4148f403897828fac9d0c2c5c31b76b4d29879b366c4596a4b5ae7e9be4a.scope.
Oct  1 10:00:48 np0005464214 podman[293678]: 2025-10-01 14:00:48.848601348 +0000 UTC m=+0.043534703 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 10:00:48 np0005464214 systemd[1]: Started libcrun container.
Oct  1 10:00:48 np0005464214 podman[293678]: 2025-10-01 14:00:48.975037562 +0000 UTC m=+0.169970917 container init 871d4148f403897828fac9d0c2c5c31b76b4d29879b366c4596a4b5ae7e9be4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_mahavira, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct  1 10:00:48 np0005464214 podman[293678]: 2025-10-01 14:00:48.987188788 +0000 UTC m=+0.182122123 container start 871d4148f403897828fac9d0c2c5c31b76b4d29879b366c4596a4b5ae7e9be4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_mahavira, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Oct  1 10:00:48 np0005464214 podman[293678]: 2025-10-01 14:00:48.99351807 +0000 UTC m=+0.188451415 container attach 871d4148f403897828fac9d0c2c5c31b76b4d29879b366c4596a4b5ae7e9be4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_mahavira, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 10:00:48 np0005464214 zealous_mahavira[293695]: 167 167
Oct  1 10:00:48 np0005464214 systemd[1]: libpod-871d4148f403897828fac9d0c2c5c31b76b4d29879b366c4596a4b5ae7e9be4a.scope: Deactivated successfully.
Oct  1 10:00:48 np0005464214 podman[293678]: 2025-10-01 14:00:48.994846531 +0000 UTC m=+0.189779896 container died 871d4148f403897828fac9d0c2c5c31b76b4d29879b366c4596a4b5ae7e9be4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_mahavira, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct  1 10:00:49 np0005464214 systemd[1]: var-lib-containers-storage-overlay-93e17d8a8dd5d0d80f60aa1e762772c9f33ccb5404df9cddfb70e31e5ecc0cc4-merged.mount: Deactivated successfully.
Oct  1 10:00:49 np0005464214 podman[293678]: 2025-10-01 14:00:49.050938172 +0000 UTC m=+0.245871527 container remove 871d4148f403897828fac9d0c2c5c31b76b4d29879b366c4596a4b5ae7e9be4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_mahavira, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 10:00:49 np0005464214 systemd[1]: libpod-conmon-871d4148f403897828fac9d0c2c5c31b76b4d29879b366c4596a4b5ae7e9be4a.scope: Deactivated successfully.
Oct  1 10:00:49 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1737: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:00:49 np0005464214 podman[293720]: 2025-10-01 14:00:49.343634237 +0000 UTC m=+0.120493807 container create 1a6a1f5752eda4f7fd52fb5da5577d0b8cf026b05c2644558679992da84fa588 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_jepsen, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 10:00:49 np0005464214 podman[293720]: 2025-10-01 14:00:49.264496253 +0000 UTC m=+0.041355873 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 10:00:49 np0005464214 systemd[1]: Started libpod-conmon-1a6a1f5752eda4f7fd52fb5da5577d0b8cf026b05c2644558679992da84fa588.scope.
Oct  1 10:00:49 np0005464214 systemd[1]: Started libcrun container.
Oct  1 10:00:49 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8347d169a4153be9252aa2bd2db97402d620e5cf7f9e870fe7ad519eb8f40927/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 10:00:49 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8347d169a4153be9252aa2bd2db97402d620e5cf7f9e870fe7ad519eb8f40927/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 10:00:49 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8347d169a4153be9252aa2bd2db97402d620e5cf7f9e870fe7ad519eb8f40927/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 10:00:49 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8347d169a4153be9252aa2bd2db97402d620e5cf7f9e870fe7ad519eb8f40927/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 10:00:49 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8347d169a4153be9252aa2bd2db97402d620e5cf7f9e870fe7ad519eb8f40927/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  1 10:00:49 np0005464214 podman[293720]: 2025-10-01 14:00:49.464521194 +0000 UTC m=+0.241380814 container init 1a6a1f5752eda4f7fd52fb5da5577d0b8cf026b05c2644558679992da84fa588 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_jepsen, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 10:00:49 np0005464214 podman[293720]: 2025-10-01 14:00:49.476107792 +0000 UTC m=+0.252967342 container start 1a6a1f5752eda4f7fd52fb5da5577d0b8cf026b05c2644558679992da84fa588 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_jepsen, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2)
Oct  1 10:00:49 np0005464214 podman[293720]: 2025-10-01 14:00:49.48201818 +0000 UTC m=+0.258877730 container attach 1a6a1f5752eda4f7fd52fb5da5577d0b8cf026b05c2644558679992da84fa588 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_jepsen, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 10:00:49 np0005464214 nova_compute[260022]: 2025-10-01 14:00:49.758 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 10:00:50 np0005464214 happy_jepsen[293736]: --> passed data devices: 0 physical, 3 LVM
Oct  1 10:00:50 np0005464214 happy_jepsen[293736]: --> relative data size: 1.0
Oct  1 10:00:50 np0005464214 happy_jepsen[293736]: --> All data devices are unavailable
Oct  1 10:00:50 np0005464214 systemd[1]: libpod-1a6a1f5752eda4f7fd52fb5da5577d0b8cf026b05c2644558679992da84fa588.scope: Deactivated successfully.
Oct  1 10:00:50 np0005464214 systemd[1]: libpod-1a6a1f5752eda4f7fd52fb5da5577d0b8cf026b05c2644558679992da84fa588.scope: Consumed 1.113s CPU time.
Oct  1 10:00:50 np0005464214 podman[293765]: 2025-10-01 14:00:50.693832987 +0000 UTC m=+0.036489269 container died 1a6a1f5752eda4f7fd52fb5da5577d0b8cf026b05c2644558679992da84fa588 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_jepsen, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 10:00:50 np0005464214 systemd[1]: var-lib-containers-storage-overlay-8347d169a4153be9252aa2bd2db97402d620e5cf7f9e870fe7ad519eb8f40927-merged.mount: Deactivated successfully.
Oct  1 10:00:50 np0005464214 podman[293765]: 2025-10-01 14:00:50.762816958 +0000 UTC m=+0.105473200 container remove 1a6a1f5752eda4f7fd52fb5da5577d0b8cf026b05c2644558679992da84fa588 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_jepsen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 10:00:50 np0005464214 systemd[1]: libpod-conmon-1a6a1f5752eda4f7fd52fb5da5577d0b8cf026b05c2644558679992da84fa588.scope: Deactivated successfully.
Oct  1 10:00:51 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1738: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:00:51 np0005464214 podman[293917]: 2025-10-01 14:00:51.633932077 +0000 UTC m=+0.066811692 container create 3f14b18ebf8282e0b63de8a75d7b944db5553738158af4111774b9096a4ccaf9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_dewdney, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Oct  1 10:00:51 np0005464214 podman[293917]: 2025-10-01 14:00:51.60564659 +0000 UTC m=+0.038526255 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 10:00:51 np0005464214 systemd[1]: Started libpod-conmon-3f14b18ebf8282e0b63de8a75d7b944db5553738158af4111774b9096a4ccaf9.scope.
Oct  1 10:00:51 np0005464214 systemd[1]: Started libcrun container.
Oct  1 10:00:51 np0005464214 podman[293917]: 2025-10-01 14:00:51.78110174 +0000 UTC m=+0.213981335 container init 3f14b18ebf8282e0b63de8a75d7b944db5553738158af4111774b9096a4ccaf9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_dewdney, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct  1 10:00:51 np0005464214 podman[293917]: 2025-10-01 14:00:51.78833565 +0000 UTC m=+0.221215255 container start 3f14b18ebf8282e0b63de8a75d7b944db5553738158af4111774b9096a4ccaf9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_dewdney, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Oct  1 10:00:51 np0005464214 podman[293917]: 2025-10-01 14:00:51.7930414 +0000 UTC m=+0.225920995 container attach 3f14b18ebf8282e0b63de8a75d7b944db5553738158af4111774b9096a4ccaf9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_dewdney, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef)
Oct  1 10:00:51 np0005464214 epic_dewdney[293972]: 167 167
Oct  1 10:00:51 np0005464214 systemd[1]: libpod-3f14b18ebf8282e0b63de8a75d7b944db5553738158af4111774b9096a4ccaf9.scope: Deactivated successfully.
Oct  1 10:00:51 np0005464214 conmon[293972]: conmon 3f14b18ebf8282e0b63d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-3f14b18ebf8282e0b63de8a75d7b944db5553738158af4111774b9096a4ccaf9.scope/container/memory.events
Oct  1 10:00:51 np0005464214 podman[293936]: 2025-10-01 14:00:51.796503269 +0000 UTC m=+0.096234056 container health_status c13c85560319b246b9406aed1b6b3015b2c424d3ffff37d0301b6634f5976b0d (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c, org.label-schema.build-date=20250923, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 10:00:51 np0005464214 podman[293935]: 2025-10-01 14:00:51.797396388 +0000 UTC m=+0.100981628 container health_status a1dc94033b3cdaf0c1939938a446bc6911aa0e2bf0fa0c22c3e618390e8d96e1 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20250923, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=36bccb96575468ec919301205d8daa2c, managed_by=edpm_ansible)
Oct  1 10:00:51 np0005464214 podman[293917]: 2025-10-01 14:00:51.797494101 +0000 UTC m=+0.230373726 container died 3f14b18ebf8282e0b63de8a75d7b944db5553738158af4111774b9096a4ccaf9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_dewdney, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2)
Oct  1 10:00:51 np0005464214 podman[293942]: 2025-10-01 14:00:51.810787653 +0000 UTC m=+0.105937945 container health_status dfa509645741d50db256c392f2c7e50ef8a1653f296eb853aefbe3d2ba4746c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, tcib_build_tag=36bccb96575468ec919301205d8daa2c, org.label-schema.build-date=20250923, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Oct  1 10:00:51 np0005464214 systemd[1]: var-lib-containers-storage-overlay-32b3c50bc0f21ae937cbc0b7849fcb712e587ac77834b67a8e3574c50fbaa183-merged.mount: Deactivated successfully.
Oct  1 10:00:51 np0005464214 podman[293932]: 2025-10-01 14:00:51.829664893 +0000 UTC m=+0.146731831 container health_status 583cde33fa111e3f81ff9a7adf51b7bee43cd123d54d60383b2d474b3b5066ad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20250923, tcib_build_tag=36bccb96575468ec919301205d8daa2c)
Oct  1 10:00:51 np0005464214 podman[293917]: 2025-10-01 14:00:51.842098137 +0000 UTC m=+0.274977722 container remove 3f14b18ebf8282e0b63de8a75d7b944db5553738158af4111774b9096a4ccaf9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_dewdney, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 10:00:51 np0005464214 systemd[1]: libpod-conmon-3f14b18ebf8282e0b63de8a75d7b944db5553738158af4111774b9096a4ccaf9.scope: Deactivated successfully.
Oct  1 10:00:52 np0005464214 podman[294037]: 2025-10-01 14:00:52.077579484 +0000 UTC m=+0.078281647 container create 87a22b25afd3f1f0a7f4c160e2bda9f223fbf2a781b85cbfc18143c0d6a95dc0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_kare, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 10:00:52 np0005464214 systemd[1]: Started libpod-conmon-87a22b25afd3f1f0a7f4c160e2bda9f223fbf2a781b85cbfc18143c0d6a95dc0.scope.
Oct  1 10:00:52 np0005464214 podman[294037]: 2025-10-01 14:00:52.049480852 +0000 UTC m=+0.050183095 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 10:00:52 np0005464214 systemd[1]: Started libcrun container.
Oct  1 10:00:52 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/baa10183c5d3d764e003f8429807b51461fe65085261dc93198c3cebfdca5d14/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 10:00:52 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/baa10183c5d3d764e003f8429807b51461fe65085261dc93198c3cebfdca5d14/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 10:00:52 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/baa10183c5d3d764e003f8429807b51461fe65085261dc93198c3cebfdca5d14/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 10:00:52 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/baa10183c5d3d764e003f8429807b51461fe65085261dc93198c3cebfdca5d14/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 10:00:52 np0005464214 podman[294037]: 2025-10-01 14:00:52.194852968 +0000 UTC m=+0.195555201 container init 87a22b25afd3f1f0a7f4c160e2bda9f223fbf2a781b85cbfc18143c0d6a95dc0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_kare, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 10:00:52 np0005464214 podman[294037]: 2025-10-01 14:00:52.201435527 +0000 UTC m=+0.202137710 container start 87a22b25afd3f1f0a7f4c160e2bda9f223fbf2a781b85cbfc18143c0d6a95dc0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_kare, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 10:00:52 np0005464214 podman[294037]: 2025-10-01 14:00:52.205995961 +0000 UTC m=+0.206698194 container attach 87a22b25afd3f1f0a7f4c160e2bda9f223fbf2a781b85cbfc18143c0d6a95dc0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_kare, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Oct  1 10:00:52 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 10:00:52 np0005464214 stupefied_kare[294053]: {
Oct  1 10:00:52 np0005464214 stupefied_kare[294053]:    "0": [
Oct  1 10:00:52 np0005464214 stupefied_kare[294053]:        {
Oct  1 10:00:52 np0005464214 stupefied_kare[294053]:            "devices": [
Oct  1 10:00:52 np0005464214 stupefied_kare[294053]:                "/dev/loop3"
Oct  1 10:00:52 np0005464214 stupefied_kare[294053]:            ],
Oct  1 10:00:52 np0005464214 stupefied_kare[294053]:            "lv_name": "ceph_lv0",
Oct  1 10:00:52 np0005464214 stupefied_kare[294053]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  1 10:00:52 np0005464214 stupefied_kare[294053]:            "lv_size": "21470642176",
Oct  1 10:00:52 np0005464214 stupefied_kare[294053]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 10:00:52 np0005464214 stupefied_kare[294053]:            "lv_uuid": "wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS",
Oct  1 10:00:52 np0005464214 stupefied_kare[294053]:            "name": "ceph_lv0",
Oct  1 10:00:52 np0005464214 stupefied_kare[294053]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  1 10:00:52 np0005464214 stupefied_kare[294053]:            "tags": {
Oct  1 10:00:52 np0005464214 stupefied_kare[294053]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  1 10:00:52 np0005464214 stupefied_kare[294053]:                "ceph.block_uuid": "wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS",
Oct  1 10:00:52 np0005464214 stupefied_kare[294053]:                "ceph.cephx_lockbox_secret": "",
Oct  1 10:00:52 np0005464214 stupefied_kare[294053]:                "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 10:00:52 np0005464214 stupefied_kare[294053]:                "ceph.cluster_name": "ceph",
Oct  1 10:00:52 np0005464214 stupefied_kare[294053]:                "ceph.crush_device_class": "",
Oct  1 10:00:52 np0005464214 stupefied_kare[294053]:                "ceph.encrypted": "0",
Oct  1 10:00:52 np0005464214 stupefied_kare[294053]:                "ceph.osd_fsid": "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982",
Oct  1 10:00:52 np0005464214 stupefied_kare[294053]:                "ceph.osd_id": "0",
Oct  1 10:00:52 np0005464214 stupefied_kare[294053]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 10:00:52 np0005464214 stupefied_kare[294053]:                "ceph.type": "block",
Oct  1 10:00:52 np0005464214 stupefied_kare[294053]:                "ceph.vdo": "0"
Oct  1 10:00:52 np0005464214 stupefied_kare[294053]:            },
Oct  1 10:00:52 np0005464214 stupefied_kare[294053]:            "type": "block",
Oct  1 10:00:52 np0005464214 stupefied_kare[294053]:            "vg_name": "ceph_vg0"
Oct  1 10:00:52 np0005464214 stupefied_kare[294053]:        }
Oct  1 10:00:52 np0005464214 stupefied_kare[294053]:    ],
Oct  1 10:00:52 np0005464214 stupefied_kare[294053]:    "1": [
Oct  1 10:00:52 np0005464214 stupefied_kare[294053]:        {
Oct  1 10:00:52 np0005464214 stupefied_kare[294053]:            "devices": [
Oct  1 10:00:52 np0005464214 stupefied_kare[294053]:                "/dev/loop4"
Oct  1 10:00:52 np0005464214 stupefied_kare[294053]:            ],
Oct  1 10:00:52 np0005464214 stupefied_kare[294053]:            "lv_name": "ceph_lv1",
Oct  1 10:00:52 np0005464214 stupefied_kare[294053]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  1 10:00:52 np0005464214 stupefied_kare[294053]:            "lv_size": "21470642176",
Oct  1 10:00:52 np0005464214 stupefied_kare[294053]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f5852bc7-e830-489a-b8a9-42dfbbe71426,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 10:00:52 np0005464214 stupefied_kare[294053]:            "lv_uuid": "BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY",
Oct  1 10:00:52 np0005464214 stupefied_kare[294053]:            "name": "ceph_lv1",
Oct  1 10:00:52 np0005464214 stupefied_kare[294053]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  1 10:00:52 np0005464214 stupefied_kare[294053]:            "tags": {
Oct  1 10:00:52 np0005464214 stupefied_kare[294053]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  1 10:00:52 np0005464214 stupefied_kare[294053]:                "ceph.block_uuid": "BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY",
Oct  1 10:00:52 np0005464214 stupefied_kare[294053]:                "ceph.cephx_lockbox_secret": "",
Oct  1 10:00:52 np0005464214 stupefied_kare[294053]:                "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 10:00:52 np0005464214 stupefied_kare[294053]:                "ceph.cluster_name": "ceph",
Oct  1 10:00:52 np0005464214 stupefied_kare[294053]:                "ceph.crush_device_class": "",
Oct  1 10:00:52 np0005464214 stupefied_kare[294053]:                "ceph.encrypted": "0",
Oct  1 10:00:52 np0005464214 stupefied_kare[294053]:                "ceph.osd_fsid": "f5852bc7-e830-489a-b8a9-42dfbbe71426",
Oct  1 10:00:52 np0005464214 stupefied_kare[294053]:                "ceph.osd_id": "1",
Oct  1 10:00:52 np0005464214 stupefied_kare[294053]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 10:00:52 np0005464214 stupefied_kare[294053]:                "ceph.type": "block",
Oct  1 10:00:52 np0005464214 stupefied_kare[294053]:                "ceph.vdo": "0"
Oct  1 10:00:52 np0005464214 stupefied_kare[294053]:            },
Oct  1 10:00:52 np0005464214 stupefied_kare[294053]:            "type": "block",
Oct  1 10:00:52 np0005464214 stupefied_kare[294053]:            "vg_name": "ceph_vg1"
Oct  1 10:00:52 np0005464214 stupefied_kare[294053]:        }
Oct  1 10:00:52 np0005464214 stupefied_kare[294053]:    ],
Oct  1 10:00:52 np0005464214 stupefied_kare[294053]:    "2": [
Oct  1 10:00:52 np0005464214 stupefied_kare[294053]:        {
Oct  1 10:00:52 np0005464214 stupefied_kare[294053]:            "devices": [
Oct  1 10:00:52 np0005464214 stupefied_kare[294053]:                "/dev/loop5"
Oct  1 10:00:52 np0005464214 stupefied_kare[294053]:            ],
Oct  1 10:00:52 np0005464214 stupefied_kare[294053]:            "lv_name": "ceph_lv2",
Oct  1 10:00:52 np0005464214 stupefied_kare[294053]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  1 10:00:52 np0005464214 stupefied_kare[294053]:            "lv_size": "21470642176",
Oct  1 10:00:52 np0005464214 stupefied_kare[294053]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c4c937e2-a8a8-47c3-af37-fdedb6fff1f9,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 10:00:52 np0005464214 stupefied_kare[294053]:            "lv_uuid": "mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK",
Oct  1 10:00:52 np0005464214 stupefied_kare[294053]:            "name": "ceph_lv2",
Oct  1 10:00:52 np0005464214 stupefied_kare[294053]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  1 10:00:52 np0005464214 stupefied_kare[294053]:            "tags": {
Oct  1 10:00:52 np0005464214 stupefied_kare[294053]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  1 10:00:52 np0005464214 stupefied_kare[294053]:                "ceph.block_uuid": "mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK",
Oct  1 10:00:52 np0005464214 stupefied_kare[294053]:                "ceph.cephx_lockbox_secret": "",
Oct  1 10:00:52 np0005464214 stupefied_kare[294053]:                "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 10:00:52 np0005464214 stupefied_kare[294053]:                "ceph.cluster_name": "ceph",
Oct  1 10:00:52 np0005464214 stupefied_kare[294053]:                "ceph.crush_device_class": "",
Oct  1 10:00:52 np0005464214 stupefied_kare[294053]:                "ceph.encrypted": "0",
Oct  1 10:00:52 np0005464214 stupefied_kare[294053]:                "ceph.osd_fsid": "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9",
Oct  1 10:00:52 np0005464214 stupefied_kare[294053]:                "ceph.osd_id": "2",
Oct  1 10:00:52 np0005464214 stupefied_kare[294053]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 10:00:52 np0005464214 stupefied_kare[294053]:                "ceph.type": "block",
Oct  1 10:00:52 np0005464214 stupefied_kare[294053]:                "ceph.vdo": "0"
Oct  1 10:00:52 np0005464214 stupefied_kare[294053]:            },
Oct  1 10:00:52 np0005464214 stupefied_kare[294053]:            "type": "block",
Oct  1 10:00:52 np0005464214 stupefied_kare[294053]:            "vg_name": "ceph_vg2"
Oct  1 10:00:52 np0005464214 stupefied_kare[294053]:        }
Oct  1 10:00:52 np0005464214 stupefied_kare[294053]:    ]
Oct  1 10:00:52 np0005464214 stupefied_kare[294053]: }
Oct  1 10:00:53 np0005464214 systemd[1]: libpod-87a22b25afd3f1f0a7f4c160e2bda9f223fbf2a781b85cbfc18143c0d6a95dc0.scope: Deactivated successfully.
Oct  1 10:00:53 np0005464214 podman[294037]: 2025-10-01 14:00:53.012598383 +0000 UTC m=+1.013300576 container died 87a22b25afd3f1f0a7f4c160e2bda9f223fbf2a781b85cbfc18143c0d6a95dc0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_kare, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 10:00:53 np0005464214 systemd[1]: var-lib-containers-storage-overlay-baa10183c5d3d764e003f8429807b51461fe65085261dc93198c3cebfdca5d14-merged.mount: Deactivated successfully.
Oct  1 10:00:53 np0005464214 podman[294037]: 2025-10-01 14:00:53.125123046 +0000 UTC m=+1.125825209 container remove 87a22b25afd3f1f0a7f4c160e2bda9f223fbf2a781b85cbfc18143c0d6a95dc0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_kare, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Oct  1 10:00:53 np0005464214 systemd[1]: libpod-conmon-87a22b25afd3f1f0a7f4c160e2bda9f223fbf2a781b85cbfc18143c0d6a95dc0.scope: Deactivated successfully.
Oct  1 10:00:53 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1739: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:00:53 np0005464214 podman[294215]: 2025-10-01 14:00:53.915741139 +0000 UTC m=+0.045631310 container create fcd31b8994638ef2dffa7cb2f157aad7c7d84ce1324ec8aec78759168be3ad1f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_hodgkin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3)
Oct  1 10:00:53 np0005464214 systemd[1]: Started libpod-conmon-fcd31b8994638ef2dffa7cb2f157aad7c7d84ce1324ec8aec78759168be3ad1f.scope.
Oct  1 10:00:53 np0005464214 systemd[1]: Started libcrun container.
Oct  1 10:00:53 np0005464214 podman[294215]: 2025-10-01 14:00:53.896915901 +0000 UTC m=+0.026806112 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 10:00:54 np0005464214 podman[294215]: 2025-10-01 14:00:53.999965113 +0000 UTC m=+0.129855304 container init fcd31b8994638ef2dffa7cb2f157aad7c7d84ce1324ec8aec78759168be3ad1f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_hodgkin, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Oct  1 10:00:54 np0005464214 podman[294215]: 2025-10-01 14:00:54.009426454 +0000 UTC m=+0.139316635 container start fcd31b8994638ef2dffa7cb2f157aad7c7d84ce1324ec8aec78759168be3ad1f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_hodgkin, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 10:00:54 np0005464214 podman[294215]: 2025-10-01 14:00:54.012706768 +0000 UTC m=+0.142596949 container attach fcd31b8994638ef2dffa7cb2f157aad7c7d84ce1324ec8aec78759168be3ad1f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_hodgkin, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 10:00:54 np0005464214 adoring_hodgkin[294231]: 167 167
Oct  1 10:00:54 np0005464214 systemd[1]: libpod-fcd31b8994638ef2dffa7cb2f157aad7c7d84ce1324ec8aec78759168be3ad1f.scope: Deactivated successfully.
Oct  1 10:00:54 np0005464214 podman[294215]: 2025-10-01 14:00:54.017610964 +0000 UTC m=+0.147501145 container died fcd31b8994638ef2dffa7cb2f157aad7c7d84ce1324ec8aec78759168be3ad1f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_hodgkin, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 10:00:54 np0005464214 systemd[1]: var-lib-containers-storage-overlay-49d9ea93023b747970eb036b70b020d9e7967872103ea75de2429770178756ee-merged.mount: Deactivated successfully.
Oct  1 10:00:54 np0005464214 podman[294215]: 2025-10-01 14:00:54.063180861 +0000 UTC m=+0.193071052 container remove fcd31b8994638ef2dffa7cb2f157aad7c7d84ce1324ec8aec78759168be3ad1f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_hodgkin, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 10:00:54 np0005464214 systemd[1]: libpod-conmon-fcd31b8994638ef2dffa7cb2f157aad7c7d84ce1324ec8aec78759168be3ad1f.scope: Deactivated successfully.
Oct  1 10:00:54 np0005464214 podman[294254]: 2025-10-01 14:00:54.276123782 +0000 UTC m=+0.056116453 container create 04532c6c59c6a9df09714ac2c1f4fd6a25d00668800bcbdb724eb9e9af82852b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_banzai, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 10:00:54 np0005464214 systemd[1]: Started libpod-conmon-04532c6c59c6a9df09714ac2c1f4fd6a25d00668800bcbdb724eb9e9af82852b.scope.
Oct  1 10:00:54 np0005464214 systemd[1]: Started libcrun container.
Oct  1 10:00:54 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/30dca3a5cff3562a00d64ed6a42a03c447059815cbfc6b164dfd9d967fd74714/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 10:00:54 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/30dca3a5cff3562a00d64ed6a42a03c447059815cbfc6b164dfd9d967fd74714/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 10:00:54 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/30dca3a5cff3562a00d64ed6a42a03c447059815cbfc6b164dfd9d967fd74714/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 10:00:54 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/30dca3a5cff3562a00d64ed6a42a03c447059815cbfc6b164dfd9d967fd74714/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 10:00:54 np0005464214 podman[294254]: 2025-10-01 14:00:54.260437293 +0000 UTC m=+0.040429964 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 10:00:54 np0005464214 podman[294254]: 2025-10-01 14:00:54.356218015 +0000 UTC m=+0.136210786 container init 04532c6c59c6a9df09714ac2c1f4fd6a25d00668800bcbdb724eb9e9af82852b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_banzai, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Oct  1 10:00:54 np0005464214 podman[294254]: 2025-10-01 14:00:54.362839575 +0000 UTC m=+0.142832256 container start 04532c6c59c6a9df09714ac2c1f4fd6a25d00668800bcbdb724eb9e9af82852b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_banzai, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Oct  1 10:00:54 np0005464214 podman[294254]: 2025-10-01 14:00:54.366814581 +0000 UTC m=+0.146807292 container attach 04532c6c59c6a9df09714ac2c1f4fd6a25d00668800bcbdb724eb9e9af82852b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_banzai, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 10:00:55 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1740: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:00:55 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  1 10:00:55 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1592897968' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  1 10:00:55 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  1 10:00:55 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1592897968' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  1 10:00:55 np0005464214 lucid_banzai[294270]: {
Oct  1 10:00:55 np0005464214 lucid_banzai[294270]:    "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982": {
Oct  1 10:00:55 np0005464214 lucid_banzai[294270]:        "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 10:00:55 np0005464214 lucid_banzai[294270]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  1 10:00:55 np0005464214 lucid_banzai[294270]:        "osd_id": 0,
Oct  1 10:00:55 np0005464214 lucid_banzai[294270]:        "osd_uuid": "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982",
Oct  1 10:00:55 np0005464214 lucid_banzai[294270]:        "type": "bluestore"
Oct  1 10:00:55 np0005464214 lucid_banzai[294270]:    },
Oct  1 10:00:55 np0005464214 lucid_banzai[294270]:    "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9": {
Oct  1 10:00:55 np0005464214 lucid_banzai[294270]:        "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 10:00:55 np0005464214 lucid_banzai[294270]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  1 10:00:55 np0005464214 lucid_banzai[294270]:        "osd_id": 2,
Oct  1 10:00:55 np0005464214 lucid_banzai[294270]:        "osd_uuid": "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9",
Oct  1 10:00:55 np0005464214 lucid_banzai[294270]:        "type": "bluestore"
Oct  1 10:00:55 np0005464214 lucid_banzai[294270]:    },
Oct  1 10:00:55 np0005464214 lucid_banzai[294270]:    "f5852bc7-e830-489a-b8a9-42dfbbe71426": {
Oct  1 10:00:55 np0005464214 lucid_banzai[294270]:        "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 10:00:55 np0005464214 lucid_banzai[294270]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  1 10:00:55 np0005464214 lucid_banzai[294270]:        "osd_id": 1,
Oct  1 10:00:55 np0005464214 lucid_banzai[294270]:        "osd_uuid": "f5852bc7-e830-489a-b8a9-42dfbbe71426",
Oct  1 10:00:55 np0005464214 lucid_banzai[294270]:        "type": "bluestore"
Oct  1 10:00:55 np0005464214 lucid_banzai[294270]:    }
Oct  1 10:00:55 np0005464214 lucid_banzai[294270]: }
Oct  1 10:00:55 np0005464214 systemd[1]: libpod-04532c6c59c6a9df09714ac2c1f4fd6a25d00668800bcbdb724eb9e9af82852b.scope: Deactivated successfully.
Oct  1 10:00:55 np0005464214 podman[294254]: 2025-10-01 14:00:55.464903228 +0000 UTC m=+1.244895909 container died 04532c6c59c6a9df09714ac2c1f4fd6a25d00668800bcbdb724eb9e9af82852b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_banzai, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507)
Oct  1 10:00:55 np0005464214 systemd[1]: libpod-04532c6c59c6a9df09714ac2c1f4fd6a25d00668800bcbdb724eb9e9af82852b.scope: Consumed 1.106s CPU time.
Oct  1 10:00:55 np0005464214 systemd[1]: var-lib-containers-storage-overlay-30dca3a5cff3562a00d64ed6a42a03c447059815cbfc6b164dfd9d967fd74714-merged.mount: Deactivated successfully.
Oct  1 10:00:55 np0005464214 podman[294254]: 2025-10-01 14:00:55.541371746 +0000 UTC m=+1.321364457 container remove 04532c6c59c6a9df09714ac2c1f4fd6a25d00668800bcbdb724eb9e9af82852b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_banzai, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Oct  1 10:00:55 np0005464214 systemd[1]: libpod-conmon-04532c6c59c6a9df09714ac2c1f4fd6a25d00668800bcbdb724eb9e9af82852b.scope: Deactivated successfully.
Oct  1 10:00:55 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  1 10:00:55 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 10:00:55 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  1 10:00:55 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 10:00:55 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev f9c980b3-138f-40b4-ad1c-3b5f76afaea2 does not exist
Oct  1 10:00:55 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev e3df028d-1e60-4315-b1b7-2763c49c7642 does not exist
Oct  1 10:00:56 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 10:00:56 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 10:00:57 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1741: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:00:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] _maybe_adjust
Oct  1 10:00:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 10:00:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  1 10:00:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 10:00:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 10:00:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 10:00:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 10:00:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 10:00:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 10:00:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 10:00:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Oct  1 10:00:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 10:00:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct  1 10:00:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 10:00:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 10:00:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 10:00:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  1 10:00:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 10:00:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  1 10:00:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 10:00:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 10:00:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 10:00:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  1 10:00:57 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 10:00:59 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1742: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:01:01 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1743: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:01:02 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 10:01:03 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1744: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:01:05 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1745: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:01:07 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1746: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail; 4.2 KiB/s rd, 0 B/s wr, 7 op/s
Oct  1 10:01:07 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 10:01:09 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1747: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 0 B/s wr, 42 op/s
Oct  1 10:01:11 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1748: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 0 B/s wr, 42 op/s
Oct  1 10:01:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 14:01:12.328 161890 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 10:01:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 14:01:12.329 161890 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 10:01:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 14:01:12.329 161890 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 10:01:12 np0005464214 nova_compute[260022]: 2025-10-01 14:01:12.350 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 10:01:12 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 10:01:13 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1749: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Oct  1 10:01:13 np0005464214 nova_compute[260022]: 2025-10-01 14:01:13.732 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 10:01:15 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1750: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Oct  1 10:01:17 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1751: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Oct  1 10:01:17 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 10:01:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 10:01:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 10:01:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 10:01:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 10:01:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 10:01:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 10:01:19 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1752: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail; 31 KiB/s rd, 0 B/s wr, 52 op/s
Oct  1 10:01:20 np0005464214 nova_compute[260022]: 2025-10-01 14:01:20.344 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 10:01:20 np0005464214 nova_compute[260022]: 2025-10-01 14:01:20.345 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  1 10:01:20 np0005464214 nova_compute[260022]: 2025-10-01 14:01:20.345 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 10:01:20 np0005464214 nova_compute[260022]: 2025-10-01 14:01:20.369 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 10:01:20 np0005464214 nova_compute[260022]: 2025-10-01 14:01:20.370 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 10:01:20 np0005464214 nova_compute[260022]: 2025-10-01 14:01:20.370 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 10:01:20 np0005464214 nova_compute[260022]: 2025-10-01 14:01:20.371 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  1 10:01:20 np0005464214 nova_compute[260022]: 2025-10-01 14:01:20.371 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 10:01:20 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  1 10:01:20 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/229369265' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  1 10:01:20 np0005464214 nova_compute[260022]: 2025-10-01 14:01:20.814 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.442s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 10:01:21 np0005464214 nova_compute[260022]: 2025-10-01 14:01:21.037 2 WARNING nova.virt.libvirt.driver [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  1 10:01:21 np0005464214 nova_compute[260022]: 2025-10-01 14:01:21.039 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5056MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  1 10:01:21 np0005464214 nova_compute[260022]: 2025-10-01 14:01:21.039 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 10:01:21 np0005464214 nova_compute[260022]: 2025-10-01 14:01:21.039 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 10:01:21 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1753: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 0 B/s wr, 17 op/s
Oct  1 10:01:21 np0005464214 nova_compute[260022]: 2025-10-01 14:01:21.332 2 INFO nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Instance 3d91db2b-812b-47ea-a0f8-0384b4c68597 has allocations against this compute host but is not found in the database.#033[00m
Oct  1 10:01:21 np0005464214 nova_compute[260022]: 2025-10-01 14:01:21.356 2 INFO nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Instance 84424f30-80d3-425e-b60f-86809ad3c076 has allocations against this compute host but is not found in the database.#033[00m
Oct  1 10:01:21 np0005464214 nova_compute[260022]: 2025-10-01 14:01:21.357 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  1 10:01:21 np0005464214 nova_compute[260022]: 2025-10-01 14:01:21.358 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  1 10:01:21 np0005464214 nova_compute[260022]: 2025-10-01 14:01:21.434 2 DEBUG nova.scheduler.client.report [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Refreshing inventories for resource provider c1b9017d-7e6f-44ea-9ee2-bc19313d736f _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Oct  1 10:01:21 np0005464214 nova_compute[260022]: 2025-10-01 14:01:21.466 2 DEBUG nova.scheduler.client.report [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Updating ProviderTree inventory for provider c1b9017d-7e6f-44ea-9ee2-bc19313d736f from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Oct  1 10:01:21 np0005464214 nova_compute[260022]: 2025-10-01 14:01:21.467 2 DEBUG nova.compute.provider_tree [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Updating inventory in ProviderTree for provider c1b9017d-7e6f-44ea-9ee2-bc19313d736f with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Oct  1 10:01:21 np0005464214 nova_compute[260022]: 2025-10-01 14:01:21.495 2 DEBUG nova.scheduler.client.report [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Refreshing aggregate associations for resource provider c1b9017d-7e6f-44ea-9ee2-bc19313d736f, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Oct  1 10:01:21 np0005464214 nova_compute[260022]: 2025-10-01 14:01:21.514 2 DEBUG nova.scheduler.client.report [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Refreshing trait associations for resource provider c1b9017d-7e6f-44ea-9ee2-bc19313d736f, traits: HW_CPU_X86_SSE42,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_BMI,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_SSE2,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_BMI2,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_RESCUE_BFV,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NODE,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_F16C,HW_CPU_X86_MMX,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_SSE41,COMPUTE_DEVICE_TAGGING,COMPUTE_VOLUME_EXTEND,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_AVX,HW_CPU_X86_ABM,HW_CPU_X86_CLMUL,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_SVM,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_SECURITY_TPM_2_0,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_STORAGE_BUS_FDC,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_AESNI,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_ACCELERATORS,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_AMD_SVM,HW_CPU_X86_AVX2,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_FMA3,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_SSE,HW_CPU_X86_SHA,HW_CPU_X86_SSSE3,HW_CPU_X86_SSE4A _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Oct  1 10:01:21 np0005464214 nova_compute[260022]: 2025-10-01 14:01:21.573 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 10:01:22 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  1 10:01:22 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1987762859' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  1 10:01:22 np0005464214 nova_compute[260022]: 2025-10-01 14:01:22.032 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 10:01:22 np0005464214 nova_compute[260022]: 2025-10-01 14:01:22.040 2 DEBUG nova.compute.provider_tree [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Inventory has not changed in ProviderTree for provider: c1b9017d-7e6f-44ea-9ee2-bc19313d736f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  1 10:01:22 np0005464214 nova_compute[260022]: 2025-10-01 14:01:22.058 2 DEBUG nova.scheduler.client.report [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Inventory has not changed for provider c1b9017d-7e6f-44ea-9ee2-bc19313d736f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  1 10:01:22 np0005464214 nova_compute[260022]: 2025-10-01 14:01:22.061 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  1 10:01:22 np0005464214 nova_compute[260022]: 2025-10-01 14:01:22.062 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.022s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 10:01:22 np0005464214 podman[294422]: 2025-10-01 14:01:22.551865505 +0000 UTC m=+0.093969265 container health_status a1dc94033b3cdaf0c1939938a446bc6911aa0e2bf0fa0c22c3e618390e8d96e1 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250923, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct  1 10:01:22 np0005464214 podman[294424]: 2025-10-01 14:01:22.573789171 +0000 UTC m=+0.109069554 container health_status dfa509645741d50db256c392f2c7e50ef8a1653f296eb853aefbe3d2ba4746c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20250923, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Oct  1 10:01:22 np0005464214 podman[294421]: 2025-10-01 14:01:22.577829239 +0000 UTC m=+0.121951263 container health_status 583cde33fa111e3f81ff9a7adf51b7bee43cd123d54d60383b2d474b3b5066ad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=36bccb96575468ec919301205d8daa2c, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20250923, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct  1 10:01:22 np0005464214 podman[294423]: 2025-10-01 14:01:22.58414424 +0000 UTC m=+0.123786752 container health_status c13c85560319b246b9406aed1b6b3015b2c424d3ffff37d0301b6634f5976b0d (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=iscsid, org.label-schema.build-date=20250923, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct  1 10:01:22 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 10:01:23 np0005464214 nova_compute[260022]: 2025-10-01 14:01:23.064 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 10:01:23 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1754: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 0 B/s wr, 17 op/s
Oct  1 10:01:25 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1755: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:01:26 np0005464214 nova_compute[260022]: 2025-10-01 14:01:26.345 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 10:01:26 np0005464214 nova_compute[260022]: 2025-10-01 14:01:26.347 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 10:01:27 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1756: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:01:27 np0005464214 nova_compute[260022]: 2025-10-01 14:01:27.341 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 10:01:27 np0005464214 nova_compute[260022]: 2025-10-01 14:01:27.344 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 10:01:27 np0005464214 nova_compute[260022]: 2025-10-01 14:01:27.345 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  1 10:01:27 np0005464214 nova_compute[260022]: 2025-10-01 14:01:27.345 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  1 10:01:27 np0005464214 nova_compute[260022]: 2025-10-01 14:01:27.359 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  1 10:01:27 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 10:01:29 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1757: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:01:30 np0005464214 nova_compute[260022]: 2025-10-01 14:01:30.344 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 10:01:31 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1758: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:01:32 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 10:01:33 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1759: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:01:35 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1760: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:01:37 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1761: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:01:37 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 10:01:39 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1762: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:01:41 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1763: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:01:42 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 10:01:43 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1764: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:01:45 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1765: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:01:45 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 14:01:45.590 161890 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=23, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'd2:60:33', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '86:05:a5:2a:6f:f1'}, ipsec=False) old=SB_Global(nb_cfg=22) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct  1 10:01:45 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 14:01:45.592 161890 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct  1 10:01:46 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 14:01:46.594 161890 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=7280030e-2ba6-406c-9fae-f8284a927c47, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '23'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct  1 10:01:47 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1766: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:01:47 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 10:01:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 10:01:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 10:01:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 10:01:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 10:01:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 10:01:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 10:01:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] Optimize plan auto_2025-10-01_14:01:47
Oct  1 10:01:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  1 10:01:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] do_upmap
Oct  1 10:01:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] pools ['vms', 'images', 'cephfs.cephfs.meta', '.rgw.root', 'default.rgw.meta', 'default.rgw.log', 'volumes', 'default.rgw.control', 'backups', '.mgr', 'cephfs.cephfs.data']
Oct  1 10:01:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] prepared 0/10 changes
Oct  1 10:01:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  1 10:01:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  1 10:01:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  1 10:01:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  1 10:01:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  1 10:01:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  1 10:01:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  1 10:01:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  1 10:01:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  1 10:01:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  1 10:01:49 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1767: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:01:51 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1768: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:01:52 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 10:01:53 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1769: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:01:53 np0005464214 podman[294505]: 2025-10-01 14:01:53.546782183 +0000 UTC m=+0.081904631 container health_status a1dc94033b3cdaf0c1939938a446bc6911aa0e2bf0fa0c22c3e618390e8d96e1 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20250923, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 10:01:53 np0005464214 podman[294506]: 2025-10-01 14:01:53.562935346 +0000 UTC m=+0.089896785 container health_status c13c85560319b246b9406aed1b6b3015b2c424d3ffff37d0301b6634f5976b0d (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.build-date=20250923, tcib_build_tag=36bccb96575468ec919301205d8daa2c)
Oct  1 10:01:53 np0005464214 podman[294512]: 2025-10-01 14:01:53.569306598 +0000 UTC m=+0.092774517 container health_status dfa509645741d50db256c392f2c7e50ef8a1653f296eb853aefbe3d2ba4746c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20250923, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3)
Oct  1 10:01:53 np0005464214 podman[294504]: 2025-10-01 14:01:53.575163834 +0000 UTC m=+0.121648443 container health_status 583cde33fa111e3f81ff9a7adf51b7bee43cd123d54d60383b2d474b3b5066ad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20250923, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 10:01:55 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1770: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:01:55 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  1 10:01:55 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2419855985' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  1 10:01:55 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  1 10:01:55 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2419855985' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  1 10:01:56 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  1 10:01:56 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  1 10:01:56 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  1 10:01:56 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  1 10:01:56 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  1 10:01:56 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 10:01:56 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev e380aa91-b1cb-4610-b859-513cefaef386 does not exist
Oct  1 10:01:56 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev fe104ba5-041e-425a-8caa-a0be4dd9d106 does not exist
Oct  1 10:01:56 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev e1c88eba-54aa-4b88-9fc6-7bdf856d4cc7 does not exist
Oct  1 10:01:56 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  1 10:01:56 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  1 10:01:56 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  1 10:01:56 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  1 10:01:56 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  1 10:01:56 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  1 10:01:57 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1771: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:01:57 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  1 10:01:57 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 10:01:57 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  1 10:01:57 np0005464214 podman[294856]: 2025-10-01 14:01:57.515095754 +0000 UTC m=+0.063903020 container create af16e438670f55a70d96f27365a4e7fec27cc02551bd7076900a15f42502e902 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_easley, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct  1 10:01:57 np0005464214 systemd[1]: Started libpod-conmon-af16e438670f55a70d96f27365a4e7fec27cc02551bd7076900a15f42502e902.scope.
Oct  1 10:01:57 np0005464214 podman[294856]: 2025-10-01 14:01:57.491307758 +0000 UTC m=+0.040115024 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 10:01:57 np0005464214 systemd[1]: Started libcrun container.
Oct  1 10:01:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] _maybe_adjust
Oct  1 10:01:57 np0005464214 podman[294856]: 2025-10-01 14:01:57.631027975 +0000 UTC m=+0.179835301 container init af16e438670f55a70d96f27365a4e7fec27cc02551bd7076900a15f42502e902 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_easley, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct  1 10:01:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 10:01:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  1 10:01:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 10:01:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 10:01:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 10:01:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 10:01:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 10:01:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 10:01:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 10:01:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Oct  1 10:01:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 10:01:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct  1 10:01:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 10:01:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 10:01:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 10:01:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  1 10:01:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 10:01:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  1 10:01:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 10:01:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 10:01:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 10:01:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  1 10:01:57 np0005464214 podman[294856]: 2025-10-01 14:01:57.640172305 +0000 UTC m=+0.188979561 container start af16e438670f55a70d96f27365a4e7fec27cc02551bd7076900a15f42502e902 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_easley, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct  1 10:01:57 np0005464214 podman[294856]: 2025-10-01 14:01:57.644221174 +0000 UTC m=+0.193028440 container attach af16e438670f55a70d96f27365a4e7fec27cc02551bd7076900a15f42502e902 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_easley, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Oct  1 10:01:57 np0005464214 heuristic_easley[294872]: 167 167
Oct  1 10:01:57 np0005464214 systemd[1]: libpod-af16e438670f55a70d96f27365a4e7fec27cc02551bd7076900a15f42502e902.scope: Deactivated successfully.
Oct  1 10:01:57 np0005464214 podman[294856]: 2025-10-01 14:01:57.649683248 +0000 UTC m=+0.198490524 container died af16e438670f55a70d96f27365a4e7fec27cc02551bd7076900a15f42502e902 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_easley, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 10:01:57 np0005464214 systemd[1]: var-lib-containers-storage-overlay-93a2a284c748c623ce8d7c12a1b7e3f53b3985fced1a7e99b8b20f9c1d05e483-merged.mount: Deactivated successfully.
Oct  1 10:01:57 np0005464214 podman[294856]: 2025-10-01 14:01:57.702646759 +0000 UTC m=+0.251454005 container remove af16e438670f55a70d96f27365a4e7fec27cc02551bd7076900a15f42502e902 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_easley, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 10:01:57 np0005464214 systemd[1]: libpod-conmon-af16e438670f55a70d96f27365a4e7fec27cc02551bd7076900a15f42502e902.scope: Deactivated successfully.
Oct  1 10:01:57 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 10:01:57 np0005464214 podman[294896]: 2025-10-01 14:01:57.940027077 +0000 UTC m=+0.069880780 container create 9677b8bbff91e1228176857caf951ccf735b28b3cf8df98677cce9d46c77fb00 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_agnesi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Oct  1 10:01:57 np0005464214 systemd[1]: Started libpod-conmon-9677b8bbff91e1228176857caf951ccf735b28b3cf8df98677cce9d46c77fb00.scope.
Oct  1 10:01:58 np0005464214 podman[294896]: 2025-10-01 14:01:57.911323265 +0000 UTC m=+0.041177008 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 10:01:58 np0005464214 systemd[1]: Started libcrun container.
Oct  1 10:01:58 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e4f7a21dd868cf1d08a9f818c5799e2239a9fd8d8736a6503ce752266ac83e1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 10:01:58 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e4f7a21dd868cf1d08a9f818c5799e2239a9fd8d8736a6503ce752266ac83e1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 10:01:58 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e4f7a21dd868cf1d08a9f818c5799e2239a9fd8d8736a6503ce752266ac83e1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 10:01:58 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e4f7a21dd868cf1d08a9f818c5799e2239a9fd8d8736a6503ce752266ac83e1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 10:01:58 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e4f7a21dd868cf1d08a9f818c5799e2239a9fd8d8736a6503ce752266ac83e1/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  1 10:01:58 np0005464214 podman[294896]: 2025-10-01 14:01:58.047305063 +0000 UTC m=+0.177158816 container init 9677b8bbff91e1228176857caf951ccf735b28b3cf8df98677cce9d46c77fb00 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_agnesi, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Oct  1 10:01:58 np0005464214 podman[294896]: 2025-10-01 14:01:58.061015128 +0000 UTC m=+0.190868831 container start 9677b8bbff91e1228176857caf951ccf735b28b3cf8df98677cce9d46c77fb00 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_agnesi, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct  1 10:01:58 np0005464214 podman[294896]: 2025-10-01 14:01:58.068864777 +0000 UTC m=+0.198718530 container attach 9677b8bbff91e1228176857caf951ccf735b28b3cf8df98677cce9d46c77fb00 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_agnesi, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct  1 10:01:59 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1772: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:01:59 np0005464214 confident_agnesi[294913]: --> passed data devices: 0 physical, 3 LVM
Oct  1 10:01:59 np0005464214 confident_agnesi[294913]: --> relative data size: 1.0
Oct  1 10:01:59 np0005464214 confident_agnesi[294913]: --> All data devices are unavailable
Oct  1 10:01:59 np0005464214 systemd[1]: libpod-9677b8bbff91e1228176857caf951ccf735b28b3cf8df98677cce9d46c77fb00.scope: Deactivated successfully.
Oct  1 10:01:59 np0005464214 systemd[1]: libpod-9677b8bbff91e1228176857caf951ccf735b28b3cf8df98677cce9d46c77fb00.scope: Consumed 1.223s CPU time.
Oct  1 10:01:59 np0005464214 podman[294896]: 2025-10-01 14:01:59.327013726 +0000 UTC m=+1.456867419 container died 9677b8bbff91e1228176857caf951ccf735b28b3cf8df98677cce9d46c77fb00 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_agnesi, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct  1 10:01:59 np0005464214 systemd[1]: var-lib-containers-storage-overlay-7e4f7a21dd868cf1d08a9f818c5799e2239a9fd8d8736a6503ce752266ac83e1-merged.mount: Deactivated successfully.
Oct  1 10:01:59 np0005464214 podman[294896]: 2025-10-01 14:01:59.434908482 +0000 UTC m=+1.564762155 container remove 9677b8bbff91e1228176857caf951ccf735b28b3cf8df98677cce9d46c77fb00 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_agnesi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 10:01:59 np0005464214 systemd[1]: libpod-conmon-9677b8bbff91e1228176857caf951ccf735b28b3cf8df98677cce9d46c77fb00.scope: Deactivated successfully.
Oct  1 10:02:00 np0005464214 podman[295093]: 2025-10-01 14:02:00.327009387 +0000 UTC m=+0.041948683 container create 837da209b4a467c7ed3956b126fcf9942ce4d40788176cc5cc4682104d2ecfd6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_wright, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct  1 10:02:00 np0005464214 systemd[1]: Started libpod-conmon-837da209b4a467c7ed3956b126fcf9942ce4d40788176cc5cc4682104d2ecfd6.scope.
Oct  1 10:02:00 np0005464214 systemd[1]: Started libcrun container.
Oct  1 10:02:00 np0005464214 podman[295093]: 2025-10-01 14:02:00.399251442 +0000 UTC m=+0.114190808 container init 837da209b4a467c7ed3956b126fcf9942ce4d40788176cc5cc4682104d2ecfd6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_wright, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3)
Oct  1 10:02:00 np0005464214 podman[295093]: 2025-10-01 14:02:00.310240205 +0000 UTC m=+0.025179501 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 10:02:00 np0005464214 podman[295093]: 2025-10-01 14:02:00.408268608 +0000 UTC m=+0.123207884 container start 837da209b4a467c7ed3956b126fcf9942ce4d40788176cc5cc4682104d2ecfd6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_wright, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 10:02:00 np0005464214 quirky_wright[295109]: 167 167
Oct  1 10:02:00 np0005464214 systemd[1]: libpod-837da209b4a467c7ed3956b126fcf9942ce4d40788176cc5cc4682104d2ecfd6.scope: Deactivated successfully.
Oct  1 10:02:00 np0005464214 podman[295093]: 2025-10-01 14:02:00.415283711 +0000 UTC m=+0.130223087 container attach 837da209b4a467c7ed3956b126fcf9942ce4d40788176cc5cc4682104d2ecfd6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_wright, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Oct  1 10:02:00 np0005464214 podman[295093]: 2025-10-01 14:02:00.416273913 +0000 UTC m=+0.131213229 container died 837da209b4a467c7ed3956b126fcf9942ce4d40788176cc5cc4682104d2ecfd6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_wright, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3)
Oct  1 10:02:00 np0005464214 systemd[1]: var-lib-containers-storage-overlay-528963a2194888f5aaf48448b3632c7b62377b2fddfc1223354408c2ef574d46-merged.mount: Deactivated successfully.
Oct  1 10:02:00 np0005464214 podman[295093]: 2025-10-01 14:02:00.458115761 +0000 UTC m=+0.173055077 container remove 837da209b4a467c7ed3956b126fcf9942ce4d40788176cc5cc4682104d2ecfd6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_wright, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default)
Oct  1 10:02:00 np0005464214 systemd[1]: libpod-conmon-837da209b4a467c7ed3956b126fcf9942ce4d40788176cc5cc4682104d2ecfd6.scope: Deactivated successfully.
Oct  1 10:02:00 np0005464214 podman[295133]: 2025-10-01 14:02:00.675956158 +0000 UTC m=+0.054275015 container create 5de57eead185b319e14a370237fddefbd75c754b4a20155311d385b38f47312d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_archimedes, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 10:02:00 np0005464214 systemd[1]: Started libpod-conmon-5de57eead185b319e14a370237fddefbd75c754b4a20155311d385b38f47312d.scope.
Oct  1 10:02:00 np0005464214 podman[295133]: 2025-10-01 14:02:00.653801874 +0000 UTC m=+0.032120721 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 10:02:00 np0005464214 systemd[1]: Started libcrun container.
Oct  1 10:02:00 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad89ae6d2b09f33d0f857cc15dc3a806dc2acd3282957d0496ea7c1106b22555/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 10:02:00 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad89ae6d2b09f33d0f857cc15dc3a806dc2acd3282957d0496ea7c1106b22555/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 10:02:00 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad89ae6d2b09f33d0f857cc15dc3a806dc2acd3282957d0496ea7c1106b22555/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 10:02:00 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad89ae6d2b09f33d0f857cc15dc3a806dc2acd3282957d0496ea7c1106b22555/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 10:02:00 np0005464214 podman[295133]: 2025-10-01 14:02:00.788214552 +0000 UTC m=+0.166533459 container init 5de57eead185b319e14a370237fddefbd75c754b4a20155311d385b38f47312d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_archimedes, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 10:02:00 np0005464214 podman[295133]: 2025-10-01 14:02:00.803193647 +0000 UTC m=+0.181512504 container start 5de57eead185b319e14a370237fddefbd75c754b4a20155311d385b38f47312d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_archimedes, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct  1 10:02:00 np0005464214 podman[295133]: 2025-10-01 14:02:00.807337699 +0000 UTC m=+0.185656606 container attach 5de57eead185b319e14a370237fddefbd75c754b4a20155311d385b38f47312d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_archimedes, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 10:02:01 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1773: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:02:01 np0005464214 relaxed_archimedes[295150]: {
Oct  1 10:02:01 np0005464214 relaxed_archimedes[295150]:    "0": [
Oct  1 10:02:01 np0005464214 relaxed_archimedes[295150]:        {
Oct  1 10:02:01 np0005464214 relaxed_archimedes[295150]:            "devices": [
Oct  1 10:02:01 np0005464214 relaxed_archimedes[295150]:                "/dev/loop3"
Oct  1 10:02:01 np0005464214 relaxed_archimedes[295150]:            ],
Oct  1 10:02:01 np0005464214 relaxed_archimedes[295150]:            "lv_name": "ceph_lv0",
Oct  1 10:02:01 np0005464214 relaxed_archimedes[295150]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  1 10:02:01 np0005464214 relaxed_archimedes[295150]:            "lv_size": "21470642176",
Oct  1 10:02:01 np0005464214 relaxed_archimedes[295150]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 10:02:01 np0005464214 relaxed_archimedes[295150]:            "lv_uuid": "wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS",
Oct  1 10:02:01 np0005464214 relaxed_archimedes[295150]:            "name": "ceph_lv0",
Oct  1 10:02:01 np0005464214 relaxed_archimedes[295150]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  1 10:02:01 np0005464214 relaxed_archimedes[295150]:            "tags": {
Oct  1 10:02:01 np0005464214 relaxed_archimedes[295150]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  1 10:02:01 np0005464214 relaxed_archimedes[295150]:                "ceph.block_uuid": "wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS",
Oct  1 10:02:01 np0005464214 relaxed_archimedes[295150]:                "ceph.cephx_lockbox_secret": "",
Oct  1 10:02:01 np0005464214 relaxed_archimedes[295150]:                "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 10:02:01 np0005464214 relaxed_archimedes[295150]:                "ceph.cluster_name": "ceph",
Oct  1 10:02:01 np0005464214 relaxed_archimedes[295150]:                "ceph.crush_device_class": "",
Oct  1 10:02:01 np0005464214 relaxed_archimedes[295150]:                "ceph.encrypted": "0",
Oct  1 10:02:01 np0005464214 relaxed_archimedes[295150]:                "ceph.osd_fsid": "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982",
Oct  1 10:02:01 np0005464214 relaxed_archimedes[295150]:                "ceph.osd_id": "0",
Oct  1 10:02:01 np0005464214 relaxed_archimedes[295150]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 10:02:01 np0005464214 relaxed_archimedes[295150]:                "ceph.type": "block",
Oct  1 10:02:01 np0005464214 relaxed_archimedes[295150]:                "ceph.vdo": "0"
Oct  1 10:02:01 np0005464214 relaxed_archimedes[295150]:            },
Oct  1 10:02:01 np0005464214 relaxed_archimedes[295150]:            "type": "block",
Oct  1 10:02:01 np0005464214 relaxed_archimedes[295150]:            "vg_name": "ceph_vg0"
Oct  1 10:02:01 np0005464214 relaxed_archimedes[295150]:        }
Oct  1 10:02:01 np0005464214 relaxed_archimedes[295150]:    ],
Oct  1 10:02:01 np0005464214 relaxed_archimedes[295150]:    "1": [
Oct  1 10:02:01 np0005464214 relaxed_archimedes[295150]:        {
Oct  1 10:02:01 np0005464214 relaxed_archimedes[295150]:            "devices": [
Oct  1 10:02:01 np0005464214 relaxed_archimedes[295150]:                "/dev/loop4"
Oct  1 10:02:01 np0005464214 relaxed_archimedes[295150]:            ],
Oct  1 10:02:01 np0005464214 relaxed_archimedes[295150]:            "lv_name": "ceph_lv1",
Oct  1 10:02:01 np0005464214 relaxed_archimedes[295150]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  1 10:02:01 np0005464214 relaxed_archimedes[295150]:            "lv_size": "21470642176",
Oct  1 10:02:01 np0005464214 relaxed_archimedes[295150]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f5852bc7-e830-489a-b8a9-42dfbbe71426,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 10:02:01 np0005464214 relaxed_archimedes[295150]:            "lv_uuid": "BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY",
Oct  1 10:02:01 np0005464214 relaxed_archimedes[295150]:            "name": "ceph_lv1",
Oct  1 10:02:01 np0005464214 relaxed_archimedes[295150]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  1 10:02:01 np0005464214 relaxed_archimedes[295150]:            "tags": {
Oct  1 10:02:01 np0005464214 relaxed_archimedes[295150]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  1 10:02:01 np0005464214 relaxed_archimedes[295150]:                "ceph.block_uuid": "BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY",
Oct  1 10:02:01 np0005464214 relaxed_archimedes[295150]:                "ceph.cephx_lockbox_secret": "",
Oct  1 10:02:01 np0005464214 relaxed_archimedes[295150]:                "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 10:02:01 np0005464214 relaxed_archimedes[295150]:                "ceph.cluster_name": "ceph",
Oct  1 10:02:01 np0005464214 relaxed_archimedes[295150]:                "ceph.crush_device_class": "",
Oct  1 10:02:01 np0005464214 relaxed_archimedes[295150]:                "ceph.encrypted": "0",
Oct  1 10:02:01 np0005464214 relaxed_archimedes[295150]:                "ceph.osd_fsid": "f5852bc7-e830-489a-b8a9-42dfbbe71426",
Oct  1 10:02:01 np0005464214 relaxed_archimedes[295150]:                "ceph.osd_id": "1",
Oct  1 10:02:01 np0005464214 relaxed_archimedes[295150]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 10:02:01 np0005464214 relaxed_archimedes[295150]:                "ceph.type": "block",
Oct  1 10:02:01 np0005464214 relaxed_archimedes[295150]:                "ceph.vdo": "0"
Oct  1 10:02:01 np0005464214 relaxed_archimedes[295150]:            },
Oct  1 10:02:01 np0005464214 relaxed_archimedes[295150]:            "type": "block",
Oct  1 10:02:01 np0005464214 relaxed_archimedes[295150]:            "vg_name": "ceph_vg1"
Oct  1 10:02:01 np0005464214 relaxed_archimedes[295150]:        }
Oct  1 10:02:01 np0005464214 relaxed_archimedes[295150]:    ],
Oct  1 10:02:01 np0005464214 relaxed_archimedes[295150]:    "2": [
Oct  1 10:02:01 np0005464214 relaxed_archimedes[295150]:        {
Oct  1 10:02:01 np0005464214 relaxed_archimedes[295150]:            "devices": [
Oct  1 10:02:01 np0005464214 relaxed_archimedes[295150]:                "/dev/loop5"
Oct  1 10:02:01 np0005464214 relaxed_archimedes[295150]:            ],
Oct  1 10:02:01 np0005464214 relaxed_archimedes[295150]:            "lv_name": "ceph_lv2",
Oct  1 10:02:01 np0005464214 relaxed_archimedes[295150]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  1 10:02:01 np0005464214 relaxed_archimedes[295150]:            "lv_size": "21470642176",
Oct  1 10:02:01 np0005464214 relaxed_archimedes[295150]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c4c937e2-a8a8-47c3-af37-fdedb6fff1f9,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 10:02:01 np0005464214 relaxed_archimedes[295150]:            "lv_uuid": "mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK",
Oct  1 10:02:01 np0005464214 relaxed_archimedes[295150]:            "name": "ceph_lv2",
Oct  1 10:02:01 np0005464214 relaxed_archimedes[295150]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  1 10:02:01 np0005464214 relaxed_archimedes[295150]:            "tags": {
Oct  1 10:02:01 np0005464214 relaxed_archimedes[295150]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  1 10:02:01 np0005464214 relaxed_archimedes[295150]:                "ceph.block_uuid": "mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK",
Oct  1 10:02:01 np0005464214 relaxed_archimedes[295150]:                "ceph.cephx_lockbox_secret": "",
Oct  1 10:02:01 np0005464214 relaxed_archimedes[295150]:                "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 10:02:01 np0005464214 relaxed_archimedes[295150]:                "ceph.cluster_name": "ceph",
Oct  1 10:02:01 np0005464214 relaxed_archimedes[295150]:                "ceph.crush_device_class": "",
Oct  1 10:02:01 np0005464214 relaxed_archimedes[295150]:                "ceph.encrypted": "0",
Oct  1 10:02:01 np0005464214 relaxed_archimedes[295150]:                "ceph.osd_fsid": "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9",
Oct  1 10:02:01 np0005464214 relaxed_archimedes[295150]:                "ceph.osd_id": "2",
Oct  1 10:02:01 np0005464214 relaxed_archimedes[295150]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 10:02:01 np0005464214 relaxed_archimedes[295150]:                "ceph.type": "block",
Oct  1 10:02:01 np0005464214 relaxed_archimedes[295150]:                "ceph.vdo": "0"
Oct  1 10:02:01 np0005464214 relaxed_archimedes[295150]:            },
Oct  1 10:02:01 np0005464214 relaxed_archimedes[295150]:            "type": "block",
Oct  1 10:02:01 np0005464214 relaxed_archimedes[295150]:            "vg_name": "ceph_vg2"
Oct  1 10:02:01 np0005464214 relaxed_archimedes[295150]:        }
Oct  1 10:02:01 np0005464214 relaxed_archimedes[295150]:    ]
Oct  1 10:02:01 np0005464214 relaxed_archimedes[295150]: }
Oct  1 10:02:01 np0005464214 systemd[1]: libpod-5de57eead185b319e14a370237fddefbd75c754b4a20155311d385b38f47312d.scope: Deactivated successfully.
Oct  1 10:02:01 np0005464214 podman[295133]: 2025-10-01 14:02:01.611293716 +0000 UTC m=+0.989612573 container died 5de57eead185b319e14a370237fddefbd75c754b4a20155311d385b38f47312d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_archimedes, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 10:02:01 np0005464214 systemd[1]: var-lib-containers-storage-overlay-ad89ae6d2b09f33d0f857cc15dc3a806dc2acd3282957d0496ea7c1106b22555-merged.mount: Deactivated successfully.
Oct  1 10:02:01 np0005464214 podman[295133]: 2025-10-01 14:02:01.684575463 +0000 UTC m=+1.062894290 container remove 5de57eead185b319e14a370237fddefbd75c754b4a20155311d385b38f47312d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_archimedes, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 10:02:01 np0005464214 systemd[1]: libpod-conmon-5de57eead185b319e14a370237fddefbd75c754b4a20155311d385b38f47312d.scope: Deactivated successfully.
Oct  1 10:02:02 np0005464214 podman[295312]: 2025-10-01 14:02:02.511892942 +0000 UTC m=+0.071539792 container create 10af7f2591daadd60dcad949001dae192d1be5091a0b720125de7c35d6a86d36 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_zhukovsky, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 10:02:02 np0005464214 systemd[1]: Started libpod-conmon-10af7f2591daadd60dcad949001dae192d1be5091a0b720125de7c35d6a86d36.scope.
Oct  1 10:02:02 np0005464214 podman[295312]: 2025-10-01 14:02:02.483686736 +0000 UTC m=+0.043333636 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 10:02:02 np0005464214 systemd[1]: Started libcrun container.
Oct  1 10:02:02 np0005464214 podman[295312]: 2025-10-01 14:02:02.616956308 +0000 UTC m=+0.176603168 container init 10af7f2591daadd60dcad949001dae192d1be5091a0b720125de7c35d6a86d36 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_zhukovsky, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 10:02:02 np0005464214 podman[295312]: 2025-10-01 14:02:02.627903616 +0000 UTC m=+0.187550476 container start 10af7f2591daadd60dcad949001dae192d1be5091a0b720125de7c35d6a86d36 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_zhukovsky, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 10:02:02 np0005464214 podman[295312]: 2025-10-01 14:02:02.63213411 +0000 UTC m=+0.191781020 container attach 10af7f2591daadd60dcad949001dae192d1be5091a0b720125de7c35d6a86d36 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_zhukovsky, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Oct  1 10:02:02 np0005464214 loving_zhukovsky[295328]: 167 167
Oct  1 10:02:02 np0005464214 systemd[1]: libpod-10af7f2591daadd60dcad949001dae192d1be5091a0b720125de7c35d6a86d36.scope: Deactivated successfully.
Oct  1 10:02:02 np0005464214 podman[295312]: 2025-10-01 14:02:02.635445685 +0000 UTC m=+0.195092545 container died 10af7f2591daadd60dcad949001dae192d1be5091a0b720125de7c35d6a86d36 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_zhukovsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True)
Oct  1 10:02:02 np0005464214 systemd[1]: var-lib-containers-storage-overlay-61ceb3f6aeb8cb3308c67e57ebd9535f7d3b7d307a092bbe643ceca3bf088d49-merged.mount: Deactivated successfully.
Oct  1 10:02:02 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 10:02:02 np0005464214 podman[295312]: 2025-10-01 14:02:02.786578104 +0000 UTC m=+0.346224964 container remove 10af7f2591daadd60dcad949001dae192d1be5091a0b720125de7c35d6a86d36 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_zhukovsky, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  1 10:02:02 np0005464214 systemd[1]: libpod-conmon-10af7f2591daadd60dcad949001dae192d1be5091a0b720125de7c35d6a86d36.scope: Deactivated successfully.
Oct  1 10:02:02 np0005464214 podman[295354]: 2025-10-01 14:02:02.969659367 +0000 UTC m=+0.053935173 container create 371991a56cad0981e2fff4b187ab166fc77a776f82892935210ffad6746b0745 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_wiles, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct  1 10:02:03 np0005464214 systemd[1]: Started libpod-conmon-371991a56cad0981e2fff4b187ab166fc77a776f82892935210ffad6746b0745.scope.
Oct  1 10:02:03 np0005464214 podman[295354]: 2025-10-01 14:02:02.946039037 +0000 UTC m=+0.030314933 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 10:02:03 np0005464214 systemd[1]: Started libcrun container.
Oct  1 10:02:03 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/660084ec14f123dab2a66b4a6c6eef5d7154c0b394cac0b89115326f77daf23d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 10:02:03 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/660084ec14f123dab2a66b4a6c6eef5d7154c0b394cac0b89115326f77daf23d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 10:02:03 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/660084ec14f123dab2a66b4a6c6eef5d7154c0b394cac0b89115326f77daf23d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 10:02:03 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/660084ec14f123dab2a66b4a6c6eef5d7154c0b394cac0b89115326f77daf23d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 10:02:03 np0005464214 podman[295354]: 2025-10-01 14:02:03.082548371 +0000 UTC m=+0.166824217 container init 371991a56cad0981e2fff4b187ab166fc77a776f82892935210ffad6746b0745 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_wiles, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 10:02:03 np0005464214 podman[295354]: 2025-10-01 14:02:03.094393277 +0000 UTC m=+0.178669113 container start 371991a56cad0981e2fff4b187ab166fc77a776f82892935210ffad6746b0745 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_wiles, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 10:02:03 np0005464214 podman[295354]: 2025-10-01 14:02:03.098413895 +0000 UTC m=+0.182689761 container attach 371991a56cad0981e2fff4b187ab166fc77a776f82892935210ffad6746b0745 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_wiles, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct  1 10:02:03 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1774: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:02:04 np0005464214 funny_wiles[295370]: {
Oct  1 10:02:04 np0005464214 funny_wiles[295370]:    "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982": {
Oct  1 10:02:04 np0005464214 funny_wiles[295370]:        "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 10:02:04 np0005464214 funny_wiles[295370]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  1 10:02:04 np0005464214 funny_wiles[295370]:        "osd_id": 0,
Oct  1 10:02:04 np0005464214 funny_wiles[295370]:        "osd_uuid": "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982",
Oct  1 10:02:04 np0005464214 funny_wiles[295370]:        "type": "bluestore"
Oct  1 10:02:04 np0005464214 funny_wiles[295370]:    },
Oct  1 10:02:04 np0005464214 funny_wiles[295370]:    "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9": {
Oct  1 10:02:04 np0005464214 funny_wiles[295370]:        "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 10:02:04 np0005464214 funny_wiles[295370]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  1 10:02:04 np0005464214 funny_wiles[295370]:        "osd_id": 2,
Oct  1 10:02:04 np0005464214 funny_wiles[295370]:        "osd_uuid": "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9",
Oct  1 10:02:04 np0005464214 funny_wiles[295370]:        "type": "bluestore"
Oct  1 10:02:04 np0005464214 funny_wiles[295370]:    },
Oct  1 10:02:04 np0005464214 funny_wiles[295370]:    "f5852bc7-e830-489a-b8a9-42dfbbe71426": {
Oct  1 10:02:04 np0005464214 funny_wiles[295370]:        "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 10:02:04 np0005464214 funny_wiles[295370]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  1 10:02:04 np0005464214 funny_wiles[295370]:        "osd_id": 1,
Oct  1 10:02:04 np0005464214 funny_wiles[295370]:        "osd_uuid": "f5852bc7-e830-489a-b8a9-42dfbbe71426",
Oct  1 10:02:04 np0005464214 funny_wiles[295370]:        "type": "bluestore"
Oct  1 10:02:04 np0005464214 funny_wiles[295370]:    }
Oct  1 10:02:04 np0005464214 funny_wiles[295370]: }
Oct  1 10:02:04 np0005464214 systemd[1]: libpod-371991a56cad0981e2fff4b187ab166fc77a776f82892935210ffad6746b0745.scope: Deactivated successfully.
Oct  1 10:02:04 np0005464214 podman[295354]: 2025-10-01 14:02:04.26290584 +0000 UTC m=+1.347181686 container died 371991a56cad0981e2fff4b187ab166fc77a776f82892935210ffad6746b0745 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_wiles, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 10:02:04 np0005464214 systemd[1]: libpod-371991a56cad0981e2fff4b187ab166fc77a776f82892935210ffad6746b0745.scope: Consumed 1.179s CPU time.
Oct  1 10:02:04 np0005464214 systemd[1]: var-lib-containers-storage-overlay-660084ec14f123dab2a66b4a6c6eef5d7154c0b394cac0b89115326f77daf23d-merged.mount: Deactivated successfully.
Oct  1 10:02:04 np0005464214 podman[295354]: 2025-10-01 14:02:04.338324175 +0000 UTC m=+1.422600021 container remove 371991a56cad0981e2fff4b187ab166fc77a776f82892935210ffad6746b0745 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_wiles, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 10:02:04 np0005464214 systemd[1]: libpod-conmon-371991a56cad0981e2fff4b187ab166fc77a776f82892935210ffad6746b0745.scope: Deactivated successfully.
Oct  1 10:02:04 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  1 10:02:04 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 10:02:04 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  1 10:02:04 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 10:02:04 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev 0bda8f74-d870-4364-a23b-99e0276717ed does not exist
Oct  1 10:02:04 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev 7097a772-e1e7-4cbb-b42c-72deffc99bac does not exist
Oct  1 10:02:04 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 10:02:04 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 10:02:05 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1775: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:02:07 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1776: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:02:07 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 10:02:09 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1777: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:02:11 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1778: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:02:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 14:02:12.329 161890 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 10:02:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 14:02:12.331 161890 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 10:02:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 14:02:12.332 161890 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 10:02:12 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 10:02:13 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1779: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:02:13 np0005464214 nova_compute[260022]: 2025-10-01 14:02:13.348 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 10:02:15 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1780: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:02:17 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1781: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:02:17 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 10:02:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 10:02:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 10:02:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 10:02:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 10:02:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 10:02:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 10:02:19 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1782: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:02:21 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1783: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:02:21 np0005464214 nova_compute[260022]: 2025-10-01 14:02:21.344 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 10:02:21 np0005464214 nova_compute[260022]: 2025-10-01 14:02:21.345 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 10:02:21 np0005464214 nova_compute[260022]: 2025-10-01 14:02:21.345 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  1 10:02:21 np0005464214 nova_compute[260022]: 2025-10-01 14:02:21.345 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 10:02:21 np0005464214 nova_compute[260022]: 2025-10-01 14:02:21.485 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 10:02:21 np0005464214 nova_compute[260022]: 2025-10-01 14:02:21.485 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 10:02:21 np0005464214 nova_compute[260022]: 2025-10-01 14:02:21.486 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 10:02:21 np0005464214 nova_compute[260022]: 2025-10-01 14:02:21.486 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  1 10:02:21 np0005464214 nova_compute[260022]: 2025-10-01 14:02:21.486 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 10:02:21 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  1 10:02:21 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3804693010' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  1 10:02:21 np0005464214 nova_compute[260022]: 2025-10-01 14:02:21.953 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.466s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 10:02:22 np0005464214 nova_compute[260022]: 2025-10-01 14:02:22.172 2 WARNING nova.virt.libvirt.driver [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  1 10:02:22 np0005464214 nova_compute[260022]: 2025-10-01 14:02:22.174 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5056MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  1 10:02:22 np0005464214 nova_compute[260022]: 2025-10-01 14:02:22.175 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 10:02:22 np0005464214 nova_compute[260022]: 2025-10-01 14:02:22.176 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 10:02:22 np0005464214 nova_compute[260022]: 2025-10-01 14:02:22.359 2 INFO nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Instance 3d91db2b-812b-47ea-a0f8-0384b4c68597 has allocations against this compute host but is not found in the database.#033[00m
Oct  1 10:02:22 np0005464214 nova_compute[260022]: 2025-10-01 14:02:22.393 2 INFO nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Instance 84424f30-80d3-425e-b60f-86809ad3c076 has allocations against this compute host but is not found in the database.#033[00m
Oct  1 10:02:22 np0005464214 nova_compute[260022]: 2025-10-01 14:02:22.394 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  1 10:02:22 np0005464214 nova_compute[260022]: 2025-10-01 14:02:22.394 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  1 10:02:22 np0005464214 nova_compute[260022]: 2025-10-01 14:02:22.448 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 10:02:22 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 10:02:22 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  1 10:02:22 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3242345982' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  1 10:02:22 np0005464214 nova_compute[260022]: 2025-10-01 14:02:22.925 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 10:02:22 np0005464214 nova_compute[260022]: 2025-10-01 14:02:22.930 2 DEBUG nova.compute.provider_tree [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Inventory has not changed in ProviderTree for provider: c1b9017d-7e6f-44ea-9ee2-bc19313d736f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  1 10:02:22 np0005464214 nova_compute[260022]: 2025-10-01 14:02:22.943 2 DEBUG nova.scheduler.client.report [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Inventory has not changed for provider c1b9017d-7e6f-44ea-9ee2-bc19313d736f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  1 10:02:22 np0005464214 nova_compute[260022]: 2025-10-01 14:02:22.944 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  1 10:02:22 np0005464214 nova_compute[260022]: 2025-10-01 14:02:22.944 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.769s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 10:02:23 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1784: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:02:24 np0005464214 podman[295513]: 2025-10-01 14:02:24.516641316 +0000 UTC m=+0.065483911 container health_status a1dc94033b3cdaf0c1939938a446bc6911aa0e2bf0fa0c22c3e618390e8d96e1 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250923, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible)
Oct  1 10:02:24 np0005464214 podman[295514]: 2025-10-01 14:02:24.535516655 +0000 UTC m=+0.070647473 container health_status c13c85560319b246b9406aed1b6b3015b2c424d3ffff37d0301b6634f5976b0d (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250923, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=36bccb96575468ec919301205d8daa2c, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.license=GPLv2)
Oct  1 10:02:24 np0005464214 podman[295515]: 2025-10-01 14:02:24.54037814 +0000 UTC m=+0.085350442 container health_status dfa509645741d50db256c392f2c7e50ef8a1653f296eb853aefbe3d2ba4746c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=36bccb96575468ec919301205d8daa2c, container_name=ovn_metadata_agent, org.label-schema.build-date=20250923, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Oct  1 10:02:24 np0005464214 podman[295512]: 2025-10-01 14:02:24.551437621 +0000 UTC m=+0.099973276 container health_status 583cde33fa111e3f81ff9a7adf51b7bee43cd123d54d60383b2d474b3b5066ad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c, org.label-schema.build-date=20250923, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 10:02:25 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1785: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:02:27 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1786: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:02:27 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 10:02:28 np0005464214 nova_compute[260022]: 2025-10-01 14:02:28.941 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 10:02:28 np0005464214 nova_compute[260022]: 2025-10-01 14:02:28.941 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 10:02:28 np0005464214 nova_compute[260022]: 2025-10-01 14:02:28.942 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  1 10:02:28 np0005464214 nova_compute[260022]: 2025-10-01 14:02:28.942 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  1 10:02:28 np0005464214 nova_compute[260022]: 2025-10-01 14:02:28.980 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  1 10:02:28 np0005464214 nova_compute[260022]: 2025-10-01 14:02:28.981 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 10:02:28 np0005464214 nova_compute[260022]: 2025-10-01 14:02:28.981 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 10:02:29 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1787: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:02:30 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 14:02:30.905 161890 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:14:48:e1 10.100.0.2 2001:db8::f816:3eff:fe14:48e1'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fe14:48e1/64', 'neutron:device_id': 'ovnmeta-83553c01-35f0-4f4a-9abd-9fde4d9e3ae3', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-83553c01-35f0-4f4a-9abd-9fde4d9e3ae3', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b7cb470b3bf042fe90ce061f7c990de4', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1bf38cee-9f0a-4197-9a6f-788e9a83e343, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=08585e9b-5812-4d5d-a480-669a92c443db) old=Port_Binding(mac=['fa:16:3e:14:48:e1 10.100.0.2'], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-83553c01-35f0-4f4a-9abd-9fde4d9e3ae3', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-83553c01-35f0-4f4a-9abd-9fde4d9e3ae3', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b7cb470b3bf042fe90ce061f7c990de4', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  1 10:02:30 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 14:02:30.906 161890 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 08585e9b-5812-4d5d-a480-669a92c443db in datapath 83553c01-35f0-4f4a-9abd-9fde4d9e3ae3 updated#033[00m
Oct  1 10:02:30 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 14:02:30.907 161890 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 83553c01-35f0-4f4a-9abd-9fde4d9e3ae3, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  1 10:02:30 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 14:02:30.908 291014 DEBUG oslo.privsep.daemon [-] privsep: reply[922955a2-bada-4761-8583-24e22deeda9d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 10:02:31 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1788: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:02:31 np0005464214 nova_compute[260022]: 2025-10-01 14:02:31.344 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 10:02:32 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 10:02:33 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1789: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:02:35 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1790: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:02:37 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1791: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:02:37 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 10:02:39 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1792: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:02:40 np0005464214 nova_compute[260022]: 2025-10-01 14:02:40.341 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 10:02:41 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1793: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:02:42 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 10:02:43 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1794: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:02:45 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1795: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:02:46 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 14:02:46.778 161890 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=24, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'd2:60:33', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '86:05:a5:2a:6f:f1'}, ipsec=False) old=SB_Global(nb_cfg=23) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  1 10:02:46 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 14:02:46.782 161890 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  1 10:02:47 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1796: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:02:47 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 10:02:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 10:02:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 10:02:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 10:02:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 10:02:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 10:02:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 10:02:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] Optimize plan auto_2025-10-01_14:02:47
Oct  1 10:02:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  1 10:02:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] do_upmap
Oct  1 10:02:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] pools ['images', 'cephfs.cephfs.meta', 'default.rgw.meta', 'cephfs.cephfs.data', 'default.rgw.control', '.mgr', 'vms', '.rgw.root', 'volumes', 'backups', 'default.rgw.log']
Oct  1 10:02:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] prepared 0/10 changes
Oct  1 10:02:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  1 10:02:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  1 10:02:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  1 10:02:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  1 10:02:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  1 10:02:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  1 10:02:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  1 10:02:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  1 10:02:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  1 10:02:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  1 10:02:49 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1797: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:02:51 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1798: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:02:52 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 10:02:53 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1799: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:02:53 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 14:02:53.785 161890 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=7280030e-2ba6-406c-9fae-f8284a927c47, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '24'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  1 10:02:55 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  1 10:02:55 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2266405777' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  1 10:02:55 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  1 10:02:55 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2266405777' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  1 10:02:55 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1800: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:02:55 np0005464214 podman[295593]: 2025-10-01 14:02:55.508448891 +0000 UTC m=+0.063221349 container health_status a1dc94033b3cdaf0c1939938a446bc6911aa0e2bf0fa0c22c3e618390e8d96e1 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250923, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=multipathd, container_name=multipathd)
Oct  1 10:02:55 np0005464214 podman[295594]: 2025-10-01 14:02:55.514276426 +0000 UTC m=+0.065130950 container health_status c13c85560319b246b9406aed1b6b3015b2c424d3ffff37d0301b6634f5976b0d (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=iscsid, container_name=iscsid, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250923, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=36bccb96575468ec919301205d8daa2c)
Oct  1 10:02:55 np0005464214 podman[295592]: 2025-10-01 14:02:55.534366064 +0000 UTC m=+0.094561384 container health_status 583cde33fa111e3f81ff9a7adf51b7bee43cd123d54d60383b2d474b3b5066ad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20250923, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Oct  1 10:02:55 np0005464214 podman[295601]: 2025-10-01 14:02:55.540398745 +0000 UTC m=+0.077555294 container health_status dfa509645741d50db256c392f2c7e50ef8a1653f296eb853aefbe3d2ba4746c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20250923, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct  1 10:02:57 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1801: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:02:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] _maybe_adjust
Oct  1 10:02:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 10:02:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  1 10:02:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 10:02:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 10:02:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 10:02:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 10:02:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 10:02:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 10:02:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 10:02:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Oct  1 10:02:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 10:02:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct  1 10:02:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 10:02:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 10:02:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 10:02:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  1 10:02:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 10:02:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  1 10:02:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 10:02:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 10:02:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 10:02:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  1 10:02:57 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 10:02:59 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1802: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:03:01 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1803: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:03:02 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 10:03:03 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1804: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:03:05 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  1 10:03:05 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 10:03:05 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  1 10:03:05 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1805: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:03:05 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 10:03:06 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Oct  1 10:03:06 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct  1 10:03:06 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  1 10:03:06 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  1 10:03:06 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  1 10:03:06 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  1 10:03:06 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  1 10:03:06 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 10:03:06 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev ccd67441-5b3f-4df8-98dd-6c17159a0637 does not exist
Oct  1 10:03:06 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev a03f3085-caee-4548-85b8-1b9f99a96015 does not exist
Oct  1 10:03:06 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev 3ac48bb6-59ac-428b-b96c-288ac6137178 does not exist
Oct  1 10:03:06 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  1 10:03:06 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  1 10:03:06 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  1 10:03:06 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  1 10:03:06 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  1 10:03:06 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  1 10:03:06 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 10:03:06 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 10:03:06 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct  1 10:03:06 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  1 10:03:07 np0005464214 podman[296068]: 2025-10-01 14:03:06.986443809 +0000 UTC m=+0.020591285 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 10:03:07 np0005464214 podman[296068]: 2025-10-01 14:03:07.104682313 +0000 UTC m=+0.138829779 container create a510e4478277fd50d6246b7970b448774d35418d00cca99ddbfa11e43e332f5a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_rubin, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 10:03:07 np0005464214 systemd[1]: Started libpod-conmon-a510e4478277fd50d6246b7970b448774d35418d00cca99ddbfa11e43e332f5a.scope.
Oct  1 10:03:07 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1806: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:03:07 np0005464214 systemd[1]: Started libcrun container.
Oct  1 10:03:07 np0005464214 podman[296068]: 2025-10-01 14:03:07.424838039 +0000 UTC m=+0.458985555 container init a510e4478277fd50d6246b7970b448774d35418d00cca99ddbfa11e43e332f5a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_rubin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 10:03:07 np0005464214 podman[296068]: 2025-10-01 14:03:07.438667148 +0000 UTC m=+0.472814614 container start a510e4478277fd50d6246b7970b448774d35418d00cca99ddbfa11e43e332f5a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_rubin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 10:03:07 np0005464214 ecstatic_rubin[296084]: 167 167
Oct  1 10:03:07 np0005464214 systemd[1]: libpod-a510e4478277fd50d6246b7970b448774d35418d00cca99ddbfa11e43e332f5a.scope: Deactivated successfully.
Oct  1 10:03:07 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 10:03:07 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  1 10:03:07 np0005464214 podman[296068]: 2025-10-01 14:03:07.563047217 +0000 UTC m=+0.597194683 container attach a510e4478277fd50d6246b7970b448774d35418d00cca99ddbfa11e43e332f5a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_rubin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Oct  1 10:03:07 np0005464214 podman[296068]: 2025-10-01 14:03:07.564351118 +0000 UTC m=+0.598498584 container died a510e4478277fd50d6246b7970b448774d35418d00cca99ddbfa11e43e332f5a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_rubin, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 10:03:07 np0005464214 systemd[1]: var-lib-containers-storage-overlay-ff4c9c78f6de8d5b03db5ec1f8acdfd5e30d2a47dc3ab93a1c58e2f45327131d-merged.mount: Deactivated successfully.
Oct  1 10:03:07 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 10:03:07 np0005464214 podman[296068]: 2025-10-01 14:03:07.920475347 +0000 UTC m=+0.954622813 container remove a510e4478277fd50d6246b7970b448774d35418d00cca99ddbfa11e43e332f5a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_rubin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 10:03:07 np0005464214 systemd[1]: libpod-conmon-a510e4478277fd50d6246b7970b448774d35418d00cca99ddbfa11e43e332f5a.scope: Deactivated successfully.
Oct  1 10:03:08 np0005464214 podman[296109]: 2025-10-01 14:03:08.198873457 +0000 UTC m=+0.096263009 container create 3c7fb9f70629e3bc270ac423df68435accd7686826e2f0000a66e422e794d94f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_gould, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct  1 10:03:08 np0005464214 podman[296109]: 2025-10-01 14:03:08.14353988 +0000 UTC m=+0.040929492 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 10:03:08 np0005464214 systemd[1]: Started libpod-conmon-3c7fb9f70629e3bc270ac423df68435accd7686826e2f0000a66e422e794d94f.scope.
Oct  1 10:03:08 np0005464214 systemd[1]: Started libcrun container.
Oct  1 10:03:08 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59094d9b0320cc3a80b5b37601734338a7968351c1672065580e88403033b1d9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 10:03:08 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59094d9b0320cc3a80b5b37601734338a7968351c1672065580e88403033b1d9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 10:03:08 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59094d9b0320cc3a80b5b37601734338a7968351c1672065580e88403033b1d9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 10:03:08 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59094d9b0320cc3a80b5b37601734338a7968351c1672065580e88403033b1d9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 10:03:08 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59094d9b0320cc3a80b5b37601734338a7968351c1672065580e88403033b1d9/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  1 10:03:08 np0005464214 podman[296109]: 2025-10-01 14:03:08.437496793 +0000 UTC m=+0.334886405 container init 3c7fb9f70629e3bc270ac423df68435accd7686826e2f0000a66e422e794d94f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_gould, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 10:03:08 np0005464214 podman[296109]: 2025-10-01 14:03:08.450365151 +0000 UTC m=+0.347754703 container start 3c7fb9f70629e3bc270ac423df68435accd7686826e2f0000a66e422e794d94f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_gould, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3)
Oct  1 10:03:08 np0005464214 podman[296109]: 2025-10-01 14:03:08.520470057 +0000 UTC m=+0.417859669 container attach 3c7fb9f70629e3bc270ac423df68435accd7686826e2f0000a66e422e794d94f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_gould, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS)
Oct  1 10:03:09 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1807: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:03:09 np0005464214 festive_gould[296125]: --> passed data devices: 0 physical, 3 LVM
Oct  1 10:03:09 np0005464214 festive_gould[296125]: --> relative data size: 1.0
Oct  1 10:03:09 np0005464214 festive_gould[296125]: --> All data devices are unavailable
Oct  1 10:03:09 np0005464214 systemd[1]: libpod-3c7fb9f70629e3bc270ac423df68435accd7686826e2f0000a66e422e794d94f.scope: Deactivated successfully.
Oct  1 10:03:09 np0005464214 systemd[1]: libpod-3c7fb9f70629e3bc270ac423df68435accd7686826e2f0000a66e422e794d94f.scope: Consumed 1.026s CPU time.
Oct  1 10:03:09 np0005464214 podman[296109]: 2025-10-01 14:03:09.529073553 +0000 UTC m=+1.426463145 container died 3c7fb9f70629e3bc270ac423df68435accd7686826e2f0000a66e422e794d94f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_gould, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 10:03:09 np0005464214 systemd[1]: var-lib-containers-storage-overlay-59094d9b0320cc3a80b5b37601734338a7968351c1672065580e88403033b1d9-merged.mount: Deactivated successfully.
Oct  1 10:03:09 np0005464214 podman[296109]: 2025-10-01 14:03:09.928887518 +0000 UTC m=+1.826277040 container remove 3c7fb9f70629e3bc270ac423df68435accd7686826e2f0000a66e422e794d94f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_gould, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 10:03:09 np0005464214 systemd[1]: libpod-conmon-3c7fb9f70629e3bc270ac423df68435accd7686826e2f0000a66e422e794d94f.scope: Deactivated successfully.
Oct  1 10:03:10 np0005464214 podman[296307]: 2025-10-01 14:03:10.687618269 +0000 UTC m=+0.053622944 container create 42fcb877a006e778629cd37a70508287d7461be37aa73f74e0eb1753a487f207 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_solomon, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 10:03:10 np0005464214 systemd[1]: Started libpod-conmon-42fcb877a006e778629cd37a70508287d7461be37aa73f74e0eb1753a487f207.scope.
Oct  1 10:03:10 np0005464214 systemd[1]: Started libcrun container.
Oct  1 10:03:10 np0005464214 podman[296307]: 2025-10-01 14:03:10.662312156 +0000 UTC m=+0.028316901 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 10:03:10 np0005464214 podman[296307]: 2025-10-01 14:03:10.770049256 +0000 UTC m=+0.136054011 container init 42fcb877a006e778629cd37a70508287d7461be37aa73f74e0eb1753a487f207 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_solomon, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct  1 10:03:10 np0005464214 podman[296307]: 2025-10-01 14:03:10.781915053 +0000 UTC m=+0.147919718 container start 42fcb877a006e778629cd37a70508287d7461be37aa73f74e0eb1753a487f207 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_solomon, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 10:03:10 np0005464214 podman[296307]: 2025-10-01 14:03:10.785948541 +0000 UTC m=+0.151953236 container attach 42fcb877a006e778629cd37a70508287d7461be37aa73f74e0eb1753a487f207 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_solomon, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 10:03:10 np0005464214 pedantic_solomon[296323]: 167 167
Oct  1 10:03:10 np0005464214 systemd[1]: libpod-42fcb877a006e778629cd37a70508287d7461be37aa73f74e0eb1753a487f207.scope: Deactivated successfully.
Oct  1 10:03:10 np0005464214 conmon[296323]: conmon 42fcb877a006e778629c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-42fcb877a006e778629cd37a70508287d7461be37aa73f74e0eb1753a487f207.scope/container/memory.events
Oct  1 10:03:10 np0005464214 podman[296307]: 2025-10-01 14:03:10.791982473 +0000 UTC m=+0.157987128 container died 42fcb877a006e778629cd37a70508287d7461be37aa73f74e0eb1753a487f207 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_solomon, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 10:03:10 np0005464214 systemd[1]: var-lib-containers-storage-overlay-5a93008880adc9edede4d2a1de99bf57c903728149dfdbd4d46859a86c11602b-merged.mount: Deactivated successfully.
Oct  1 10:03:10 np0005464214 podman[296307]: 2025-10-01 14:03:10.836981152 +0000 UTC m=+0.202985807 container remove 42fcb877a006e778629cd37a70508287d7461be37aa73f74e0eb1753a487f207 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_solomon, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 10:03:10 np0005464214 systemd[1]: libpod-conmon-42fcb877a006e778629cd37a70508287d7461be37aa73f74e0eb1753a487f207.scope: Deactivated successfully.
Oct  1 10:03:11 np0005464214 podman[296345]: 2025-10-01 14:03:11.034109451 +0000 UTC m=+0.068985761 container create f01c15c8efe80aaaad3c65e55b1974f31af6c2d30232842698cd8b7ed10c3ad8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_hoover, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Oct  1 10:03:11 np0005464214 systemd[1]: Started libpod-conmon-f01c15c8efe80aaaad3c65e55b1974f31af6c2d30232842698cd8b7ed10c3ad8.scope.
Oct  1 10:03:11 np0005464214 podman[296345]: 2025-10-01 14:03:11.007414593 +0000 UTC m=+0.042290953 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 10:03:11 np0005464214 systemd[1]: Started libcrun container.
Oct  1 10:03:11 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b3d6b8080f723010459f5ca3c7022b6be789a8c5d2d2164d1efd15c332cbdc1e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 10:03:11 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b3d6b8080f723010459f5ca3c7022b6be789a8c5d2d2164d1efd15c332cbdc1e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 10:03:11 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b3d6b8080f723010459f5ca3c7022b6be789a8c5d2d2164d1efd15c332cbdc1e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 10:03:11 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b3d6b8080f723010459f5ca3c7022b6be789a8c5d2d2164d1efd15c332cbdc1e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 10:03:11 np0005464214 podman[296345]: 2025-10-01 14:03:11.160662469 +0000 UTC m=+0.195538819 container init f01c15c8efe80aaaad3c65e55b1974f31af6c2d30232842698cd8b7ed10c3ad8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_hoover, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct  1 10:03:11 np0005464214 podman[296345]: 2025-10-01 14:03:11.1742463 +0000 UTC m=+0.209122610 container start f01c15c8efe80aaaad3c65e55b1974f31af6c2d30232842698cd8b7ed10c3ad8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_hoover, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Oct  1 10:03:11 np0005464214 podman[296345]: 2025-10-01 14:03:11.178391442 +0000 UTC m=+0.213267812 container attach f01c15c8efe80aaaad3c65e55b1974f31af6c2d30232842698cd8b7ed10c3ad8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_hoover, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True)
Oct  1 10:03:11 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1808: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:03:11 np0005464214 beautiful_hoover[296361]: {
Oct  1 10:03:11 np0005464214 beautiful_hoover[296361]:    "0": [
Oct  1 10:03:11 np0005464214 beautiful_hoover[296361]:        {
Oct  1 10:03:11 np0005464214 beautiful_hoover[296361]:            "devices": [
Oct  1 10:03:11 np0005464214 beautiful_hoover[296361]:                "/dev/loop3"
Oct  1 10:03:11 np0005464214 beautiful_hoover[296361]:            ],
Oct  1 10:03:11 np0005464214 beautiful_hoover[296361]:            "lv_name": "ceph_lv0",
Oct  1 10:03:11 np0005464214 beautiful_hoover[296361]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  1 10:03:11 np0005464214 beautiful_hoover[296361]:            "lv_size": "21470642176",
Oct  1 10:03:11 np0005464214 beautiful_hoover[296361]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 10:03:11 np0005464214 beautiful_hoover[296361]:            "lv_uuid": "wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS",
Oct  1 10:03:11 np0005464214 beautiful_hoover[296361]:            "name": "ceph_lv0",
Oct  1 10:03:11 np0005464214 beautiful_hoover[296361]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  1 10:03:11 np0005464214 beautiful_hoover[296361]:            "tags": {
Oct  1 10:03:11 np0005464214 beautiful_hoover[296361]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  1 10:03:11 np0005464214 beautiful_hoover[296361]:                "ceph.block_uuid": "wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS",
Oct  1 10:03:11 np0005464214 beautiful_hoover[296361]:                "ceph.cephx_lockbox_secret": "",
Oct  1 10:03:11 np0005464214 beautiful_hoover[296361]:                "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 10:03:11 np0005464214 beautiful_hoover[296361]:                "ceph.cluster_name": "ceph",
Oct  1 10:03:11 np0005464214 beautiful_hoover[296361]:                "ceph.crush_device_class": "",
Oct  1 10:03:11 np0005464214 beautiful_hoover[296361]:                "ceph.encrypted": "0",
Oct  1 10:03:11 np0005464214 beautiful_hoover[296361]:                "ceph.osd_fsid": "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982",
Oct  1 10:03:11 np0005464214 beautiful_hoover[296361]:                "ceph.osd_id": "0",
Oct  1 10:03:11 np0005464214 beautiful_hoover[296361]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 10:03:11 np0005464214 beautiful_hoover[296361]:                "ceph.type": "block",
Oct  1 10:03:11 np0005464214 beautiful_hoover[296361]:                "ceph.vdo": "0"
Oct  1 10:03:11 np0005464214 beautiful_hoover[296361]:            },
Oct  1 10:03:11 np0005464214 beautiful_hoover[296361]:            "type": "block",
Oct  1 10:03:11 np0005464214 beautiful_hoover[296361]:            "vg_name": "ceph_vg0"
Oct  1 10:03:11 np0005464214 beautiful_hoover[296361]:        }
Oct  1 10:03:11 np0005464214 beautiful_hoover[296361]:    ],
Oct  1 10:03:11 np0005464214 beautiful_hoover[296361]:    "1": [
Oct  1 10:03:11 np0005464214 beautiful_hoover[296361]:        {
Oct  1 10:03:11 np0005464214 beautiful_hoover[296361]:            "devices": [
Oct  1 10:03:11 np0005464214 beautiful_hoover[296361]:                "/dev/loop4"
Oct  1 10:03:11 np0005464214 beautiful_hoover[296361]:            ],
Oct  1 10:03:11 np0005464214 beautiful_hoover[296361]:            "lv_name": "ceph_lv1",
Oct  1 10:03:11 np0005464214 beautiful_hoover[296361]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  1 10:03:11 np0005464214 beautiful_hoover[296361]:            "lv_size": "21470642176",
Oct  1 10:03:11 np0005464214 beautiful_hoover[296361]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f5852bc7-e830-489a-b8a9-42dfbbe71426,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 10:03:11 np0005464214 beautiful_hoover[296361]:            "lv_uuid": "BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY",
Oct  1 10:03:11 np0005464214 beautiful_hoover[296361]:            "name": "ceph_lv1",
Oct  1 10:03:11 np0005464214 beautiful_hoover[296361]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  1 10:03:11 np0005464214 beautiful_hoover[296361]:            "tags": {
Oct  1 10:03:11 np0005464214 beautiful_hoover[296361]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  1 10:03:11 np0005464214 beautiful_hoover[296361]:                "ceph.block_uuid": "BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY",
Oct  1 10:03:11 np0005464214 beautiful_hoover[296361]:                "ceph.cephx_lockbox_secret": "",
Oct  1 10:03:11 np0005464214 beautiful_hoover[296361]:                "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 10:03:11 np0005464214 beautiful_hoover[296361]:                "ceph.cluster_name": "ceph",
Oct  1 10:03:11 np0005464214 beautiful_hoover[296361]:                "ceph.crush_device_class": "",
Oct  1 10:03:11 np0005464214 beautiful_hoover[296361]:                "ceph.encrypted": "0",
Oct  1 10:03:11 np0005464214 beautiful_hoover[296361]:                "ceph.osd_fsid": "f5852bc7-e830-489a-b8a9-42dfbbe71426",
Oct  1 10:03:11 np0005464214 beautiful_hoover[296361]:                "ceph.osd_id": "1",
Oct  1 10:03:11 np0005464214 beautiful_hoover[296361]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 10:03:11 np0005464214 beautiful_hoover[296361]:                "ceph.type": "block",
Oct  1 10:03:11 np0005464214 beautiful_hoover[296361]:                "ceph.vdo": "0"
Oct  1 10:03:11 np0005464214 beautiful_hoover[296361]:            },
Oct  1 10:03:11 np0005464214 beautiful_hoover[296361]:            "type": "block",
Oct  1 10:03:11 np0005464214 beautiful_hoover[296361]:            "vg_name": "ceph_vg1"
Oct  1 10:03:11 np0005464214 beautiful_hoover[296361]:        }
Oct  1 10:03:11 np0005464214 beautiful_hoover[296361]:    ],
Oct  1 10:03:11 np0005464214 beautiful_hoover[296361]:    "2": [
Oct  1 10:03:11 np0005464214 beautiful_hoover[296361]:        {
Oct  1 10:03:11 np0005464214 beautiful_hoover[296361]:            "devices": [
Oct  1 10:03:11 np0005464214 beautiful_hoover[296361]:                "/dev/loop5"
Oct  1 10:03:11 np0005464214 beautiful_hoover[296361]:            ],
Oct  1 10:03:11 np0005464214 beautiful_hoover[296361]:            "lv_name": "ceph_lv2",
Oct  1 10:03:11 np0005464214 beautiful_hoover[296361]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  1 10:03:11 np0005464214 beautiful_hoover[296361]:            "lv_size": "21470642176",
Oct  1 10:03:11 np0005464214 beautiful_hoover[296361]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c4c937e2-a8a8-47c3-af37-fdedb6fff1f9,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 10:03:11 np0005464214 beautiful_hoover[296361]:            "lv_uuid": "mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK",
Oct  1 10:03:11 np0005464214 beautiful_hoover[296361]:            "name": "ceph_lv2",
Oct  1 10:03:11 np0005464214 beautiful_hoover[296361]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  1 10:03:11 np0005464214 beautiful_hoover[296361]:            "tags": {
Oct  1 10:03:11 np0005464214 beautiful_hoover[296361]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  1 10:03:11 np0005464214 beautiful_hoover[296361]:                "ceph.block_uuid": "mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK",
Oct  1 10:03:11 np0005464214 beautiful_hoover[296361]:                "ceph.cephx_lockbox_secret": "",
Oct  1 10:03:11 np0005464214 beautiful_hoover[296361]:                "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 10:03:11 np0005464214 beautiful_hoover[296361]:                "ceph.cluster_name": "ceph",
Oct  1 10:03:11 np0005464214 beautiful_hoover[296361]:                "ceph.crush_device_class": "",
Oct  1 10:03:11 np0005464214 beautiful_hoover[296361]:                "ceph.encrypted": "0",
Oct  1 10:03:11 np0005464214 beautiful_hoover[296361]:                "ceph.osd_fsid": "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9",
Oct  1 10:03:11 np0005464214 beautiful_hoover[296361]:                "ceph.osd_id": "2",
Oct  1 10:03:11 np0005464214 beautiful_hoover[296361]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 10:03:11 np0005464214 beautiful_hoover[296361]:                "ceph.type": "block",
Oct  1 10:03:11 np0005464214 beautiful_hoover[296361]:                "ceph.vdo": "0"
Oct  1 10:03:11 np0005464214 beautiful_hoover[296361]:            },
Oct  1 10:03:11 np0005464214 beautiful_hoover[296361]:            "type": "block",
Oct  1 10:03:11 np0005464214 beautiful_hoover[296361]:            "vg_name": "ceph_vg2"
Oct  1 10:03:11 np0005464214 beautiful_hoover[296361]:        }
Oct  1 10:03:11 np0005464214 beautiful_hoover[296361]:    ]
Oct  1 10:03:11 np0005464214 beautiful_hoover[296361]: }
Oct  1 10:03:11 np0005464214 systemd[1]: libpod-f01c15c8efe80aaaad3c65e55b1974f31af6c2d30232842698cd8b7ed10c3ad8.scope: Deactivated successfully.
Oct  1 10:03:11 np0005464214 podman[296345]: 2025-10-01 14:03:11.975699108 +0000 UTC m=+1.010575388 container died f01c15c8efe80aaaad3c65e55b1974f31af6c2d30232842698cd8b7ed10c3ad8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_hoover, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 10:03:11 np0005464214 systemd[1]: var-lib-containers-storage-overlay-b3d6b8080f723010459f5ca3c7022b6be789a8c5d2d2164d1efd15c332cbdc1e-merged.mount: Deactivated successfully.
Oct  1 10:03:12 np0005464214 podman[296345]: 2025-10-01 14:03:12.031054956 +0000 UTC m=+1.065931236 container remove f01c15c8efe80aaaad3c65e55b1974f31af6c2d30232842698cd8b7ed10c3ad8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_hoover, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 10:03:12 np0005464214 systemd[1]: libpod-conmon-f01c15c8efe80aaaad3c65e55b1974f31af6c2d30232842698cd8b7ed10c3ad8.scope: Deactivated successfully.
Oct  1 10:03:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 14:03:12.330 161890 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 10:03:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 14:03:12.331 161890 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 10:03:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 14:03:12.331 161890 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 10:03:12 np0005464214 podman[296524]: 2025-10-01 14:03:12.759709482 +0000 UTC m=+0.057035883 container create 4e6d814c7920f3d998250e57fb6e2b32521c4637b369c9ff960bec56181b0e1f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_franklin, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Oct  1 10:03:12 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 10:03:12 np0005464214 systemd[1]: Started libpod-conmon-4e6d814c7920f3d998250e57fb6e2b32521c4637b369c9ff960bec56181b0e1f.scope.
Oct  1 10:03:12 np0005464214 podman[296524]: 2025-10-01 14:03:12.733413466 +0000 UTC m=+0.030739907 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 10:03:12 np0005464214 systemd[1]: Started libcrun container.
Oct  1 10:03:12 np0005464214 podman[296524]: 2025-10-01 14:03:12.862925899 +0000 UTC m=+0.160252290 container init 4e6d814c7920f3d998250e57fb6e2b32521c4637b369c9ff960bec56181b0e1f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_franklin, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct  1 10:03:12 np0005464214 podman[296524]: 2025-10-01 14:03:12.868495546 +0000 UTC m=+0.165821897 container start 4e6d814c7920f3d998250e57fb6e2b32521c4637b369c9ff960bec56181b0e1f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_franklin, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct  1 10:03:12 np0005464214 podman[296524]: 2025-10-01 14:03:12.872057489 +0000 UTC m=+0.169383920 container attach 4e6d814c7920f3d998250e57fb6e2b32521c4637b369c9ff960bec56181b0e1f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_franklin, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 10:03:12 np0005464214 elegant_franklin[296540]: 167 167
Oct  1 10:03:12 np0005464214 systemd[1]: libpod-4e6d814c7920f3d998250e57fb6e2b32521c4637b369c9ff960bec56181b0e1f.scope: Deactivated successfully.
Oct  1 10:03:12 np0005464214 podman[296524]: 2025-10-01 14:03:12.877538763 +0000 UTC m=+0.174865154 container died 4e6d814c7920f3d998250e57fb6e2b32521c4637b369c9ff960bec56181b0e1f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_franklin, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Oct  1 10:03:12 np0005464214 systemd[1]: var-lib-containers-storage-overlay-443c39f2b96bfa6182a6ca802699b0e25d391a5d5c192e4db60b3a1c340be422-merged.mount: Deactivated successfully.
Oct  1 10:03:12 np0005464214 podman[296524]: 2025-10-01 14:03:12.928401697 +0000 UTC m=+0.225728078 container remove 4e6d814c7920f3d998250e57fb6e2b32521c4637b369c9ff960bec56181b0e1f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_franklin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Oct  1 10:03:12 np0005464214 systemd[1]: libpod-conmon-4e6d814c7920f3d998250e57fb6e2b32521c4637b369c9ff960bec56181b0e1f.scope: Deactivated successfully.
Oct  1 10:03:13 np0005464214 podman[296564]: 2025-10-01 14:03:13.123326437 +0000 UTC m=+0.056313579 container create 2ddd06a2250b79f625c3466c9efa01f112632230a76576c151d5fd73b88f02f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_hugle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct  1 10:03:13 np0005464214 systemd[1]: Started libpod-conmon-2ddd06a2250b79f625c3466c9efa01f112632230a76576c151d5fd73b88f02f6.scope.
Oct  1 10:03:13 np0005464214 podman[296564]: 2025-10-01 14:03:13.094368677 +0000 UTC m=+0.027355869 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 10:03:13 np0005464214 systemd[1]: Started libcrun container.
Oct  1 10:03:13 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a5cfdcb707a2fe826c6c525a9dd74f9029b60a4bed30abc85503a509be5a261/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 10:03:13 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a5cfdcb707a2fe826c6c525a9dd74f9029b60a4bed30abc85503a509be5a261/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 10:03:13 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a5cfdcb707a2fe826c6c525a9dd74f9029b60a4bed30abc85503a509be5a261/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 10:03:13 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a5cfdcb707a2fe826c6c525a9dd74f9029b60a4bed30abc85503a509be5a261/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 10:03:13 np0005464214 podman[296564]: 2025-10-01 14:03:13.241962084 +0000 UTC m=+0.174949266 container init 2ddd06a2250b79f625c3466c9efa01f112632230a76576c151d5fd73b88f02f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_hugle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2)
Oct  1 10:03:13 np0005464214 podman[296564]: 2025-10-01 14:03:13.25758744 +0000 UTC m=+0.190574572 container start 2ddd06a2250b79f625c3466c9efa01f112632230a76576c151d5fd73b88f02f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_hugle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct  1 10:03:13 np0005464214 podman[296564]: 2025-10-01 14:03:13.261440812 +0000 UTC m=+0.194427914 container attach 2ddd06a2250b79f625c3466c9efa01f112632230a76576c151d5fd73b88f02f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_hugle, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct  1 10:03:13 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1809: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:03:14 np0005464214 keen_hugle[296581]: {
Oct  1 10:03:14 np0005464214 keen_hugle[296581]:    "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982": {
Oct  1 10:03:14 np0005464214 keen_hugle[296581]:        "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 10:03:14 np0005464214 keen_hugle[296581]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  1 10:03:14 np0005464214 keen_hugle[296581]:        "osd_id": 0,
Oct  1 10:03:14 np0005464214 keen_hugle[296581]:        "osd_uuid": "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982",
Oct  1 10:03:14 np0005464214 keen_hugle[296581]:        "type": "bluestore"
Oct  1 10:03:14 np0005464214 keen_hugle[296581]:    },
Oct  1 10:03:14 np0005464214 keen_hugle[296581]:    "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9": {
Oct  1 10:03:14 np0005464214 keen_hugle[296581]:        "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 10:03:14 np0005464214 keen_hugle[296581]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  1 10:03:14 np0005464214 keen_hugle[296581]:        "osd_id": 2,
Oct  1 10:03:14 np0005464214 keen_hugle[296581]:        "osd_uuid": "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9",
Oct  1 10:03:14 np0005464214 keen_hugle[296581]:        "type": "bluestore"
Oct  1 10:03:14 np0005464214 keen_hugle[296581]:    },
Oct  1 10:03:14 np0005464214 keen_hugle[296581]:    "f5852bc7-e830-489a-b8a9-42dfbbe71426": {
Oct  1 10:03:14 np0005464214 keen_hugle[296581]:        "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 10:03:14 np0005464214 keen_hugle[296581]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  1 10:03:14 np0005464214 keen_hugle[296581]:        "osd_id": 1,
Oct  1 10:03:14 np0005464214 keen_hugle[296581]:        "osd_uuid": "f5852bc7-e830-489a-b8a9-42dfbbe71426",
Oct  1 10:03:14 np0005464214 keen_hugle[296581]:        "type": "bluestore"
Oct  1 10:03:14 np0005464214 keen_hugle[296581]:    }
Oct  1 10:03:14 np0005464214 keen_hugle[296581]: }
Oct  1 10:03:14 np0005464214 systemd[1]: libpod-2ddd06a2250b79f625c3466c9efa01f112632230a76576c151d5fd73b88f02f6.scope: Deactivated successfully.
Oct  1 10:03:14 np0005464214 systemd[1]: libpod-2ddd06a2250b79f625c3466c9efa01f112632230a76576c151d5fd73b88f02f6.scope: Consumed 1.055s CPU time.
Oct  1 10:03:14 np0005464214 podman[296614]: 2025-10-01 14:03:14.35989158 +0000 UTC m=+0.034701872 container died 2ddd06a2250b79f625c3466c9efa01f112632230a76576c151d5fd73b88f02f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_hugle, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Oct  1 10:03:14 np0005464214 systemd[1]: var-lib-containers-storage-overlay-2a5cfdcb707a2fe826c6c525a9dd74f9029b60a4bed30abc85503a509be5a261-merged.mount: Deactivated successfully.
Oct  1 10:03:14 np0005464214 podman[296614]: 2025-10-01 14:03:14.432473155 +0000 UTC m=+0.107283417 container remove 2ddd06a2250b79f625c3466c9efa01f112632230a76576c151d5fd73b88f02f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_hugle, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Oct  1 10:03:14 np0005464214 systemd[1]: libpod-conmon-2ddd06a2250b79f625c3466c9efa01f112632230a76576c151d5fd73b88f02f6.scope: Deactivated successfully.
Oct  1 10:03:14 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  1 10:03:14 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 10:03:14 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  1 10:03:14 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 10:03:14 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev bc77edc4-fd59-4386-aede-f8a509f44582 does not exist
Oct  1 10:03:14 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev 2b625305-b996-43b3-8f82-a85906c6b229 does not exist
Oct  1 10:03:15 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1810: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:03:15 np0005464214 nova_compute[260022]: 2025-10-01 14:03:15.345 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 10:03:15 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 10:03:15 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 10:03:17 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1811: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:03:17 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 10:03:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 10:03:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 10:03:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 10:03:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 10:03:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 10:03:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 10:03:19 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1812: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:03:21 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1813: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:03:21 np0005464214 nova_compute[260022]: 2025-10-01 14:03:21.344 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 10:03:21 np0005464214 nova_compute[260022]: 2025-10-01 14:03:21.345 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  1 10:03:21 np0005464214 nova_compute[260022]: 2025-10-01 14:03:21.346 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 10:03:21 np0005464214 nova_compute[260022]: 2025-10-01 14:03:21.433 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 10:03:21 np0005464214 nova_compute[260022]: 2025-10-01 14:03:21.433 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 10:03:21 np0005464214 nova_compute[260022]: 2025-10-01 14:03:21.434 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 10:03:21 np0005464214 nova_compute[260022]: 2025-10-01 14:03:21.434 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  1 10:03:21 np0005464214 nova_compute[260022]: 2025-10-01 14:03:21.435 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 10:03:21 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  1 10:03:21 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3190491000' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  1 10:03:21 np0005464214 nova_compute[260022]: 2025-10-01 14:03:21.912 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.477s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 10:03:22 np0005464214 nova_compute[260022]: 2025-10-01 14:03:22.079 2 WARNING nova.virt.libvirt.driver [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  1 10:03:22 np0005464214 nova_compute[260022]: 2025-10-01 14:03:22.080 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5044MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  1 10:03:22 np0005464214 nova_compute[260022]: 2025-10-01 14:03:22.080 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 10:03:22 np0005464214 nova_compute[260022]: 2025-10-01 14:03:22.081 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 10:03:22 np0005464214 nova_compute[260022]: 2025-10-01 14:03:22.228 2 INFO nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Instance 3d91db2b-812b-47ea-a0f8-0384b4c68597 has allocations against this compute host but is not found in the database.#033[00m
Oct  1 10:03:22 np0005464214 nova_compute[260022]: 2025-10-01 14:03:22.293 2 INFO nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Instance 84424f30-80d3-425e-b60f-86809ad3c076 has allocations against this compute host but is not found in the database.#033[00m
Oct  1 10:03:22 np0005464214 nova_compute[260022]: 2025-10-01 14:03:22.293 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  1 10:03:22 np0005464214 nova_compute[260022]: 2025-10-01 14:03:22.294 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  1 10:03:22 np0005464214 nova_compute[260022]: 2025-10-01 14:03:22.363 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 10:03:22 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 10:03:22 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  1 10:03:22 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1019550469' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  1 10:03:22 np0005464214 nova_compute[260022]: 2025-10-01 14:03:22.860 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.497s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 10:03:22 np0005464214 nova_compute[260022]: 2025-10-01 14:03:22.866 2 DEBUG nova.compute.provider_tree [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Inventory has not changed in ProviderTree for provider: c1b9017d-7e6f-44ea-9ee2-bc19313d736f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  1 10:03:22 np0005464214 nova_compute[260022]: 2025-10-01 14:03:22.897 2 DEBUG nova.scheduler.client.report [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Inventory has not changed for provider c1b9017d-7e6f-44ea-9ee2-bc19313d736f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  1 10:03:22 np0005464214 nova_compute[260022]: 2025-10-01 14:03:22.899 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  1 10:03:22 np0005464214 nova_compute[260022]: 2025-10-01 14:03:22.900 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.819s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 10:03:23 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1814: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:03:24 np0005464214 nova_compute[260022]: 2025-10-01 14:03:24.901 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 10:03:25 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1815: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:03:26 np0005464214 podman[296726]: 2025-10-01 14:03:26.532661981 +0000 UTC m=+0.066440391 container health_status dfa509645741d50db256c392f2c7e50ef8a1653f296eb853aefbe3d2ba4746c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250923, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Oct  1 10:03:26 np0005464214 podman[296724]: 2025-10-01 14:03:26.542216915 +0000 UTC m=+0.081634324 container health_status a1dc94033b3cdaf0c1939938a446bc6911aa0e2bf0fa0c22c3e618390e8d96e1 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, org.label-schema.build-date=20250923, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c)
Oct  1 10:03:26 np0005464214 podman[296725]: 2025-10-01 14:03:26.542616256 +0000 UTC m=+0.074312830 container health_status c13c85560319b246b9406aed1b6b3015b2c424d3ffff37d0301b6634f5976b0d (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c, managed_by=edpm_ansible, org.label-schema.build-date=20250923, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct  1 10:03:26 np0005464214 podman[296723]: 2025-10-01 14:03:26.573837638 +0000 UTC m=+0.119729153 container health_status 583cde33fa111e3f81ff9a7adf51b7bee43cd123d54d60383b2d474b3b5066ad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.build-date=20250923, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Oct  1 10:03:27 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1816: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:03:27 np0005464214 nova_compute[260022]: 2025-10-01 14:03:27.345 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 10:03:27 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 10:03:28 np0005464214 nova_compute[260022]: 2025-10-01 14:03:28.341 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 10:03:29 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1817: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:03:29 np0005464214 nova_compute[260022]: 2025-10-01 14:03:29.345 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 10:03:29 np0005464214 nova_compute[260022]: 2025-10-01 14:03:29.345 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  1 10:03:29 np0005464214 nova_compute[260022]: 2025-10-01 14:03:29.345 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  1 10:03:29 np0005464214 nova_compute[260022]: 2025-10-01 14:03:29.503 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  1 10:03:29 np0005464214 nova_compute[260022]: 2025-10-01 14:03:29.503 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 10:03:31 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1818: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:03:31 np0005464214 nova_compute[260022]: 2025-10-01 14:03:31.344 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 10:03:32 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 10:03:33 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1819: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:03:34 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 14:03:34.380 161890 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=25, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'd2:60:33', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '86:05:a5:2a:6f:f1'}, ipsec=False) old=SB_Global(nb_cfg=24) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  1 10:03:34 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 14:03:34.381 161890 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  1 10:03:35 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1820: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:03:37 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1821: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:03:37 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 10:03:39 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1822: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:03:39 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 14:03:39.384 161890 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=7280030e-2ba6-406c-9fae-f8284a927c47, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '25'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  1 10:03:41 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1823: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:03:42 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 10:03:43 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1824: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:03:43 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 14:03:43.488 161890 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7a:f3:83 2001:db8:0:1:f816:3eff:fe7a:f383 2001:db8::f816:3eff:fe7a:f383'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:1:f816:3eff:fe7a:f383/64 2001:db8::f816:3eff:fe7a:f383/64', 'neutron:device_id': 'ovnmeta-63c03399-cac3-4361-81d6-fd2f133d14dc', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-63c03399-cac3-4361-81d6-fd2f133d14dc', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b7cb470b3bf042fe90ce061f7c990de4', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b7aeb023-eb42-4942-80f5-14a39f62d9bf, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=75887985-9cf4-4a14-8823-578c8c134e7d) old=Port_Binding(mac=['fa:16:3e:7a:f3:83 2001:db8::f816:3eff:fe7a:f383'], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fe7a:f383/64', 'neutron:device_id': 'ovnmeta-63c03399-cac3-4361-81d6-fd2f133d14dc', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-63c03399-cac3-4361-81d6-fd2f133d14dc', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b7cb470b3bf042fe90ce061f7c990de4', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  1 10:03:43 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 14:03:43.489 161890 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 75887985-9cf4-4a14-8823-578c8c134e7d in datapath 63c03399-cac3-4361-81d6-fd2f133d14dc updated#033[00m
Oct  1 10:03:43 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 14:03:43.490 161890 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 63c03399-cac3-4361-81d6-fd2f133d14dc, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  1 10:03:43 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 14:03:43.492 291014 DEBUG oslo.privsep.daemon [-] privsep: reply[d4cbdf0f-2030-41c1-842e-30bbc8a1961f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 10:03:45 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1825: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:03:47 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1826: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:03:47 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 10:03:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 10:03:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 10:03:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 10:03:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 10:03:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 10:03:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 10:03:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] Optimize plan auto_2025-10-01_14:03:47
Oct  1 10:03:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  1 10:03:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] do_upmap
Oct  1 10:03:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] pools ['backups', '.mgr', 'cephfs.cephfs.data', 'default.rgw.meta', 'default.rgw.control', 'images', 'vms', '.rgw.root', 'default.rgw.log', 'cephfs.cephfs.meta', 'volumes']
Oct  1 10:03:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] prepared 0/10 changes
Oct  1 10:03:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  1 10:03:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  1 10:03:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  1 10:03:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  1 10:03:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  1 10:03:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  1 10:03:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  1 10:03:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  1 10:03:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  1 10:03:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  1 10:03:49 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1827: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:03:51 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1828: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:03:52 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 10:03:53 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1829: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:03:55 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  1 10:03:55 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3897872834' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  1 10:03:55 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  1 10:03:55 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3897872834' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  1 10:03:55 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1830: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:03:57 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1831: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:03:57 np0005464214 podman[296802]: 2025-10-01 14:03:57.508796994 +0000 UTC m=+0.058535769 container health_status a1dc94033b3cdaf0c1939938a446bc6911aa0e2bf0fa0c22c3e618390e8d96e1 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20250923, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team)
Oct  1 10:03:57 np0005464214 podman[296809]: 2025-10-01 14:03:57.52531087 +0000 UTC m=+0.061207316 container health_status dfa509645741d50db256c392f2c7e50ef8a1653f296eb853aefbe3d2ba4746c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20250923, org.label-schema.license=GPLv2, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent)
Oct  1 10:03:57 np0005464214 podman[296801]: 2025-10-01 14:03:57.530085361 +0000 UTC m=+0.083292476 container health_status 583cde33fa111e3f81ff9a7adf51b7bee43cd123d54d60383b2d474b3b5066ad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=36bccb96575468ec919301205d8daa2c, org.label-schema.build-date=20250923, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 10:03:57 np0005464214 podman[296803]: 2025-10-01 14:03:57.557512722 +0000 UTC m=+0.096471664 container health_status c13c85560319b246b9406aed1b6b3015b2c424d3ffff37d0301b6634f5976b0d (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250923, org.label-schema.schema-version=1.0, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Oct  1 10:03:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] _maybe_adjust
Oct  1 10:03:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 10:03:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  1 10:03:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 10:03:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 10:03:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 10:03:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 10:03:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 10:03:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 10:03:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 10:03:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Oct  1 10:03:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 10:03:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct  1 10:03:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 10:03:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 10:03:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 10:03:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  1 10:03:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 10:03:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  1 10:03:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 10:03:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 10:03:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 10:03:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  1 10:03:57 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 10:03:59 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1832: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:04:01 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1833: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:04:02 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 10:04:03 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1834: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:04:05 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1835: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:04:07 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1836: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:04:07 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 10:04:09 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1837: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:04:11 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1838: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:04:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 14:04:12.330 161890 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  1 10:04:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 14:04:12.331 161890 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  1 10:04:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 14:04:12.331 161890 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  1 10:04:12 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 10:04:13 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1839: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:04:13 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 14:04:13.693 161890 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0f:ee:5b 2001:db8:0:1:f816:3eff:fe0f:ee5b 2001:db8::f816:3eff:fe0f:ee5b'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:1:f816:3eff:fe0f:ee5b/64 2001:db8::f816:3eff:fe0f:ee5b/64', 'neutron:device_id': 'ovnmeta-1d6028c0-c737-4798-8468-d69b94cf6fb7', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1d6028c0-c737-4798-8468-d69b94cf6fb7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b7cb470b3bf042fe90ce061f7c990de4', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=cfde4a39-5828-4f9a-8a92-23d6b4d71d7c, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=09c983f0-ec35-40f6-b974-4b6581d9c9e3) old=Port_Binding(mac=['fa:16:3e:0f:ee:5b 2001:db8::f816:3eff:fe0f:ee5b'], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fe0f:ee5b/64', 'neutron:device_id': 'ovnmeta-1d6028c0-c737-4798-8468-d69b94cf6fb7', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1d6028c0-c737-4798-8468-d69b94cf6fb7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b7cb470b3bf042fe90ce061f7c990de4', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 
'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct  1 10:04:13 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 14:04:13.695 161890 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 09c983f0-ec35-40f6-b974-4b6581d9c9e3 in datapath 1d6028c0-c737-4798-8468-d69b94cf6fb7 updated
Oct  1 10:04:13 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 14:04:13.697 161890 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 1d6028c0-c737-4798-8468-d69b94cf6fb7, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct  1 10:04:13 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 14:04:13.698 291014 DEBUG oslo.privsep.daemon [-] privsep: reply[475ece58-d0e1-41cc-851c-74693663746b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  1 10:04:15 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1840: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:04:15 np0005464214 nova_compute[260022]: 2025-10-01 14:04:15.345 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  1 10:04:15 np0005464214 podman[297052]: 2025-10-01 14:04:15.604694713 +0000 UTC m=+0.173568062 container exec dfadbb96d7d51fa7c2ed721145616ced603621ae12bae5125feaf16532ef7320 (image=quay.io/ceph/ceph:v18, name=ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-mon-compute-0, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct  1 10:04:15 np0005464214 podman[297073]: 2025-10-01 14:04:15.870986409 +0000 UTC m=+0.079291529 container exec_died dfadbb96d7d51fa7c2ed721145616ced603621ae12bae5125feaf16532ef7320 (image=quay.io/ceph/ceph:v18, name=ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 10:04:15 np0005464214 podman[297052]: 2025-10-01 14:04:15.932300486 +0000 UTC m=+0.501173845 container exec_died dfadbb96d7d51fa7c2ed721145616ced603621ae12bae5125feaf16532ef7320 (image=quay.io/ceph/ceph:v18, name=ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-mon-compute-0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Oct  1 10:04:17 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  1 10:04:17 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 10:04:17 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  1 10:04:17 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 10:04:17 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1841: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:04:17 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 10:04:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 10:04:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 10:04:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 10:04:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 10:04:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 10:04:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 10:04:18 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  1 10:04:18 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  1 10:04:18 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  1 10:04:18 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  1 10:04:18 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  1 10:04:18 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 10:04:18 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev 34bc4630-8a73-4f66-946a-70e2281fdadf does not exist
Oct  1 10:04:18 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev 9188ac65-de96-49ac-aeea-998141cf8c54 does not exist
Oct  1 10:04:18 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev efac1cd4-e74f-4f4e-9b3d-4b1b4c4b0d47 does not exist
Oct  1 10:04:18 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  1 10:04:18 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  1 10:04:18 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  1 10:04:18 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  1 10:04:18 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  1 10:04:18 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  1 10:04:18 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 10:04:18 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 10:04:18 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  1 10:04:18 np0005464214 podman[297488]: 2025-10-01 14:04:18.834198846 +0000 UTC m=+0.032512113 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 10:04:19 np0005464214 podman[297488]: 2025-10-01 14:04:19.259254282 +0000 UTC m=+0.457567449 container create a03dd5658d5c936acfa7c770c6a6788f4440f0c841f9c215892c1b240b62af4c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_mayer, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Oct  1 10:04:19 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 10:04:19 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  1 10:04:19 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1842: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:04:19 np0005464214 systemd[1]: Started libpod-conmon-a03dd5658d5c936acfa7c770c6a6788f4440f0c841f9c215892c1b240b62af4c.scope.
Oct  1 10:04:19 np0005464214 systemd[1]: Started libcrun container.
Oct  1 10:04:19 np0005464214 podman[297488]: 2025-10-01 14:04:19.867485754 +0000 UTC m=+1.065799021 container init a03dd5658d5c936acfa7c770c6a6788f4440f0c841f9c215892c1b240b62af4c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_mayer, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct  1 10:04:19 np0005464214 podman[297488]: 2025-10-01 14:04:19.879061772 +0000 UTC m=+1.077374989 container start a03dd5658d5c936acfa7c770c6a6788f4440f0c841f9c215892c1b240b62af4c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_mayer, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 10:04:19 np0005464214 vibrant_mayer[297504]: 167 167
Oct  1 10:04:19 np0005464214 systemd[1]: libpod-a03dd5658d5c936acfa7c770c6a6788f4440f0c841f9c215892c1b240b62af4c.scope: Deactivated successfully.
Oct  1 10:04:19 np0005464214 podman[297488]: 2025-10-01 14:04:19.950542451 +0000 UTC m=+1.148855658 container attach a03dd5658d5c936acfa7c770c6a6788f4440f0c841f9c215892c1b240b62af4c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_mayer, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 10:04:19 np0005464214 podman[297488]: 2025-10-01 14:04:19.951930626 +0000 UTC m=+1.150243843 container died a03dd5658d5c936acfa7c770c6a6788f4440f0c841f9c215892c1b240b62af4c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_mayer, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Oct  1 10:04:20 np0005464214 systemd[1]: var-lib-containers-storage-overlay-f4fa94b59ee3324036bfb21683f43046989f1a7931af88f2803eb4b3991b127f-merged.mount: Deactivated successfully.
Oct  1 10:04:20 np0005464214 podman[297488]: 2025-10-01 14:04:20.903923954 +0000 UTC m=+2.102237171 container remove a03dd5658d5c936acfa7c770c6a6788f4440f0c841f9c215892c1b240b62af4c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_mayer, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 10:04:20 np0005464214 systemd[1]: libpod-conmon-a03dd5658d5c936acfa7c770c6a6788f4440f0c841f9c215892c1b240b62af4c.scope: Deactivated successfully.
Oct  1 10:04:21 np0005464214 podman[297528]: 2025-10-01 14:04:21.2014078 +0000 UTC m=+0.111592496 container create b1832034e456d6d6c29453ec28bb0733f3f774f50ad0ea0b56e5a41f18b4b9f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_black, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True)
Oct  1 10:04:21 np0005464214 podman[297528]: 2025-10-01 14:04:21.129421073 +0000 UTC m=+0.039605809 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 10:04:21 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1843: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:04:21 np0005464214 systemd[1]: Started libpod-conmon-b1832034e456d6d6c29453ec28bb0733f3f774f50ad0ea0b56e5a41f18b4b9f3.scope.
Oct  1 10:04:21 np0005464214 nova_compute[260022]: 2025-10-01 14:04:21.344 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 10:04:21 np0005464214 nova_compute[260022]: 2025-10-01 14:04:21.346 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  1 10:04:21 np0005464214 nova_compute[260022]: 2025-10-01 14:04:21.346 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 10:04:21 np0005464214 systemd[1]: Started libcrun container.
Oct  1 10:04:21 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ff8a1e9b84eaae49d90daedc13032588d2892c4c139246777535bc12f197734/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 10:04:21 np0005464214 nova_compute[260022]: 2025-10-01 14:04:21.367 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 10:04:21 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ff8a1e9b84eaae49d90daedc13032588d2892c4c139246777535bc12f197734/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 10:04:21 np0005464214 nova_compute[260022]: 2025-10-01 14:04:21.367 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 10:04:21 np0005464214 nova_compute[260022]: 2025-10-01 14:04:21.368 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 10:04:21 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ff8a1e9b84eaae49d90daedc13032588d2892c4c139246777535bc12f197734/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 10:04:21 np0005464214 nova_compute[260022]: 2025-10-01 14:04:21.368 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  1 10:04:21 np0005464214 nova_compute[260022]: 2025-10-01 14:04:21.369 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 10:04:21 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ff8a1e9b84eaae49d90daedc13032588d2892c4c139246777535bc12f197734/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 10:04:21 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ff8a1e9b84eaae49d90daedc13032588d2892c4c139246777535bc12f197734/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  1 10:04:21 np0005464214 podman[297528]: 2025-10-01 14:04:21.397335661 +0000 UTC m=+0.307520427 container init b1832034e456d6d6c29453ec28bb0733f3f774f50ad0ea0b56e5a41f18b4b9f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_black, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct  1 10:04:21 np0005464214 podman[297528]: 2025-10-01 14:04:21.412232433 +0000 UTC m=+0.322417139 container start b1832034e456d6d6c29453ec28bb0733f3f774f50ad0ea0b56e5a41f18b4b9f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_black, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 10:04:21 np0005464214 podman[297528]: 2025-10-01 14:04:21.425115133 +0000 UTC m=+0.335299869 container attach b1832034e456d6d6c29453ec28bb0733f3f774f50ad0ea0b56e5a41f18b4b9f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_black, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 10:04:21 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  1 10:04:21 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3109430700' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  1 10:04:21 np0005464214 nova_compute[260022]: 2025-10-01 14:04:21.806 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.437s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 10:04:21 np0005464214 nova_compute[260022]: 2025-10-01 14:04:21.987 2 WARNING nova.virt.libvirt.driver [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  1 10:04:21 np0005464214 nova_compute[260022]: 2025-10-01 14:04:21.988 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5009MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  1 10:04:21 np0005464214 nova_compute[260022]: 2025-10-01 14:04:21.989 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 10:04:21 np0005464214 nova_compute[260022]: 2025-10-01 14:04:21.989 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 10:04:22 np0005464214 nova_compute[260022]: 2025-10-01 14:04:22.130 2 INFO nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Instance 3d91db2b-812b-47ea-a0f8-0384b4c68597 has allocations against this compute host but is not found in the database.#033[00m
Oct  1 10:04:22 np0005464214 nova_compute[260022]: 2025-10-01 14:04:22.146 2 INFO nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Instance 84424f30-80d3-425e-b60f-86809ad3c076 has allocations against this compute host but is not found in the database.#033[00m
Oct  1 10:04:22 np0005464214 nova_compute[260022]: 2025-10-01 14:04:22.163 2 INFO nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Instance ed1a583a-b018-407d-9bb0-31b0d7eca6fd has allocations against this compute host but is not found in the database.#033[00m
Oct  1 10:04:22 np0005464214 nova_compute[260022]: 2025-10-01 14:04:22.163 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  1 10:04:22 np0005464214 nova_compute[260022]: 2025-10-01 14:04:22.164 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  1 10:04:22 np0005464214 nova_compute[260022]: 2025-10-01 14:04:22.238 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 10:04:22 np0005464214 naughty_black[297545]: --> passed data devices: 0 physical, 3 LVM
Oct  1 10:04:22 np0005464214 naughty_black[297545]: --> relative data size: 1.0
Oct  1 10:04:22 np0005464214 naughty_black[297545]: --> All data devices are unavailable
Oct  1 10:04:22 np0005464214 systemd[1]: libpod-b1832034e456d6d6c29453ec28bb0733f3f774f50ad0ea0b56e5a41f18b4b9f3.scope: Deactivated successfully.
Oct  1 10:04:22 np0005464214 podman[297528]: 2025-10-01 14:04:22.597651243 +0000 UTC m=+1.507835989 container died b1832034e456d6d6c29453ec28bb0733f3f774f50ad0ea0b56e5a41f18b4b9f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_black, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True)
Oct  1 10:04:22 np0005464214 systemd[1]: libpod-b1832034e456d6d6c29453ec28bb0733f3f774f50ad0ea0b56e5a41f18b4b9f3.scope: Consumed 1.117s CPU time.
Oct  1 10:04:22 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  1 10:04:22 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1065319070' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  1 10:04:22 np0005464214 nova_compute[260022]: 2025-10-01 14:04:22.696 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 10:04:22 np0005464214 nova_compute[260022]: 2025-10-01 14:04:22.707 2 DEBUG nova.compute.provider_tree [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Inventory has not changed in ProviderTree for provider: c1b9017d-7e6f-44ea-9ee2-bc19313d736f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  1 10:04:22 np0005464214 nova_compute[260022]: 2025-10-01 14:04:22.731 2 DEBUG nova.scheduler.client.report [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Inventory has not changed for provider c1b9017d-7e6f-44ea-9ee2-bc19313d736f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  1 10:04:22 np0005464214 nova_compute[260022]: 2025-10-01 14:04:22.734 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  1 10:04:22 np0005464214 nova_compute[260022]: 2025-10-01 14:04:22.735 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.746s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 10:04:22 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 10:04:23 np0005464214 systemd[1]: var-lib-containers-storage-overlay-5ff8a1e9b84eaae49d90daedc13032588d2892c4c139246777535bc12f197734-merged.mount: Deactivated successfully.
Oct  1 10:04:23 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1844: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:04:23 np0005464214 podman[297528]: 2025-10-01 14:04:23.815952067 +0000 UTC m=+2.726136773 container remove b1832034e456d6d6c29453ec28bb0733f3f774f50ad0ea0b56e5a41f18b4b9f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_black, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Oct  1 10:04:23 np0005464214 systemd[1]: libpod-conmon-b1832034e456d6d6c29453ec28bb0733f3f774f50ad0ea0b56e5a41f18b4b9f3.scope: Deactivated successfully.
Oct  1 10:04:24 np0005464214 ceph-mon[74802]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #81. Immutable memtables: 0.
Oct  1 10:04:24 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:04:24.389552) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  1 10:04:24 np0005464214 ceph-mon[74802]: rocksdb: [db/flush_job.cc:856] [default] [JOB 45] Flushing memtable with next log file: 81
Oct  1 10:04:24 np0005464214 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759327464389645, "job": 45, "event": "flush_started", "num_memtables": 1, "num_entries": 2059, "num_deletes": 251, "total_data_size": 3460883, "memory_usage": 3516296, "flush_reason": "Manual Compaction"}
Oct  1 10:04:24 np0005464214 ceph-mon[74802]: rocksdb: [db/flush_job.cc:885] [default] [JOB 45] Level-0 flush table #82: started
Oct  1 10:04:24 np0005464214 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759327464671816, "cf_name": "default", "job": 45, "event": "table_file_creation", "file_number": 82, "file_size": 3383793, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 35113, "largest_seqno": 37171, "table_properties": {"data_size": 3374430, "index_size": 5921, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2373, "raw_key_size": 18755, "raw_average_key_size": 20, "raw_value_size": 3355818, "raw_average_value_size": 3592, "num_data_blocks": 263, "num_entries": 934, "num_filter_entries": 934, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759327237, "oldest_key_time": 1759327237, "file_creation_time": 1759327464, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e1fb0bd7-c6bf-40f2-8bfb-ffd039289f42", "db_session_id": "NJZTWL88H5HSB4Q4NEC9", "orig_file_number": 82, "seqno_to_time_mapping": "N/A"}}
Oct  1 10:04:24 np0005464214 ceph-mon[74802]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 45] Flush lasted 282321 microseconds, and 12836 cpu microseconds.
Oct  1 10:04:24 np0005464214 ceph-mon[74802]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  1 10:04:24 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:04:24.671881) [db/flush_job.cc:967] [default] [JOB 45] Level-0 flush table #82: 3383793 bytes OK
Oct  1 10:04:24 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:04:24.671906) [db/memtable_list.cc:519] [default] Level-0 commit table #82 started
Oct  1 10:04:24 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:04:24.690985) [db/memtable_list.cc:722] [default] Level-0 commit table #82: memtable #1 done
Oct  1 10:04:24 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:04:24.691015) EVENT_LOG_v1 {"time_micros": 1759327464691005, "job": 45, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  1 10:04:24 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:04:24.691041) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  1 10:04:24 np0005464214 ceph-mon[74802]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 45] Try to delete WAL files size 3452243, prev total WAL file size 3452243, number of live WAL files 2.
Oct  1 10:04:24 np0005464214 ceph-mon[74802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000078.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  1 10:04:24 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:04:24.692571) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730033323633' seq:72057594037927935, type:22 .. '7061786F730033353135' seq:0, type:0; will stop at (end)
Oct  1 10:04:24 np0005464214 ceph-mon[74802]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 46] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  1 10:04:24 np0005464214 ceph-mon[74802]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 45 Base level 0, inputs: [82(3304KB)], [80(7322KB)]
Oct  1 10:04:24 np0005464214 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759327464692624, "job": 46, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [82], "files_L6": [80], "score": -1, "input_data_size": 10881903, "oldest_snapshot_seqno": -1}
Oct  1 10:04:24 np0005464214 podman[297770]: 2025-10-01 14:04:24.642849701 +0000 UTC m=+0.037001046 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 10:04:24 np0005464214 podman[297770]: 2025-10-01 14:04:24.82953818 +0000 UTC m=+0.223689525 container create 66308f67a2757ee337aca0a37071fe25c135cb7a00625c655d0381aabd27f3d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_albattani, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Oct  1 10:04:24 np0005464214 ceph-mon[74802]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 46] Generated table #83: 5809 keys, 9138023 bytes, temperature: kUnknown
Oct  1 10:04:24 np0005464214 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759327464842125, "cf_name": "default", "job": 46, "event": "table_file_creation", "file_number": 83, "file_size": 9138023, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9098712, "index_size": 23713, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 14533, "raw_key_size": 146593, "raw_average_key_size": 25, "raw_value_size": 8993199, "raw_average_value_size": 1548, "num_data_blocks": 967, "num_entries": 5809, "num_filter_entries": 5809, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759324077, "oldest_key_time": 0, "file_creation_time": 1759327464, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e1fb0bd7-c6bf-40f2-8bfb-ffd039289f42", "db_session_id": "NJZTWL88H5HSB4Q4NEC9", "orig_file_number": 83, "seqno_to_time_mapping": "N/A"}}
Oct  1 10:04:24 np0005464214 ceph-mon[74802]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  1 10:04:24 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:04:24.842354) [db/compaction/compaction_job.cc:1663] [default] [JOB 46] Compacted 1@0 + 1@6 files to L6 => 9138023 bytes
Oct  1 10:04:24 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:04:24.978809) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 72.8 rd, 61.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.2, 7.2 +0.0 blob) out(8.7 +0.0 blob), read-write-amplify(5.9) write-amplify(2.7) OK, records in: 6323, records dropped: 514 output_compression: NoCompression
Oct  1 10:04:24 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:04:24.978861) EVENT_LOG_v1 {"time_micros": 1759327464978841, "job": 46, "event": "compaction_finished", "compaction_time_micros": 149572, "compaction_time_cpu_micros": 37015, "output_level": 6, "num_output_files": 1, "total_output_size": 9138023, "num_input_records": 6323, "num_output_records": 5809, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  1 10:04:24 np0005464214 ceph-mon[74802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000082.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  1 10:04:24 np0005464214 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759327464980066, "job": 46, "event": "table_file_deletion", "file_number": 82}
Oct  1 10:04:24 np0005464214 ceph-mon[74802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000080.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  1 10:04:24 np0005464214 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759327464982451, "job": 46, "event": "table_file_deletion", "file_number": 80}
Oct  1 10:04:24 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:04:24.692447) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 10:04:24 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:04:24.982516) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 10:04:24 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:04:24.982521) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 10:04:24 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:04:24.982523) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 10:04:24 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:04:24.982524) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 10:04:24 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:04:24.982526) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 10:04:25 np0005464214 systemd[1]: Started libpod-conmon-66308f67a2757ee337aca0a37071fe25c135cb7a00625c655d0381aabd27f3d3.scope.
Oct  1 10:04:25 np0005464214 systemd[1]: Started libcrun container.
Oct  1 10:04:25 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1845: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:04:25 np0005464214 podman[297770]: 2025-10-01 14:04:25.563427861 +0000 UTC m=+0.957579246 container init 66308f67a2757ee337aca0a37071fe25c135cb7a00625c655d0381aabd27f3d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_albattani, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 10:04:25 np0005464214 podman[297770]: 2025-10-01 14:04:25.581244497 +0000 UTC m=+0.975395842 container start 66308f67a2757ee337aca0a37071fe25c135cb7a00625c655d0381aabd27f3d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_albattani, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 10:04:25 np0005464214 priceless_albattani[297786]: 167 167
Oct  1 10:04:25 np0005464214 systemd[1]: libpod-66308f67a2757ee337aca0a37071fe25c135cb7a00625c655d0381aabd27f3d3.scope: Deactivated successfully.
Oct  1 10:04:25 np0005464214 conmon[297786]: conmon 66308f67a2757ee337ac <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-66308f67a2757ee337aca0a37071fe25c135cb7a00625c655d0381aabd27f3d3.scope/container/memory.events
Oct  1 10:04:25 np0005464214 nova_compute[260022]: 2025-10-01 14:04:25.736 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 10:04:25 np0005464214 podman[297770]: 2025-10-01 14:04:25.913260019 +0000 UTC m=+1.307411414 container attach 66308f67a2757ee337aca0a37071fe25c135cb7a00625c655d0381aabd27f3d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_albattani, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct  1 10:04:25 np0005464214 podman[297770]: 2025-10-01 14:04:25.915021715 +0000 UTC m=+1.309173110 container died 66308f67a2757ee337aca0a37071fe25c135cb7a00625c655d0381aabd27f3d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_albattani, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 10:04:26 np0005464214 systemd[1]: var-lib-containers-storage-overlay-ed78117063b5828c81d28d46912a2914c6353569f16cb46dae5405f8449f8bcd-merged.mount: Deactivated successfully.
Oct  1 10:04:26 np0005464214 podman[297770]: 2025-10-01 14:04:26.507106515 +0000 UTC m=+1.901257850 container remove 66308f67a2757ee337aca0a37071fe25c135cb7a00625c655d0381aabd27f3d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_albattani, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 10:04:26 np0005464214 systemd[1]: libpod-conmon-66308f67a2757ee337aca0a37071fe25c135cb7a00625c655d0381aabd27f3d3.scope: Deactivated successfully.
Oct  1 10:04:26 np0005464214 podman[297810]: 2025-10-01 14:04:26.723181456 +0000 UTC m=+0.044510124 container create 5a38f18b646b0773e41c36e01fa68e3539014d51c2d1fb996d5479b7b11c1146 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_davinci, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 10:04:26 np0005464214 systemd[1]: Started libpod-conmon-5a38f18b646b0773e41c36e01fa68e3539014d51c2d1fb996d5479b7b11c1146.scope.
Oct  1 10:04:26 np0005464214 podman[297810]: 2025-10-01 14:04:26.702963184 +0000 UTC m=+0.024291872 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 10:04:26 np0005464214 systemd[1]: Started libcrun container.
Oct  1 10:04:26 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8432ceb636eec2ba428bf57e1bd679ceb1093a3b6ccbb268e60161617953deeb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 10:04:26 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8432ceb636eec2ba428bf57e1bd679ceb1093a3b6ccbb268e60161617953deeb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 10:04:26 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8432ceb636eec2ba428bf57e1bd679ceb1093a3b6ccbb268e60161617953deeb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 10:04:26 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8432ceb636eec2ba428bf57e1bd679ceb1093a3b6ccbb268e60161617953deeb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 10:04:26 np0005464214 podman[297810]: 2025-10-01 14:04:26.82347548 +0000 UTC m=+0.144804168 container init 5a38f18b646b0773e41c36e01fa68e3539014d51c2d1fb996d5479b7b11c1146 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_davinci, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 10:04:26 np0005464214 podman[297810]: 2025-10-01 14:04:26.829997368 +0000 UTC m=+0.151326046 container start 5a38f18b646b0773e41c36e01fa68e3539014d51c2d1fb996d5479b7b11c1146 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_davinci, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True)
Oct  1 10:04:26 np0005464214 podman[297810]: 2025-10-01 14:04:26.833892622 +0000 UTC m=+0.155221320 container attach 5a38f18b646b0773e41c36e01fa68e3539014d51c2d1fb996d5479b7b11c1146 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_davinci, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Oct  1 10:04:27 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1846: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:04:27 np0005464214 modest_davinci[297827]: {
Oct  1 10:04:27 np0005464214 modest_davinci[297827]:    "0": [
Oct  1 10:04:27 np0005464214 modest_davinci[297827]:        {
Oct  1 10:04:27 np0005464214 modest_davinci[297827]:            "devices": [
Oct  1 10:04:27 np0005464214 modest_davinci[297827]:                "/dev/loop3"
Oct  1 10:04:27 np0005464214 modest_davinci[297827]:            ],
Oct  1 10:04:27 np0005464214 modest_davinci[297827]:            "lv_name": "ceph_lv0",
Oct  1 10:04:27 np0005464214 modest_davinci[297827]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  1 10:04:27 np0005464214 modest_davinci[297827]:            "lv_size": "21470642176",
Oct  1 10:04:27 np0005464214 modest_davinci[297827]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 10:04:27 np0005464214 modest_davinci[297827]:            "lv_uuid": "wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS",
Oct  1 10:04:27 np0005464214 modest_davinci[297827]:            "name": "ceph_lv0",
Oct  1 10:04:27 np0005464214 modest_davinci[297827]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  1 10:04:27 np0005464214 modest_davinci[297827]:            "tags": {
Oct  1 10:04:27 np0005464214 modest_davinci[297827]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  1 10:04:27 np0005464214 modest_davinci[297827]:                "ceph.block_uuid": "wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS",
Oct  1 10:04:27 np0005464214 modest_davinci[297827]:                "ceph.cephx_lockbox_secret": "",
Oct  1 10:04:27 np0005464214 modest_davinci[297827]:                "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 10:04:27 np0005464214 modest_davinci[297827]:                "ceph.cluster_name": "ceph",
Oct  1 10:04:27 np0005464214 modest_davinci[297827]:                "ceph.crush_device_class": "",
Oct  1 10:04:27 np0005464214 modest_davinci[297827]:                "ceph.encrypted": "0",
Oct  1 10:04:27 np0005464214 modest_davinci[297827]:                "ceph.osd_fsid": "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982",
Oct  1 10:04:27 np0005464214 modest_davinci[297827]:                "ceph.osd_id": "0",
Oct  1 10:04:27 np0005464214 modest_davinci[297827]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 10:04:27 np0005464214 modest_davinci[297827]:                "ceph.type": "block",
Oct  1 10:04:27 np0005464214 modest_davinci[297827]:                "ceph.vdo": "0"
Oct  1 10:04:27 np0005464214 modest_davinci[297827]:            },
Oct  1 10:04:27 np0005464214 modest_davinci[297827]:            "type": "block",
Oct  1 10:04:27 np0005464214 modest_davinci[297827]:            "vg_name": "ceph_vg0"
Oct  1 10:04:27 np0005464214 modest_davinci[297827]:        }
Oct  1 10:04:27 np0005464214 modest_davinci[297827]:    ],
Oct  1 10:04:27 np0005464214 modest_davinci[297827]:    "1": [
Oct  1 10:04:27 np0005464214 modest_davinci[297827]:        {
Oct  1 10:04:27 np0005464214 modest_davinci[297827]:            "devices": [
Oct  1 10:04:27 np0005464214 modest_davinci[297827]:                "/dev/loop4"
Oct  1 10:04:27 np0005464214 modest_davinci[297827]:            ],
Oct  1 10:04:27 np0005464214 modest_davinci[297827]:            "lv_name": "ceph_lv1",
Oct  1 10:04:27 np0005464214 modest_davinci[297827]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  1 10:04:27 np0005464214 modest_davinci[297827]:            "lv_size": "21470642176",
Oct  1 10:04:27 np0005464214 modest_davinci[297827]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f5852bc7-e830-489a-b8a9-42dfbbe71426,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 10:04:27 np0005464214 modest_davinci[297827]:            "lv_uuid": "BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY",
Oct  1 10:04:27 np0005464214 modest_davinci[297827]:            "name": "ceph_lv1",
Oct  1 10:04:27 np0005464214 modest_davinci[297827]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  1 10:04:27 np0005464214 modest_davinci[297827]:            "tags": {
Oct  1 10:04:27 np0005464214 modest_davinci[297827]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  1 10:04:27 np0005464214 modest_davinci[297827]:                "ceph.block_uuid": "BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY",
Oct  1 10:04:27 np0005464214 modest_davinci[297827]:                "ceph.cephx_lockbox_secret": "",
Oct  1 10:04:27 np0005464214 modest_davinci[297827]:                "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 10:04:27 np0005464214 modest_davinci[297827]:                "ceph.cluster_name": "ceph",
Oct  1 10:04:27 np0005464214 modest_davinci[297827]:                "ceph.crush_device_class": "",
Oct  1 10:04:27 np0005464214 modest_davinci[297827]:                "ceph.encrypted": "0",
Oct  1 10:04:27 np0005464214 modest_davinci[297827]:                "ceph.osd_fsid": "f5852bc7-e830-489a-b8a9-42dfbbe71426",
Oct  1 10:04:27 np0005464214 modest_davinci[297827]:                "ceph.osd_id": "1",
Oct  1 10:04:27 np0005464214 modest_davinci[297827]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 10:04:27 np0005464214 modest_davinci[297827]:                "ceph.type": "block",
Oct  1 10:04:27 np0005464214 modest_davinci[297827]:                "ceph.vdo": "0"
Oct  1 10:04:27 np0005464214 modest_davinci[297827]:            },
Oct  1 10:04:27 np0005464214 modest_davinci[297827]:            "type": "block",
Oct  1 10:04:27 np0005464214 modest_davinci[297827]:            "vg_name": "ceph_vg1"
Oct  1 10:04:27 np0005464214 modest_davinci[297827]:        }
Oct  1 10:04:27 np0005464214 modest_davinci[297827]:    ],
Oct  1 10:04:27 np0005464214 modest_davinci[297827]:    "2": [
Oct  1 10:04:27 np0005464214 modest_davinci[297827]:        {
Oct  1 10:04:27 np0005464214 modest_davinci[297827]:            "devices": [
Oct  1 10:04:27 np0005464214 modest_davinci[297827]:                "/dev/loop5"
Oct  1 10:04:27 np0005464214 modest_davinci[297827]:            ],
Oct  1 10:04:27 np0005464214 modest_davinci[297827]:            "lv_name": "ceph_lv2",
Oct  1 10:04:27 np0005464214 modest_davinci[297827]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  1 10:04:27 np0005464214 modest_davinci[297827]:            "lv_size": "21470642176",
Oct  1 10:04:27 np0005464214 modest_davinci[297827]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c4c937e2-a8a8-47c3-af37-fdedb6fff1f9,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 10:04:27 np0005464214 modest_davinci[297827]:            "lv_uuid": "mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK",
Oct  1 10:04:27 np0005464214 modest_davinci[297827]:            "name": "ceph_lv2",
Oct  1 10:04:27 np0005464214 modest_davinci[297827]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  1 10:04:27 np0005464214 modest_davinci[297827]:            "tags": {
Oct  1 10:04:27 np0005464214 modest_davinci[297827]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  1 10:04:27 np0005464214 modest_davinci[297827]:                "ceph.block_uuid": "mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK",
Oct  1 10:04:27 np0005464214 modest_davinci[297827]:                "ceph.cephx_lockbox_secret": "",
Oct  1 10:04:27 np0005464214 modest_davinci[297827]:                "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 10:04:27 np0005464214 modest_davinci[297827]:                "ceph.cluster_name": "ceph",
Oct  1 10:04:27 np0005464214 modest_davinci[297827]:                "ceph.crush_device_class": "",
Oct  1 10:04:27 np0005464214 modest_davinci[297827]:                "ceph.encrypted": "0",
Oct  1 10:04:27 np0005464214 modest_davinci[297827]:                "ceph.osd_fsid": "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9",
Oct  1 10:04:27 np0005464214 modest_davinci[297827]:                "ceph.osd_id": "2",
Oct  1 10:04:27 np0005464214 modest_davinci[297827]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 10:04:27 np0005464214 modest_davinci[297827]:                "ceph.type": "block",
Oct  1 10:04:27 np0005464214 modest_davinci[297827]:                "ceph.vdo": "0"
Oct  1 10:04:27 np0005464214 modest_davinci[297827]:            },
Oct  1 10:04:27 np0005464214 modest_davinci[297827]:            "type": "block",
Oct  1 10:04:27 np0005464214 modest_davinci[297827]:            "vg_name": "ceph_vg2"
Oct  1 10:04:27 np0005464214 modest_davinci[297827]:        }
Oct  1 10:04:27 np0005464214 modest_davinci[297827]:    ]
Oct  1 10:04:27 np0005464214 modest_davinci[297827]: }
Oct  1 10:04:27 np0005464214 systemd[1]: libpod-5a38f18b646b0773e41c36e01fa68e3539014d51c2d1fb996d5479b7b11c1146.scope: Deactivated successfully.
Oct  1 10:04:27 np0005464214 podman[297810]: 2025-10-01 14:04:27.643662223 +0000 UTC m=+0.964990921 container died 5a38f18b646b0773e41c36e01fa68e3539014d51c2d1fb996d5479b7b11c1146 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_davinci, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 10:04:27 np0005464214 systemd[1]: var-lib-containers-storage-overlay-8432ceb636eec2ba428bf57e1bd679ceb1093a3b6ccbb268e60161617953deeb-merged.mount: Deactivated successfully.
Oct  1 10:04:27 np0005464214 podman[297810]: 2025-10-01 14:04:27.703893956 +0000 UTC m=+1.025222624 container remove 5a38f18b646b0773e41c36e01fa68e3539014d51c2d1fb996d5479b7b11c1146 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_davinci, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct  1 10:04:27 np0005464214 systemd[1]: libpod-conmon-5a38f18b646b0773e41c36e01fa68e3539014d51c2d1fb996d5479b7b11c1146.scope: Deactivated successfully.
Oct  1 10:04:27 np0005464214 podman[297844]: 2025-10-01 14:04:27.757004142 +0000 UTC m=+0.075283261 container health_status a1dc94033b3cdaf0c1939938a446bc6911aa0e2bf0fa0c22c3e618390e8d96e1 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=36bccb96575468ec919301205d8daa2c, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250923, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct  1 10:04:27 np0005464214 podman[297845]: 2025-10-01 14:04:27.758015574 +0000 UTC m=+0.082381647 container health_status c13c85560319b246b9406aed1b6b3015b2c424d3ffff37d0301b6634f5976b0d (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20250923, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true, config_id=iscsid)
Oct  1 10:04:27 np0005464214 podman[297837]: 2025-10-01 14:04:27.759090608 +0000 UTC m=+0.088310105 container health_status 583cde33fa111e3f81ff9a7adf51b7bee43cd123d54d60383b2d474b3b5066ad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c, org.label-schema.build-date=20250923, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 10:04:27 np0005464214 podman[297846]: 2025-10-01 14:04:27.786496998 +0000 UTC m=+0.097268809 container health_status dfa509645741d50db256c392f2c7e50ef8a1653f296eb853aefbe3d2ba4746c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20250923, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct  1 10:04:27 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 10:04:28 np0005464214 nova_compute[260022]: 2025-10-01 14:04:28.341 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 10:04:28 np0005464214 podman[298067]: 2025-10-01 14:04:28.427643925 +0000 UTC m=+0.049236903 container create c861c14ce59cf720a6107fa8ab628dc15524dd62a1f457c8914fc17d1b4177d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_merkle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 10:04:28 np0005464214 systemd[1]: Started libpod-conmon-c861c14ce59cf720a6107fa8ab628dc15524dd62a1f457c8914fc17d1b4177d0.scope.
Oct  1 10:04:28 np0005464214 podman[298067]: 2025-10-01 14:04:28.405005257 +0000 UTC m=+0.026598265 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 10:04:28 np0005464214 systemd[1]: Started libcrun container.
Oct  1 10:04:28 np0005464214 podman[298067]: 2025-10-01 14:04:28.531540994 +0000 UTC m=+0.153134042 container init c861c14ce59cf720a6107fa8ab628dc15524dd62a1f457c8914fc17d1b4177d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_merkle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 10:04:28 np0005464214 podman[298067]: 2025-10-01 14:04:28.542925876 +0000 UTC m=+0.164518884 container start c861c14ce59cf720a6107fa8ab628dc15524dd62a1f457c8914fc17d1b4177d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_merkle, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Oct  1 10:04:28 np0005464214 podman[298067]: 2025-10-01 14:04:28.546792579 +0000 UTC m=+0.168385587 container attach c861c14ce59cf720a6107fa8ab628dc15524dd62a1f457c8914fc17d1b4177d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_merkle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 10:04:28 np0005464214 agitated_merkle[298084]: 167 167
Oct  1 10:04:28 np0005464214 systemd[1]: libpod-c861c14ce59cf720a6107fa8ab628dc15524dd62a1f457c8914fc17d1b4177d0.scope: Deactivated successfully.
Oct  1 10:04:28 np0005464214 conmon[298084]: conmon c861c14ce59cf720a610 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c861c14ce59cf720a6107fa8ab628dc15524dd62a1f457c8914fc17d1b4177d0.scope/container/memory.events
Oct  1 10:04:28 np0005464214 podman[298067]: 2025-10-01 14:04:28.548526204 +0000 UTC m=+0.170119172 container died c861c14ce59cf720a6107fa8ab628dc15524dd62a1f457c8914fc17d1b4177d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_merkle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 10:04:28 np0005464214 systemd[1]: var-lib-containers-storage-overlay-f520a843bf83b957aa10ca3d26ef4ed98426b5e83adc0ab8b877291f37453c60-merged.mount: Deactivated successfully.
Oct  1 10:04:28 np0005464214 podman[298067]: 2025-10-01 14:04:28.584687202 +0000 UTC m=+0.206280170 container remove c861c14ce59cf720a6107fa8ab628dc15524dd62a1f457c8914fc17d1b4177d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_merkle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 10:04:28 np0005464214 systemd[1]: libpod-conmon-c861c14ce59cf720a6107fa8ab628dc15524dd62a1f457c8914fc17d1b4177d0.scope: Deactivated successfully.
Oct  1 10:04:28 np0005464214 podman[298107]: 2025-10-01 14:04:28.764099939 +0000 UTC m=+0.040431454 container create 472100a411d8ac52c1e57a2e9608a5a34f951249288ab663091c9e71d152a69c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_dirac, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 10:04:28 np0005464214 systemd[1]: Started libpod-conmon-472100a411d8ac52c1e57a2e9608a5a34f951249288ab663091c9e71d152a69c.scope.
Oct  1 10:04:28 np0005464214 systemd[1]: Started libcrun container.
Oct  1 10:04:28 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cca1a76dadf543dfe02635d188de71c06ea0f923461da1b61d1ae6c76d052f77/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 10:04:28 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cca1a76dadf543dfe02635d188de71c06ea0f923461da1b61d1ae6c76d052f77/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 10:04:28 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cca1a76dadf543dfe02635d188de71c06ea0f923461da1b61d1ae6c76d052f77/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 10:04:28 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cca1a76dadf543dfe02635d188de71c06ea0f923461da1b61d1ae6c76d052f77/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 10:04:28 np0005464214 podman[298107]: 2025-10-01 14:04:28.838099269 +0000 UTC m=+0.114430844 container init 472100a411d8ac52c1e57a2e9608a5a34f951249288ab663091c9e71d152a69c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_dirac, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Oct  1 10:04:28 np0005464214 podman[298107]: 2025-10-01 14:04:28.747925865 +0000 UTC m=+0.024257400 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 10:04:28 np0005464214 podman[298107]: 2025-10-01 14:04:28.847893259 +0000 UTC m=+0.124224774 container start 472100a411d8ac52c1e57a2e9608a5a34f951249288ab663091c9e71d152a69c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_dirac, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Oct  1 10:04:28 np0005464214 podman[298107]: 2025-10-01 14:04:28.851552576 +0000 UTC m=+0.127884111 container attach 472100a411d8ac52c1e57a2e9608a5a34f951249288ab663091c9e71d152a69c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_dirac, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 10:04:29 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1847: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:04:29 np0005464214 nova_compute[260022]: 2025-10-01 14:04:29.345 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 10:04:29 np0005464214 affectionate_dirac[298124]: {
Oct  1 10:04:29 np0005464214 affectionate_dirac[298124]:    "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982": {
Oct  1 10:04:29 np0005464214 affectionate_dirac[298124]:        "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 10:04:29 np0005464214 affectionate_dirac[298124]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  1 10:04:29 np0005464214 affectionate_dirac[298124]:        "osd_id": 0,
Oct  1 10:04:29 np0005464214 affectionate_dirac[298124]:        "osd_uuid": "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982",
Oct  1 10:04:29 np0005464214 affectionate_dirac[298124]:        "type": "bluestore"
Oct  1 10:04:29 np0005464214 affectionate_dirac[298124]:    },
Oct  1 10:04:29 np0005464214 affectionate_dirac[298124]:    "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9": {
Oct  1 10:04:29 np0005464214 affectionate_dirac[298124]:        "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 10:04:29 np0005464214 affectionate_dirac[298124]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  1 10:04:29 np0005464214 affectionate_dirac[298124]:        "osd_id": 2,
Oct  1 10:04:29 np0005464214 affectionate_dirac[298124]:        "osd_uuid": "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9",
Oct  1 10:04:29 np0005464214 affectionate_dirac[298124]:        "type": "bluestore"
Oct  1 10:04:29 np0005464214 affectionate_dirac[298124]:    },
Oct  1 10:04:29 np0005464214 affectionate_dirac[298124]:    "f5852bc7-e830-489a-b8a9-42dfbbe71426": {
Oct  1 10:04:29 np0005464214 affectionate_dirac[298124]:        "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 10:04:29 np0005464214 affectionate_dirac[298124]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  1 10:04:29 np0005464214 affectionate_dirac[298124]:        "osd_id": 1,
Oct  1 10:04:29 np0005464214 affectionate_dirac[298124]:        "osd_uuid": "f5852bc7-e830-489a-b8a9-42dfbbe71426",
Oct  1 10:04:29 np0005464214 affectionate_dirac[298124]:        "type": "bluestore"
Oct  1 10:04:29 np0005464214 affectionate_dirac[298124]:    }
Oct  1 10:04:29 np0005464214 affectionate_dirac[298124]: }
Oct  1 10:04:29 np0005464214 systemd[1]: libpod-472100a411d8ac52c1e57a2e9608a5a34f951249288ab663091c9e71d152a69c.scope: Deactivated successfully.
Oct  1 10:04:29 np0005464214 podman[298107]: 2025-10-01 14:04:29.865860902 +0000 UTC m=+1.142192427 container died 472100a411d8ac52c1e57a2e9608a5a34f951249288ab663091c9e71d152a69c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_dirac, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 10:04:29 np0005464214 systemd[1]: libpod-472100a411d8ac52c1e57a2e9608a5a34f951249288ab663091c9e71d152a69c.scope: Consumed 1.023s CPU time.
Oct  1 10:04:30 np0005464214 systemd[1]: var-lib-containers-storage-overlay-cca1a76dadf543dfe02635d188de71c06ea0f923461da1b61d1ae6c76d052f77-merged.mount: Deactivated successfully.
Oct  1 10:04:30 np0005464214 podman[298107]: 2025-10-01 14:04:30.331973652 +0000 UTC m=+1.608305207 container remove 472100a411d8ac52c1e57a2e9608a5a34f951249288ab663091c9e71d152a69c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_dirac, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct  1 10:04:30 np0005464214 systemd[1]: libpod-conmon-472100a411d8ac52c1e57a2e9608a5a34f951249288ab663091c9e71d152a69c.scope: Deactivated successfully.
Oct  1 10:04:30 np0005464214 nova_compute[260022]: 2025-10-01 14:04:30.348 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 10:04:30 np0005464214 nova_compute[260022]: 2025-10-01 14:04:30.348 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  1 10:04:30 np0005464214 nova_compute[260022]: 2025-10-01 14:04:30.348 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  1 10:04:30 np0005464214 nova_compute[260022]: 2025-10-01 14:04:30.369 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  1 10:04:30 np0005464214 nova_compute[260022]: 2025-10-01 14:04:30.370 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 10:04:30 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  1 10:04:30 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 10:04:30 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  1 10:04:30 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 10:04:30 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev 79f48b31-3a26-40e4-ba9d-6426c8896d07 does not exist
Oct  1 10:04:30 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev 4ae6580a-5e29-4e32-b044-32af3bde5940 does not exist
Oct  1 10:04:31 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1848: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:04:31 np0005464214 nova_compute[260022]: 2025-10-01 14:04:31.344 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 10:04:31 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 10:04:31 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 10:04:32 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 10:04:33 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1849: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:04:35 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1850: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:04:35 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 14:04:35.551 161890 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=26, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'd2:60:33', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '86:05:a5:2a:6f:f1'}, ipsec=False) old=SB_Global(nb_cfg=25) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  1 10:04:35 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 14:04:35.552 161890 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  1 10:04:37 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1851: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:04:37 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 10:04:39 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1852: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:04:40 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 14:04:40.554 161890 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=7280030e-2ba6-406c-9fae-f8284a927c47, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '26'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  1 10:04:41 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1853: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:04:42 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 10:04:43 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1854: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:04:44 np0005464214 nova_compute[260022]: 2025-10-01 14:04:44.342 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 10:04:45 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1855: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:04:47 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1856: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:04:47 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 10:04:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 10:04:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 10:04:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 10:04:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 10:04:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 10:04:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 10:04:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] Optimize plan auto_2025-10-01_14:04:47
Oct  1 10:04:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  1 10:04:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] do_upmap
Oct  1 10:04:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] pools ['volumes', 'default.rgw.control', '.mgr', 'backups', 'cephfs.cephfs.data', 'default.rgw.meta', 'images', 'cephfs.cephfs.meta', '.rgw.root', 'default.rgw.log', 'vms']
Oct  1 10:04:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] prepared 0/10 changes
Oct  1 10:04:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  1 10:04:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  1 10:04:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  1 10:04:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  1 10:04:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  1 10:04:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  1 10:04:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  1 10:04:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  1 10:04:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  1 10:04:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  1 10:04:49 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1857: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:04:51 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1858: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:04:52 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 10:04:53 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1859: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:04:55 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  1 10:04:55 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3908796319' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  1 10:04:55 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  1 10:04:55 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3908796319' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  1 10:04:55 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1860: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:04:57 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1861: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:04:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] _maybe_adjust
Oct  1 10:04:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 10:04:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  1 10:04:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 10:04:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 10:04:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 10:04:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 10:04:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 10:04:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 10:04:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 10:04:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Oct  1 10:04:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 10:04:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct  1 10:04:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 10:04:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 10:04:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 10:04:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  1 10:04:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 10:04:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  1 10:04:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 10:04:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 10:04:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 10:04:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  1 10:04:57 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 10:04:58 np0005464214 podman[298220]: 2025-10-01 14:04:58.517833776 +0000 UTC m=+0.071566233 container health_status a1dc94033b3cdaf0c1939938a446bc6911aa0e2bf0fa0c22c3e618390e8d96e1 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20250923, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct  1 10:04:58 np0005464214 podman[298222]: 2025-10-01 14:04:58.528875697 +0000 UTC m=+0.070604493 container health_status dfa509645741d50db256c392f2c7e50ef8a1653f296eb853aefbe3d2ba4746c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20250923, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  1 10:04:58 np0005464214 podman[298221]: 2025-10-01 14:04:58.533832544 +0000 UTC m=+0.083380798 container health_status c13c85560319b246b9406aed1b6b3015b2c424d3ffff37d0301b6634f5976b0d (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_id=iscsid, org.label-schema.build-date=20250923, tcib_managed=true, container_name=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct  1 10:04:58 np0005464214 podman[298219]: 2025-10-01 14:04:58.551522586 +0000 UTC m=+0.104813489 container health_status 583cde33fa111e3f81ff9a7adf51b7bee43cd123d54d60383b2d474b3b5066ad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20250923, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Oct  1 10:04:59 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1862: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:05:01 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1863: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:05:02 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 10:05:03 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1864: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:05:05 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1865: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:05:07 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1866: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:05:07 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 10:05:09 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1867: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:05:11 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1868: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:05:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 14:05:12.331 161890 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 10:05:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 14:05:12.332 161890 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 10:05:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 14:05:12.333 161890 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 10:05:12 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 10:05:13 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1869: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:05:13 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 14:05:13.720 161890 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:38:38:a8 10.100.0.2 2001:db8::f816:3eff:fe38:38a8'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fe38:38a8/64', 'neutron:device_id': 'ovnmeta-71bcb114-f0e3-490a-8b09-1cfd544476b4', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-71bcb114-f0e3-490a-8b09-1cfd544476b4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b7cb470b3bf042fe90ce061f7c990de4', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4ebfed82-d7c9-4432-b11e-589de366cfae, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=88ea7860-b2d6-4618-814f-53352d1b5566) old=Port_Binding(mac=['fa:16:3e:38:38:a8 10.100.0.2'], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-71bcb114-f0e3-490a-8b09-1cfd544476b4', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-71bcb114-f0e3-490a-8b09-1cfd544476b4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b7cb470b3bf042fe90ce061f7c990de4', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  1 10:05:13 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 14:05:13.722 161890 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 88ea7860-b2d6-4618-814f-53352d1b5566 in datapath 71bcb114-f0e3-490a-8b09-1cfd544476b4 updated#033[00m
Oct  1 10:05:13 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 14:05:13.723 161890 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 71bcb114-f0e3-490a-8b09-1cfd544476b4, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  1 10:05:13 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 14:05:13.724 291014 DEBUG oslo.privsep.daemon [-] privsep: reply[4529d8bf-9278-41f7-8f65-f761f590c29f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 10:05:15 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1870: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:05:15 np0005464214 nova_compute[260022]: 2025-10-01 14:05:15.344 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 10:05:17 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 14:05:17.210 161890 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:38:38:a8 10.100.0.2 2001:db8:0:1:f816:3eff:fe38:38a8 2001:db8::f816:3eff:fe38:38a8'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8:0:1:f816:3eff:fe38:38a8/64 2001:db8::f816:3eff:fe38:38a8/64', 'neutron:device_id': 'ovnmeta-71bcb114-f0e3-490a-8b09-1cfd544476b4', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-71bcb114-f0e3-490a-8b09-1cfd544476b4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b7cb470b3bf042fe90ce061f7c990de4', 'neutron:revision_number': '4', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4ebfed82-d7c9-4432-b11e-589de366cfae, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=88ea7860-b2d6-4618-814f-53352d1b5566) old=Port_Binding(mac=['fa:16:3e:38:38:a8 10.100.0.2 2001:db8::f816:3eff:fe38:38a8'], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fe38:38a8/64', 'neutron:device_id': 'ovnmeta-71bcb114-f0e3-490a-8b09-1cfd544476b4', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-71bcb114-f0e3-490a-8b09-1cfd544476b4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b7cb470b3bf042fe90ce061f7c990de4', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  1 10:05:17 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 14:05:17.211 161890 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 88ea7860-b2d6-4618-814f-53352d1b5566 in datapath 71bcb114-f0e3-490a-8b09-1cfd544476b4 updated#033[00m
Oct  1 10:05:17 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 14:05:17.213 161890 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 71bcb114-f0e3-490a-8b09-1cfd544476b4, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  1 10:05:17 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 14:05:17.214 291014 DEBUG oslo.privsep.daemon [-] privsep: reply[69ad2ce3-2847-4a4d-b758-a4da7e6a3756]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 10:05:17 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1871: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:05:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 10:05:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 10:05:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 10:05:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 10:05:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 10:05:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 10:05:17 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 10:05:19 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1872: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:05:19 np0005464214 nova_compute[260022]: 2025-10-01 14:05:19.345 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 10:05:21 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1873: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:05:21 np0005464214 nova_compute[260022]: 2025-10-01 14:05:21.364 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 10:05:21 np0005464214 nova_compute[260022]: 2025-10-01 14:05:21.364 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  1 10:05:22 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 10:05:23 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1874: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:05:23 np0005464214 nova_compute[260022]: 2025-10-01 14:05:23.345 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 10:05:23 np0005464214 nova_compute[260022]: 2025-10-01 14:05:23.369 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 10:05:23 np0005464214 nova_compute[260022]: 2025-10-01 14:05:23.369 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 10:05:23 np0005464214 nova_compute[260022]: 2025-10-01 14:05:23.370 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 10:05:23 np0005464214 nova_compute[260022]: 2025-10-01 14:05:23.370 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  1 10:05:23 np0005464214 nova_compute[260022]: 2025-10-01 14:05:23.370 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 10:05:23 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  1 10:05:23 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4082490945' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  1 10:05:23 np0005464214 nova_compute[260022]: 2025-10-01 14:05:23.799 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.429s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 10:05:23 np0005464214 nova_compute[260022]: 2025-10-01 14:05:23.979 2 WARNING nova.virt.libvirt.driver [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  1 10:05:23 np0005464214 nova_compute[260022]: 2025-10-01 14:05:23.980 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5054MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  1 10:05:23 np0005464214 nova_compute[260022]: 2025-10-01 14:05:23.980 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 10:05:23 np0005464214 nova_compute[260022]: 2025-10-01 14:05:23.980 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 10:05:24 np0005464214 nova_compute[260022]: 2025-10-01 14:05:24.060 2 INFO nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Instance 3d91db2b-812b-47ea-a0f8-0384b4c68597 has allocations against this compute host but is not found in the database.#033[00m
Oct  1 10:05:24 np0005464214 nova_compute[260022]: 2025-10-01 14:05:24.073 2 INFO nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Instance 84424f30-80d3-425e-b60f-86809ad3c076 has allocations against this compute host but is not found in the database.#033[00m
Oct  1 10:05:24 np0005464214 nova_compute[260022]: 2025-10-01 14:05:24.087 2 INFO nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Instance 88d7eb8f-28ed-4ee4-93c1-155f101dcd24 has allocations against this compute host but is not found in the database.#033[00m
Oct  1 10:05:24 np0005464214 nova_compute[260022]: 2025-10-01 14:05:24.087 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  1 10:05:24 np0005464214 nova_compute[260022]: 2025-10-01 14:05:24.087 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  1 10:05:24 np0005464214 nova_compute[260022]: 2025-10-01 14:05:24.228 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 10:05:24 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  1 10:05:24 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3447927317' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  1 10:05:24 np0005464214 nova_compute[260022]: 2025-10-01 14:05:24.679 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  1 10:05:24 np0005464214 nova_compute[260022]: 2025-10-01 14:05:24.684 2 DEBUG nova.compute.provider_tree [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Inventory has not changed in ProviderTree for provider: c1b9017d-7e6f-44ea-9ee2-bc19313d736f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct  1 10:05:24 np0005464214 nova_compute[260022]: 2025-10-01 14:05:24.699 2 DEBUG nova.scheduler.client.report [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Inventory has not changed for provider c1b9017d-7e6f-44ea-9ee2-bc19313d736f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct  1 10:05:24 np0005464214 nova_compute[260022]: 2025-10-01 14:05:24.700 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct  1 10:05:24 np0005464214 nova_compute[260022]: 2025-10-01 14:05:24.700 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.720s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  1 10:05:25 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1875: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:05:27 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1876: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:05:27 np0005464214 nova_compute[260022]: 2025-10-01 14:05:27.700 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  1 10:05:27 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 10:05:28 np0005464214 nova_compute[260022]: 2025-10-01 14:05:28.340 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  1 10:05:29 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1877: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:05:29 np0005464214 nova_compute[260022]: 2025-10-01 14:05:29.344 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  1 10:05:29 np0005464214 nova_compute[260022]: 2025-10-01 14:05:29.345 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  1 10:05:29 np0005464214 nova_compute[260022]: 2025-10-01 14:05:29.345 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Oct  1 10:05:29 np0005464214 nova_compute[260022]: 2025-10-01 14:05:29.358 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Oct  1 10:05:29 np0005464214 podman[298349]: 2025-10-01 14:05:29.530304974 +0000 UTC m=+0.061173714 container health_status dfa509645741d50db256c392f2c7e50ef8a1653f296eb853aefbe3d2ba4746c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=36bccb96575468ec919301205d8daa2c, container_name=ovn_metadata_agent, org.label-schema.build-date=20250923, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct  1 10:05:29 np0005464214 podman[298347]: 2025-10-01 14:05:29.530297894 +0000 UTC m=+0.068246418 container health_status a1dc94033b3cdaf0c1939938a446bc6911aa0e2bf0fa0c22c3e618390e8d96e1 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250923, org.label-schema.schema-version=1.0, tcib_build_tag=36bccb96575468ec919301205d8daa2c, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible)
Oct  1 10:05:29 np0005464214 podman[298348]: 2025-10-01 14:05:29.542419588 +0000 UTC m=+0.079941799 container health_status c13c85560319b246b9406aed1b6b3015b2c424d3ffff37d0301b6634f5976b0d (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20250923, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c)
Oct  1 10:05:29 np0005464214 podman[298346]: 2025-10-01 14:05:29.551945391 +0000 UTC m=+0.094303555 container health_status 583cde33fa111e3f81ff9a7adf51b7bee43cd123d54d60383b2d474b3b5066ad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20250923, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=36bccb96575468ec919301205d8daa2c, container_name=ovn_controller)
Oct  1 10:05:31 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1878: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:05:31 np0005464214 nova_compute[260022]: 2025-10-01 14:05:31.359 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  1 10:05:31 np0005464214 nova_compute[260022]: 2025-10-01 14:05:31.360 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct  1 10:05:31 np0005464214 nova_compute[260022]: 2025-10-01 14:05:31.360 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct  1 10:05:31 np0005464214 nova_compute[260022]: 2025-10-01 14:05:31.374 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct  1 10:05:31 np0005464214 nova_compute[260022]: 2025-10-01 14:05:31.374 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  1 10:05:31 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  1 10:05:31 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  1 10:05:31 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  1 10:05:31 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  1 10:05:31 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  1 10:05:31 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 10:05:31 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev 550061fc-e857-472f-b8e8-9a7e232aaea2 does not exist
Oct  1 10:05:31 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev ce22cbe1-9a1b-43bf-a1a7-b673db0090bd does not exist
Oct  1 10:05:31 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev 0c13f29e-a5ca-405b-af4f-7f15a43ba6ac does not exist
Oct  1 10:05:31 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  1 10:05:31 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  1 10:05:31 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  1 10:05:31 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  1 10:05:31 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  1 10:05:31 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  1 10:05:32 np0005464214 nova_compute[260022]: 2025-10-01 14:05:32.345 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  1 10:05:32 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  1 10:05:32 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 10:05:32 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  1 10:05:32 np0005464214 podman[298699]: 2025-10-01 14:05:32.591788382 +0000 UTC m=+0.119955800 container create 37c6fd300db87762dfe31be651fc702f488fc6112f3d591c6b818a23013b3ccd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_galois, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 10:05:32 np0005464214 podman[298699]: 2025-10-01 14:05:32.499556823 +0000 UTC m=+0.027724231 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 10:05:32 np0005464214 systemd[1]: Started libpod-conmon-37c6fd300db87762dfe31be651fc702f488fc6112f3d591c6b818a23013b3ccd.scope.
Oct  1 10:05:32 np0005464214 systemd[1]: Started libcrun container.
Oct  1 10:05:32 np0005464214 podman[298699]: 2025-10-01 14:05:32.747224448 +0000 UTC m=+0.275391936 container init 37c6fd300db87762dfe31be651fc702f488fc6112f3d591c6b818a23013b3ccd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_galois, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2)
Oct  1 10:05:32 np0005464214 podman[298699]: 2025-10-01 14:05:32.757652989 +0000 UTC m=+0.285820407 container start 37c6fd300db87762dfe31be651fc702f488fc6112f3d591c6b818a23013b3ccd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_galois, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct  1 10:05:32 np0005464214 strange_galois[298716]: 167 167
Oct  1 10:05:32 np0005464214 systemd[1]: libpod-37c6fd300db87762dfe31be651fc702f488fc6112f3d591c6b818a23013b3ccd.scope: Deactivated successfully.
Oct  1 10:05:32 np0005464214 podman[298699]: 2025-10-01 14:05:32.796940926 +0000 UTC m=+0.325108404 container attach 37c6fd300db87762dfe31be651fc702f488fc6112f3d591c6b818a23013b3ccd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_galois, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 10:05:32 np0005464214 podman[298699]: 2025-10-01 14:05:32.798214346 +0000 UTC m=+0.326381734 container died 37c6fd300db87762dfe31be651fc702f488fc6112f3d591c6b818a23013b3ccd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_galois, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 10:05:32 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 10:05:32 np0005464214 systemd[1]: var-lib-containers-storage-overlay-d625dafbb46cda5464d019e685132f6cc036636bb7e434e2469895fe0ad6bcef-merged.mount: Deactivated successfully.
Oct  1 10:05:33 np0005464214 podman[298699]: 2025-10-01 14:05:33.110718419 +0000 UTC m=+0.638885837 container remove 37c6fd300db87762dfe31be651fc702f488fc6112f3d591c6b818a23013b3ccd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_galois, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 10:05:33 np0005464214 systemd[1]: libpod-conmon-37c6fd300db87762dfe31be651fc702f488fc6112f3d591c6b818a23013b3ccd.scope: Deactivated successfully.
Oct  1 10:05:33 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1879: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:05:33 np0005464214 podman[298742]: 2025-10-01 14:05:33.416785057 +0000 UTC m=+0.105284974 container create 2572413f2285224edf12b6ffa2cf14adde0822f1fceb55d35188558328a04937 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_euclid, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Oct  1 10:05:33 np0005464214 podman[298742]: 2025-10-01 14:05:33.338625585 +0000 UTC m=+0.027125512 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 10:05:33 np0005464214 systemd[1]: Started libpod-conmon-2572413f2285224edf12b6ffa2cf14adde0822f1fceb55d35188558328a04937.scope.
Oct  1 10:05:33 np0005464214 systemd[1]: Started libcrun container.
Oct  1 10:05:33 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bfcbece50978f941193095c88d8c027c0776b1cff56b97c747f03a2bfcb6a045/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 10:05:33 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bfcbece50978f941193095c88d8c027c0776b1cff56b97c747f03a2bfcb6a045/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 10:05:33 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bfcbece50978f941193095c88d8c027c0776b1cff56b97c747f03a2bfcb6a045/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 10:05:33 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bfcbece50978f941193095c88d8c027c0776b1cff56b97c747f03a2bfcb6a045/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 10:05:33 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bfcbece50978f941193095c88d8c027c0776b1cff56b97c747f03a2bfcb6a045/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  1 10:05:33 np0005464214 podman[298742]: 2025-10-01 14:05:33.599311782 +0000 UTC m=+0.287811699 container init 2572413f2285224edf12b6ffa2cf14adde0822f1fceb55d35188558328a04937 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_euclid, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct  1 10:05:33 np0005464214 podman[298742]: 2025-10-01 14:05:33.608130602 +0000 UTC m=+0.296630519 container start 2572413f2285224edf12b6ffa2cf14adde0822f1fceb55d35188558328a04937 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_euclid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct  1 10:05:33 np0005464214 podman[298742]: 2025-10-01 14:05:33.675531832 +0000 UTC m=+0.364031769 container attach 2572413f2285224edf12b6ffa2cf14adde0822f1fceb55d35188558328a04937 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_euclid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 10:05:34 np0005464214 nervous_euclid[298759]: --> passed data devices: 0 physical, 3 LVM
Oct  1 10:05:34 np0005464214 nervous_euclid[298759]: --> relative data size: 1.0
Oct  1 10:05:34 np0005464214 nervous_euclid[298759]: --> All data devices are unavailable
Oct  1 10:05:34 np0005464214 systemd[1]: libpod-2572413f2285224edf12b6ffa2cf14adde0822f1fceb55d35188558328a04937.scope: Deactivated successfully.
Oct  1 10:05:34 np0005464214 podman[298742]: 2025-10-01 14:05:34.698401391 +0000 UTC m=+1.386901308 container died 2572413f2285224edf12b6ffa2cf14adde0822f1fceb55d35188558328a04937 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_euclid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct  1 10:05:34 np0005464214 systemd[1]: libpod-2572413f2285224edf12b6ffa2cf14adde0822f1fceb55d35188558328a04937.scope: Consumed 1.046s CPU time.
Oct  1 10:05:35 np0005464214 systemd[1]: var-lib-containers-storage-overlay-bfcbece50978f941193095c88d8c027c0776b1cff56b97c747f03a2bfcb6a045-merged.mount: Deactivated successfully.
Oct  1 10:05:35 np0005464214 podman[298742]: 2025-10-01 14:05:35.294017593 +0000 UTC m=+1.982517510 container remove 2572413f2285224edf12b6ffa2cf14adde0822f1fceb55d35188558328a04937 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_euclid, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct  1 10:05:35 np0005464214 systemd[1]: libpod-conmon-2572413f2285224edf12b6ffa2cf14adde0822f1fceb55d35188558328a04937.scope: Deactivated successfully.
Oct  1 10:05:35 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1880: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:05:36 np0005464214 podman[298941]: 2025-10-01 14:05:36.093771326 +0000 UTC m=+0.032541244 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 10:05:36 np0005464214 podman[298941]: 2025-10-01 14:05:36.217098432 +0000 UTC m=+0.155868300 container create 2c2a01b9a0ffdae70ded2335158f8df189d1a9238e065e5b8d6af4c1c3a63c34 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_haslett, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 10:05:36 np0005464214 systemd[1]: Started libpod-conmon-2c2a01b9a0ffdae70ded2335158f8df189d1a9238e065e5b8d6af4c1c3a63c34.scope.
Oct  1 10:05:36 np0005464214 systemd[1]: Started libcrun container.
Oct  1 10:05:36 np0005464214 podman[298941]: 2025-10-01 14:05:36.46771966 +0000 UTC m=+0.406489578 container init 2c2a01b9a0ffdae70ded2335158f8df189d1a9238e065e5b8d6af4c1c3a63c34 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_haslett, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 10:05:36 np0005464214 podman[298941]: 2025-10-01 14:05:36.479238685 +0000 UTC m=+0.418008563 container start 2c2a01b9a0ffdae70ded2335158f8df189d1a9238e065e5b8d6af4c1c3a63c34 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_haslett, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 10:05:36 np0005464214 condescending_haslett[298957]: 167 167
Oct  1 10:05:36 np0005464214 systemd[1]: libpod-2c2a01b9a0ffdae70ded2335158f8df189d1a9238e065e5b8d6af4c1c3a63c34.scope: Deactivated successfully.
Oct  1 10:05:36 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 14:05:36.485 161890 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=27, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'd2:60:33', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '86:05:a5:2a:6f:f1'}, ipsec=False) old=SB_Global(nb_cfg=26) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  1 10:05:36 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 14:05:36.489 161890 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  1 10:05:36 np0005464214 podman[298941]: 2025-10-01 14:05:36.546532891 +0000 UTC m=+0.485302749 container attach 2c2a01b9a0ffdae70ded2335158f8df189d1a9238e065e5b8d6af4c1c3a63c34 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_haslett, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 10:05:36 np0005464214 podman[298941]: 2025-10-01 14:05:36.548012599 +0000 UTC m=+0.486782537 container died 2c2a01b9a0ffdae70ded2335158f8df189d1a9238e065e5b8d6af4c1c3a63c34 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_haslett, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Oct  1 10:05:36 np0005464214 systemd[1]: var-lib-containers-storage-overlay-55f817f87ddd3a9c8cade059e62cfc5231112825c88a4e1dd9a89c29b347c2c0-merged.mount: Deactivated successfully.
Oct  1 10:05:36 np0005464214 podman[298941]: 2025-10-01 14:05:36.98393075 +0000 UTC m=+0.922700628 container remove 2c2a01b9a0ffdae70ded2335158f8df189d1a9238e065e5b8d6af4c1c3a63c34 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_haslett, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct  1 10:05:36 np0005464214 systemd[1]: libpod-conmon-2c2a01b9a0ffdae70ded2335158f8df189d1a9238e065e5b8d6af4c1c3a63c34.scope: Deactivated successfully.
Oct  1 10:05:37 np0005464214 podman[298984]: 2025-10-01 14:05:37.263130805 +0000 UTC m=+0.093908952 container create 5d08725e8a00514364dcaa7946e7e255195ae48bd8ebd4a2de3c64b3509fe988 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_heyrovsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct  1 10:05:37 np0005464214 podman[298984]: 2025-10-01 14:05:37.207467398 +0000 UTC m=+0.038245595 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 10:05:37 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1881: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:05:37 np0005464214 systemd[1]: Started libpod-conmon-5d08725e8a00514364dcaa7946e7e255195ae48bd8ebd4a2de3c64b3509fe988.scope.
Oct  1 10:05:37 np0005464214 systemd[1]: Started libcrun container.
Oct  1 10:05:37 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a835f8d89e0b4f371c2c0cc400f2335ac96d058a0ff7f1b27353e7417398451/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 10:05:37 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a835f8d89e0b4f371c2c0cc400f2335ac96d058a0ff7f1b27353e7417398451/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 10:05:37 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a835f8d89e0b4f371c2c0cc400f2335ac96d058a0ff7f1b27353e7417398451/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 10:05:37 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a835f8d89e0b4f371c2c0cc400f2335ac96d058a0ff7f1b27353e7417398451/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 10:05:37 np0005464214 podman[298984]: 2025-10-01 14:05:37.500781952 +0000 UTC m=+0.331560109 container init 5d08725e8a00514364dcaa7946e7e255195ae48bd8ebd4a2de3c64b3509fe988 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_heyrovsky, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 10:05:37 np0005464214 podman[298984]: 2025-10-01 14:05:37.521566171 +0000 UTC m=+0.352344328 container start 5d08725e8a00514364dcaa7946e7e255195ae48bd8ebd4a2de3c64b3509fe988 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_heyrovsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct  1 10:05:37 np0005464214 podman[298984]: 2025-10-01 14:05:37.528582204 +0000 UTC m=+0.359360361 container attach 5d08725e8a00514364dcaa7946e7e255195ae48bd8ebd4a2de3c64b3509fe988 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_heyrovsky, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct  1 10:05:37 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 10:05:38 np0005464214 crazy_heyrovsky[299002]: {
Oct  1 10:05:38 np0005464214 crazy_heyrovsky[299002]:    "0": [
Oct  1 10:05:38 np0005464214 crazy_heyrovsky[299002]:        {
Oct  1 10:05:38 np0005464214 crazy_heyrovsky[299002]:            "devices": [
Oct  1 10:05:38 np0005464214 crazy_heyrovsky[299002]:                "/dev/loop3"
Oct  1 10:05:38 np0005464214 crazy_heyrovsky[299002]:            ],
Oct  1 10:05:38 np0005464214 crazy_heyrovsky[299002]:            "lv_name": "ceph_lv0",
Oct  1 10:05:38 np0005464214 crazy_heyrovsky[299002]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  1 10:05:38 np0005464214 crazy_heyrovsky[299002]:            "lv_size": "21470642176",
Oct  1 10:05:38 np0005464214 crazy_heyrovsky[299002]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 10:05:38 np0005464214 crazy_heyrovsky[299002]:            "lv_uuid": "wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS",
Oct  1 10:05:38 np0005464214 crazy_heyrovsky[299002]:            "name": "ceph_lv0",
Oct  1 10:05:38 np0005464214 crazy_heyrovsky[299002]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  1 10:05:38 np0005464214 crazy_heyrovsky[299002]:            "tags": {
Oct  1 10:05:38 np0005464214 crazy_heyrovsky[299002]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  1 10:05:38 np0005464214 crazy_heyrovsky[299002]:                "ceph.block_uuid": "wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS",
Oct  1 10:05:38 np0005464214 crazy_heyrovsky[299002]:                "ceph.cephx_lockbox_secret": "",
Oct  1 10:05:38 np0005464214 crazy_heyrovsky[299002]:                "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 10:05:38 np0005464214 crazy_heyrovsky[299002]:                "ceph.cluster_name": "ceph",
Oct  1 10:05:38 np0005464214 crazy_heyrovsky[299002]:                "ceph.crush_device_class": "",
Oct  1 10:05:38 np0005464214 crazy_heyrovsky[299002]:                "ceph.encrypted": "0",
Oct  1 10:05:38 np0005464214 crazy_heyrovsky[299002]:                "ceph.osd_fsid": "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982",
Oct  1 10:05:38 np0005464214 crazy_heyrovsky[299002]:                "ceph.osd_id": "0",
Oct  1 10:05:38 np0005464214 crazy_heyrovsky[299002]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 10:05:38 np0005464214 crazy_heyrovsky[299002]:                "ceph.type": "block",
Oct  1 10:05:38 np0005464214 crazy_heyrovsky[299002]:                "ceph.vdo": "0"
Oct  1 10:05:38 np0005464214 crazy_heyrovsky[299002]:            },
Oct  1 10:05:38 np0005464214 crazy_heyrovsky[299002]:            "type": "block",
Oct  1 10:05:38 np0005464214 crazy_heyrovsky[299002]:            "vg_name": "ceph_vg0"
Oct  1 10:05:38 np0005464214 crazy_heyrovsky[299002]:        }
Oct  1 10:05:38 np0005464214 crazy_heyrovsky[299002]:    ],
Oct  1 10:05:38 np0005464214 crazy_heyrovsky[299002]:    "1": [
Oct  1 10:05:38 np0005464214 crazy_heyrovsky[299002]:        {
Oct  1 10:05:38 np0005464214 crazy_heyrovsky[299002]:            "devices": [
Oct  1 10:05:38 np0005464214 crazy_heyrovsky[299002]:                "/dev/loop4"
Oct  1 10:05:38 np0005464214 crazy_heyrovsky[299002]:            ],
Oct  1 10:05:38 np0005464214 crazy_heyrovsky[299002]:            "lv_name": "ceph_lv1",
Oct  1 10:05:38 np0005464214 crazy_heyrovsky[299002]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  1 10:05:38 np0005464214 crazy_heyrovsky[299002]:            "lv_size": "21470642176",
Oct  1 10:05:38 np0005464214 crazy_heyrovsky[299002]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f5852bc7-e830-489a-b8a9-42dfbbe71426,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 10:05:38 np0005464214 crazy_heyrovsky[299002]:            "lv_uuid": "BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY",
Oct  1 10:05:38 np0005464214 crazy_heyrovsky[299002]:            "name": "ceph_lv1",
Oct  1 10:05:38 np0005464214 crazy_heyrovsky[299002]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  1 10:05:38 np0005464214 crazy_heyrovsky[299002]:            "tags": {
Oct  1 10:05:38 np0005464214 crazy_heyrovsky[299002]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  1 10:05:38 np0005464214 crazy_heyrovsky[299002]:                "ceph.block_uuid": "BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY",
Oct  1 10:05:38 np0005464214 crazy_heyrovsky[299002]:                "ceph.cephx_lockbox_secret": "",
Oct  1 10:05:38 np0005464214 crazy_heyrovsky[299002]:                "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 10:05:38 np0005464214 crazy_heyrovsky[299002]:                "ceph.cluster_name": "ceph",
Oct  1 10:05:38 np0005464214 crazy_heyrovsky[299002]:                "ceph.crush_device_class": "",
Oct  1 10:05:38 np0005464214 crazy_heyrovsky[299002]:                "ceph.encrypted": "0",
Oct  1 10:05:38 np0005464214 crazy_heyrovsky[299002]:                "ceph.osd_fsid": "f5852bc7-e830-489a-b8a9-42dfbbe71426",
Oct  1 10:05:38 np0005464214 crazy_heyrovsky[299002]:                "ceph.osd_id": "1",
Oct  1 10:05:38 np0005464214 crazy_heyrovsky[299002]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 10:05:38 np0005464214 crazy_heyrovsky[299002]:                "ceph.type": "block",
Oct  1 10:05:38 np0005464214 crazy_heyrovsky[299002]:                "ceph.vdo": "0"
Oct  1 10:05:38 np0005464214 crazy_heyrovsky[299002]:            },
Oct  1 10:05:38 np0005464214 crazy_heyrovsky[299002]:            "type": "block",
Oct  1 10:05:38 np0005464214 crazy_heyrovsky[299002]:            "vg_name": "ceph_vg1"
Oct  1 10:05:38 np0005464214 crazy_heyrovsky[299002]:        }
Oct  1 10:05:38 np0005464214 crazy_heyrovsky[299002]:    ],
Oct  1 10:05:38 np0005464214 crazy_heyrovsky[299002]:    "2": [
Oct  1 10:05:38 np0005464214 crazy_heyrovsky[299002]:        {
Oct  1 10:05:38 np0005464214 crazy_heyrovsky[299002]:            "devices": [
Oct  1 10:05:38 np0005464214 crazy_heyrovsky[299002]:                "/dev/loop5"
Oct  1 10:05:38 np0005464214 crazy_heyrovsky[299002]:            ],
Oct  1 10:05:38 np0005464214 crazy_heyrovsky[299002]:            "lv_name": "ceph_lv2",
Oct  1 10:05:38 np0005464214 crazy_heyrovsky[299002]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  1 10:05:38 np0005464214 crazy_heyrovsky[299002]:            "lv_size": "21470642176",
Oct  1 10:05:38 np0005464214 crazy_heyrovsky[299002]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c4c937e2-a8a8-47c3-af37-fdedb6fff1f9,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 10:05:38 np0005464214 crazy_heyrovsky[299002]:            "lv_uuid": "mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK",
Oct  1 10:05:38 np0005464214 crazy_heyrovsky[299002]:            "name": "ceph_lv2",
Oct  1 10:05:38 np0005464214 crazy_heyrovsky[299002]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  1 10:05:38 np0005464214 crazy_heyrovsky[299002]:            "tags": {
Oct  1 10:05:38 np0005464214 crazy_heyrovsky[299002]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  1 10:05:38 np0005464214 crazy_heyrovsky[299002]:                "ceph.block_uuid": "mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK",
Oct  1 10:05:38 np0005464214 crazy_heyrovsky[299002]:                "ceph.cephx_lockbox_secret": "",
Oct  1 10:05:38 np0005464214 crazy_heyrovsky[299002]:                "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 10:05:38 np0005464214 crazy_heyrovsky[299002]:                "ceph.cluster_name": "ceph",
Oct  1 10:05:38 np0005464214 crazy_heyrovsky[299002]:                "ceph.crush_device_class": "",
Oct  1 10:05:38 np0005464214 crazy_heyrovsky[299002]:                "ceph.encrypted": "0",
Oct  1 10:05:38 np0005464214 crazy_heyrovsky[299002]:                "ceph.osd_fsid": "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9",
Oct  1 10:05:38 np0005464214 crazy_heyrovsky[299002]:                "ceph.osd_id": "2",
Oct  1 10:05:38 np0005464214 crazy_heyrovsky[299002]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 10:05:38 np0005464214 crazy_heyrovsky[299002]:                "ceph.type": "block",
Oct  1 10:05:38 np0005464214 crazy_heyrovsky[299002]:                "ceph.vdo": "0"
Oct  1 10:05:38 np0005464214 crazy_heyrovsky[299002]:            },
Oct  1 10:05:38 np0005464214 crazy_heyrovsky[299002]:            "type": "block",
Oct  1 10:05:38 np0005464214 crazy_heyrovsky[299002]:            "vg_name": "ceph_vg2"
Oct  1 10:05:38 np0005464214 crazy_heyrovsky[299002]:        }
Oct  1 10:05:38 np0005464214 crazy_heyrovsky[299002]:    ]
Oct  1 10:05:38 np0005464214 crazy_heyrovsky[299002]: }
Oct  1 10:05:38 np0005464214 systemd[1]: libpod-5d08725e8a00514364dcaa7946e7e255195ae48bd8ebd4a2de3c64b3509fe988.scope: Deactivated successfully.
Oct  1 10:05:38 np0005464214 podman[298984]: 2025-10-01 14:05:38.309093697 +0000 UTC m=+1.139871824 container died 5d08725e8a00514364dcaa7946e7e255195ae48bd8ebd4a2de3c64b3509fe988 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_heyrovsky, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct  1 10:05:38 np0005464214 systemd[1]: var-lib-containers-storage-overlay-0a835f8d89e0b4f371c2c0cc400f2335ac96d058a0ff7f1b27353e7417398451-merged.mount: Deactivated successfully.
Oct  1 10:05:38 np0005464214 podman[298984]: 2025-10-01 14:05:38.378673086 +0000 UTC m=+1.209451213 container remove 5d08725e8a00514364dcaa7946e7e255195ae48bd8ebd4a2de3c64b3509fe988 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_heyrovsky, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct  1 10:05:38 np0005464214 systemd[1]: libpod-conmon-5d08725e8a00514364dcaa7946e7e255195ae48bd8ebd4a2de3c64b3509fe988.scope: Deactivated successfully.
Oct  1 10:05:39 np0005464214 podman[299164]: 2025-10-01 14:05:39.163743104 +0000 UTC m=+0.057601631 container create e9100987ee36dbb444da0e237051ed63fb54f862c753eaf279e2fffb406a9607 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_goldstine, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2)
Oct  1 10:05:39 np0005464214 systemd[1]: Started libpod-conmon-e9100987ee36dbb444da0e237051ed63fb54f862c753eaf279e2fffb406a9607.scope.
Oct  1 10:05:39 np0005464214 systemd[1]: Started libcrun container.
Oct  1 10:05:39 np0005464214 podman[299164]: 2025-10-01 14:05:39.146797646 +0000 UTC m=+0.040656193 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 10:05:39 np0005464214 podman[299164]: 2025-10-01 14:05:39.248584697 +0000 UTC m=+0.142443304 container init e9100987ee36dbb444da0e237051ed63fb54f862c753eaf279e2fffb406a9607 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_goldstine, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct  1 10:05:39 np0005464214 podman[299164]: 2025-10-01 14:05:39.260242977 +0000 UTC m=+0.154101544 container start e9100987ee36dbb444da0e237051ed63fb54f862c753eaf279e2fffb406a9607 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_goldstine, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 10:05:39 np0005464214 podman[299164]: 2025-10-01 14:05:39.265022969 +0000 UTC m=+0.158881536 container attach e9100987ee36dbb444da0e237051ed63fb54f862c753eaf279e2fffb406a9607 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_goldstine, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2)
Oct  1 10:05:39 np0005464214 nervous_goldstine[299181]: 167 167
Oct  1 10:05:39 np0005464214 systemd[1]: libpod-e9100987ee36dbb444da0e237051ed63fb54f862c753eaf279e2fffb406a9607.scope: Deactivated successfully.
Oct  1 10:05:39 np0005464214 podman[299186]: 2025-10-01 14:05:39.316761112 +0000 UTC m=+0.036540131 container died e9100987ee36dbb444da0e237051ed63fb54f862c753eaf279e2fffb406a9607 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_goldstine, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 10:05:39 np0005464214 systemd[1]: var-lib-containers-storage-overlay-9ff8acfa1558267fd6d879d679658c71532a69ac492ae32071388753fc16c4c6-merged.mount: Deactivated successfully.
Oct  1 10:05:39 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1882: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:05:39 np0005464214 podman[299186]: 2025-10-01 14:05:39.372725179 +0000 UTC m=+0.092504188 container remove e9100987ee36dbb444da0e237051ed63fb54f862c753eaf279e2fffb406a9607 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_goldstine, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct  1 10:05:39 np0005464214 systemd[1]: libpod-conmon-e9100987ee36dbb444da0e237051ed63fb54f862c753eaf279e2fffb406a9607.scope: Deactivated successfully.
Oct  1 10:05:39 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 14:05:39.492 161890 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=7280030e-2ba6-406c-9fae-f8284a927c47, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '27'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  1 10:05:39 np0005464214 podman[299209]: 2025-10-01 14:05:39.642759233 +0000 UTC m=+0.067808394 container create 7418a24275c42612a88218a6d72fdefb50338f2b70daa9c09897990e0f9ec816 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_jackson, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Oct  1 10:05:39 np0005464214 systemd[1]: Started libpod-conmon-7418a24275c42612a88218a6d72fdefb50338f2b70daa9c09897990e0f9ec816.scope.
Oct  1 10:05:39 np0005464214 podman[299209]: 2025-10-01 14:05:39.614934039 +0000 UTC m=+0.039983280 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 10:05:39 np0005464214 systemd[1]: Started libcrun container.
Oct  1 10:05:39 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fab4794d735fccac2b7ed1c909b9fc72e8d213ffeb4383b6eafb7a5b9a2a4734/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 10:05:39 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fab4794d735fccac2b7ed1c909b9fc72e8d213ffeb4383b6eafb7a5b9a2a4734/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 10:05:39 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fab4794d735fccac2b7ed1c909b9fc72e8d213ffeb4383b6eafb7a5b9a2a4734/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 10:05:39 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fab4794d735fccac2b7ed1c909b9fc72e8d213ffeb4383b6eafb7a5b9a2a4734/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 10:05:39 np0005464214 podman[299209]: 2025-10-01 14:05:39.770918503 +0000 UTC m=+0.195967764 container init 7418a24275c42612a88218a6d72fdefb50338f2b70daa9c09897990e0f9ec816 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_jackson, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 10:05:39 np0005464214 podman[299209]: 2025-10-01 14:05:39.783086938 +0000 UTC m=+0.208136099 container start 7418a24275c42612a88218a6d72fdefb50338f2b70daa9c09897990e0f9ec816 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_jackson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 10:05:39 np0005464214 podman[299209]: 2025-10-01 14:05:39.787423367 +0000 UTC m=+0.212472568 container attach 7418a24275c42612a88218a6d72fdefb50338f2b70daa9c09897990e0f9ec816 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_jackson, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 10:05:40 np0005464214 exciting_jackson[299225]: {
Oct  1 10:05:40 np0005464214 exciting_jackson[299225]:    "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982": {
Oct  1 10:05:40 np0005464214 exciting_jackson[299225]:        "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 10:05:40 np0005464214 exciting_jackson[299225]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  1 10:05:40 np0005464214 exciting_jackson[299225]:        "osd_id": 0,
Oct  1 10:05:40 np0005464214 exciting_jackson[299225]:        "osd_uuid": "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982",
Oct  1 10:05:40 np0005464214 exciting_jackson[299225]:        "type": "bluestore"
Oct  1 10:05:40 np0005464214 exciting_jackson[299225]:    },
Oct  1 10:05:40 np0005464214 exciting_jackson[299225]:    "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9": {
Oct  1 10:05:40 np0005464214 exciting_jackson[299225]:        "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 10:05:40 np0005464214 exciting_jackson[299225]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  1 10:05:40 np0005464214 exciting_jackson[299225]:        "osd_id": 2,
Oct  1 10:05:40 np0005464214 exciting_jackson[299225]:        "osd_uuid": "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9",
Oct  1 10:05:40 np0005464214 exciting_jackson[299225]:        "type": "bluestore"
Oct  1 10:05:40 np0005464214 exciting_jackson[299225]:    },
Oct  1 10:05:40 np0005464214 exciting_jackson[299225]:    "f5852bc7-e830-489a-b8a9-42dfbbe71426": {
Oct  1 10:05:40 np0005464214 exciting_jackson[299225]:        "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 10:05:40 np0005464214 exciting_jackson[299225]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  1 10:05:40 np0005464214 exciting_jackson[299225]:        "osd_id": 1,
Oct  1 10:05:40 np0005464214 exciting_jackson[299225]:        "osd_uuid": "f5852bc7-e830-489a-b8a9-42dfbbe71426",
Oct  1 10:05:40 np0005464214 exciting_jackson[299225]:        "type": "bluestore"
Oct  1 10:05:40 np0005464214 exciting_jackson[299225]:    }
Oct  1 10:05:40 np0005464214 exciting_jackson[299225]: }
Oct  1 10:05:40 np0005464214 systemd[1]: libpod-7418a24275c42612a88218a6d72fdefb50338f2b70daa9c09897990e0f9ec816.scope: Deactivated successfully.
Oct  1 10:05:40 np0005464214 podman[299209]: 2025-10-01 14:05:40.84037569 +0000 UTC m=+1.265424931 container died 7418a24275c42612a88218a6d72fdefb50338f2b70daa9c09897990e0f9ec816 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_jackson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 10:05:40 np0005464214 systemd[1]: libpod-7418a24275c42612a88218a6d72fdefb50338f2b70daa9c09897990e0f9ec816.scope: Consumed 1.073s CPU time.
Oct  1 10:05:40 np0005464214 systemd[1]: var-lib-containers-storage-overlay-fab4794d735fccac2b7ed1c909b9fc72e8d213ffeb4383b6eafb7a5b9a2a4734-merged.mount: Deactivated successfully.
Oct  1 10:05:40 np0005464214 podman[299209]: 2025-10-01 14:05:40.900221039 +0000 UTC m=+1.325270210 container remove 7418a24275c42612a88218a6d72fdefb50338f2b70daa9c09897990e0f9ec816 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_jackson, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 10:05:40 np0005464214 systemd[1]: libpod-conmon-7418a24275c42612a88218a6d72fdefb50338f2b70daa9c09897990e0f9ec816.scope: Deactivated successfully.
Oct  1 10:05:40 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  1 10:05:40 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 10:05:40 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  1 10:05:40 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 10:05:40 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev 8d37a268-1c5c-4868-801f-2abd3b19e5e7 does not exist
Oct  1 10:05:40 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev 7df467f7-f369-469b-936b-4746451d9b35 does not exist
Oct  1 10:05:41 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1883: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:05:41 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 10:05:41 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 10:05:42 np0005464214 nova_compute[260022]: 2025-10-01 14:05:42.345 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 10:05:42 np0005464214 nova_compute[260022]: 2025-10-01 14:05:42.347 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Oct  1 10:05:42 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 10:05:43 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1884: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:05:45 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1885: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:05:47 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1886: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:05:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 10:05:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 10:05:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 10:05:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 10:05:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 10:05:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 10:05:47 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 10:05:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] Optimize plan auto_2025-10-01_14:05:47
Oct  1 10:05:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  1 10:05:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] do_upmap
Oct  1 10:05:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] pools ['cephfs.cephfs.data', '.rgw.root', 'default.rgw.meta', '.mgr', 'images', 'backups', 'vms', 'volumes', 'default.rgw.control', 'default.rgw.log', 'cephfs.cephfs.meta']
Oct  1 10:05:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] prepared 0/10 changes
Oct  1 10:05:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  1 10:05:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  1 10:05:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  1 10:05:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  1 10:05:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  1 10:05:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  1 10:05:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  1 10:05:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  1 10:05:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  1 10:05:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  1 10:05:49 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1887: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:05:50 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 14:05:50.815 161890 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:42:44:f3 10.100.0.2 2001:db8::f816:3eff:fe42:44f3'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fe42:44f3/64', 'neutron:device_id': 'ovnmeta-55c091cc-a453-4c16-90a2-45d57ba3ca96', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-55c091cc-a453-4c16-90a2-45d57ba3ca96', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b7cb470b3bf042fe90ce061f7c990de4', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5ce2640b-c69b-48a5-ac25-0e680aa474d5, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=e6d43e21-e122-4885-b8fa-19349c7a5738) old=Port_Binding(mac=['fa:16:3e:42:44:f3 10.100.0.2'], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-55c091cc-a453-4c16-90a2-45d57ba3ca96', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-55c091cc-a453-4c16-90a2-45d57ba3ca96', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b7cb470b3bf042fe90ce061f7c990de4', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  1 10:05:50 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 14:05:50.817 161890 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port e6d43e21-e122-4885-b8fa-19349c7a5738 in datapath 55c091cc-a453-4c16-90a2-45d57ba3ca96 updated#033[00m
Oct  1 10:05:50 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 14:05:50.819 161890 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 55c091cc-a453-4c16-90a2-45d57ba3ca96, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  1 10:05:50 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 14:05:50.820 291014 DEBUG oslo.privsep.daemon [-] privsep: reply[29037719-cf39-47be-9a4e-343792148127]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 10:05:51 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1888: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:05:52 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 10:05:53 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1889: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:05:55 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  1 10:05:55 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2090342336' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  1 10:05:55 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  1 10:05:55 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2090342336' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  1 10:05:55 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1890: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:05:57 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1891: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:05:57 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 14:05:57.428 161890 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:42:44:f3 10.100.0.2 2001:db8:0:1:f816:3eff:fe42:44f3 2001:db8::f816:3eff:fe42:44f3'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8:0:1:f816:3eff:fe42:44f3/64 2001:db8::f816:3eff:fe42:44f3/64', 'neutron:device_id': 'ovnmeta-55c091cc-a453-4c16-90a2-45d57ba3ca96', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-55c091cc-a453-4c16-90a2-45d57ba3ca96', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b7cb470b3bf042fe90ce061f7c990de4', 'neutron:revision_number': '4', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5ce2640b-c69b-48a5-ac25-0e680aa474d5, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=e6d43e21-e122-4885-b8fa-19349c7a5738) old=Port_Binding(mac=['fa:16:3e:42:44:f3 10.100.0.2 2001:db8::f816:3eff:fe42:44f3'], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fe42:44f3/64', 'neutron:device_id': 'ovnmeta-55c091cc-a453-4c16-90a2-45d57ba3ca96', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-55c091cc-a453-4c16-90a2-45d57ba3ca96', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b7cb470b3bf042fe90ce061f7c990de4', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  1 10:05:57 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 14:05:57.430 161890 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port e6d43e21-e122-4885-b8fa-19349c7a5738 in datapath 55c091cc-a453-4c16-90a2-45d57ba3ca96 updated#033[00m
Oct  1 10:05:57 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 14:05:57.432 161890 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 55c091cc-a453-4c16-90a2-45d57ba3ca96, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  1 10:05:57 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 14:05:57.433 291014 DEBUG oslo.privsep.daemon [-] privsep: reply[fd419c41-38ac-4a34-aab3-863bd119c8e7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 10:05:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] _maybe_adjust
Oct  1 10:05:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 10:05:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  1 10:05:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 10:05:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 10:05:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 10:05:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 10:05:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 10:05:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 10:05:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 10:05:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Oct  1 10:05:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 10:05:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct  1 10:05:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 10:05:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 10:05:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 10:05:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  1 10:05:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 10:05:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  1 10:05:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 10:05:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 10:05:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 10:05:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  1 10:05:57 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 10:05:59 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1892: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:06:00 np0005464214 podman[299331]: 2025-10-01 14:06:00.529126805 +0000 UTC m=+0.058558111 container health_status dfa509645741d50db256c392f2c7e50ef8a1653f296eb853aefbe3d2ba4746c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250923, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=36bccb96575468ec919301205d8daa2c, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Oct  1 10:06:00 np0005464214 podman[299325]: 2025-10-01 14:06:00.538578865 +0000 UTC m=+0.075178419 container health_status c13c85560319b246b9406aed1b6b3015b2c424d3ffff37d0301b6634f5976b0d (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20250923, org.label-schema.name=CentOS Stream 9 Base Image, container_name=iscsid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct  1 10:06:00 np0005464214 podman[299324]: 2025-10-01 14:06:00.538953066 +0000 UTC m=+0.079633319 container health_status a1dc94033b3cdaf0c1939938a446bc6911aa0e2bf0fa0c22c3e618390e8d96e1 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_id=multipathd, org.label-schema.build-date=20250923)
Oct  1 10:06:00 np0005464214 podman[299323]: 2025-10-01 14:06:00.552636922 +0000 UTC m=+0.100991668 container health_status 583cde33fa111e3f81ff9a7adf51b7bee43cd123d54d60383b2d474b3b5066ad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, config_id=ovn_controller, org.label-schema.build-date=20250923, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c)
Oct  1 10:06:01 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1893: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:06:02 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 10:06:03 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1894: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct  1 10:06:05 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1895: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct  1 10:06:07 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1896: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct  1 10:06:07 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 10:06:09 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1897: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct  1 10:06:11 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1898: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct  1 10:06:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 14:06:12.332 161890 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 10:06:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 14:06:12.333 161890 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 10:06:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 14:06:12.333 161890 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 10:06:12 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 10:06:13 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1899: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct  1 10:06:15 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1900: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:06:15 np0005464214 nova_compute[260022]: 2025-10-01 14:06:15.872 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 10:06:17 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1901: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:06:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 10:06:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 10:06:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 10:06:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 10:06:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 10:06:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 10:06:17 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 10:06:19 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1902: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:06:21 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1903: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:06:22 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 10:06:23 np0005464214 nova_compute[260022]: 2025-10-01 14:06:23.345 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 10:06:23 np0005464214 nova_compute[260022]: 2025-10-01 14:06:23.346 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  1 10:06:23 np0005464214 nova_compute[260022]: 2025-10-01 14:06:23.346 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 10:06:23 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1904: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:06:23 np0005464214 nova_compute[260022]: 2025-10-01 14:06:23.405 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 10:06:23 np0005464214 nova_compute[260022]: 2025-10-01 14:06:23.406 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 10:06:23 np0005464214 nova_compute[260022]: 2025-10-01 14:06:23.406 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 10:06:23 np0005464214 nova_compute[260022]: 2025-10-01 14:06:23.406 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  1 10:06:23 np0005464214 nova_compute[260022]: 2025-10-01 14:06:23.407 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 10:06:23 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  1 10:06:23 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/513407464' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  1 10:06:23 np0005464214 nova_compute[260022]: 2025-10-01 14:06:23.868 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.462s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 10:06:24 np0005464214 nova_compute[260022]: 2025-10-01 14:06:24.090 2 WARNING nova.virt.libvirt.driver [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  1 10:06:24 np0005464214 nova_compute[260022]: 2025-10-01 14:06:24.091 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5062MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  1 10:06:24 np0005464214 nova_compute[260022]: 2025-10-01 14:06:24.092 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 10:06:24 np0005464214 nova_compute[260022]: 2025-10-01 14:06:24.092 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 10:06:24 np0005464214 nova_compute[260022]: 2025-10-01 14:06:24.297 2 INFO nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Instance 3d91db2b-812b-47ea-a0f8-0384b4c68597 has allocations against this compute host but is not found in the database.#033[00m
Oct  1 10:06:24 np0005464214 nova_compute[260022]: 2025-10-01 14:06:24.317 2 INFO nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Instance 84424f30-80d3-425e-b60f-86809ad3c076 has allocations against this compute host but is not found in the database.#033[00m
Oct  1 10:06:24 np0005464214 nova_compute[260022]: 2025-10-01 14:06:24.318 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  1 10:06:24 np0005464214 nova_compute[260022]: 2025-10-01 14:06:24.318 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  1 10:06:24 np0005464214 nova_compute[260022]: 2025-10-01 14:06:24.380 2 DEBUG nova.scheduler.client.report [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Refreshing inventories for resource provider c1b9017d-7e6f-44ea-9ee2-bc19313d736f _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Oct  1 10:06:24 np0005464214 nova_compute[260022]: 2025-10-01 14:06:24.405 2 DEBUG nova.scheduler.client.report [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Updating ProviderTree inventory for provider c1b9017d-7e6f-44ea-9ee2-bc19313d736f from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Oct  1 10:06:24 np0005464214 nova_compute[260022]: 2025-10-01 14:06:24.406 2 DEBUG nova.compute.provider_tree [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Updating inventory in ProviderTree for provider c1b9017d-7e6f-44ea-9ee2-bc19313d736f with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Oct  1 10:06:24 np0005464214 nova_compute[260022]: 2025-10-01 14:06:24.422 2 DEBUG nova.scheduler.client.report [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Refreshing aggregate associations for resource provider c1b9017d-7e6f-44ea-9ee2-bc19313d736f, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Oct  1 10:06:24 np0005464214 nova_compute[260022]: 2025-10-01 14:06:24.446 2 DEBUG nova.scheduler.client.report [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Refreshing trait associations for resource provider c1b9017d-7e6f-44ea-9ee2-bc19313d736f, traits: HW_CPU_X86_SSE42,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_BMI,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_SSE2,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_BMI2,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_RESCUE_BFV,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NODE,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_F16C,HW_CPU_X86_MMX,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_SSE41,COMPUTE_DEVICE_TAGGING,COMPUTE_VOLUME_EXTEND,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_AVX,HW_CPU_X86_ABM,HW_CPU_X86_CLMUL,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_SVM,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_SECURITY_TPM_2_0,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_STORAGE_BUS_FDC,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_AESNI,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_ACCELERATORS,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_AMD_SVM,HW_CPU_X86_AVX2,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_FMA3,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_SSE,HW_CPU_X86_SHA,HW_CPU_X86_SSSE3,HW_CPU_X86_SSE4A _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Oct  1 10:06:24 np0005464214 nova_compute[260022]: 2025-10-01 14:06:24.488 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 10:06:24 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  1 10:06:24 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1586880281' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  1 10:06:24 np0005464214 nova_compute[260022]: 2025-10-01 14:06:24.917 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.429s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 10:06:24 np0005464214 nova_compute[260022]: 2025-10-01 14:06:24.924 2 DEBUG nova.compute.provider_tree [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Inventory has not changed in ProviderTree for provider: c1b9017d-7e6f-44ea-9ee2-bc19313d736f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  1 10:06:24 np0005464214 nova_compute[260022]: 2025-10-01 14:06:24.965 2 DEBUG nova.scheduler.client.report [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Inventory has not changed for provider c1b9017d-7e6f-44ea-9ee2-bc19313d736f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  1 10:06:24 np0005464214 nova_compute[260022]: 2025-10-01 14:06:24.966 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  1 10:06:24 np0005464214 nova_compute[260022]: 2025-10-01 14:06:24.966 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.874s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 10:06:25 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1905: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:06:27 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1906: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:06:27 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 10:06:27 np0005464214 nova_compute[260022]: 2025-10-01 14:06:27.966 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 10:06:29 np0005464214 nova_compute[260022]: 2025-10-01 14:06:29.346 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 10:06:29 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1907: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:06:30 np0005464214 nova_compute[260022]: 2025-10-01 14:06:30.341 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 10:06:31 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1908: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:06:31 np0005464214 podman[299450]: 2025-10-01 14:06:31.551459463 +0000 UTC m=+0.085812137 container health_status c13c85560319b246b9406aed1b6b3015b2c424d3ffff37d0301b6634f5976b0d (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20250923, org.label-schema.license=GPLv2, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=iscsid, tcib_managed=true)
Oct  1 10:06:31 np0005464214 podman[299451]: 2025-10-01 14:06:31.565288512 +0000 UTC m=+0.094525222 container health_status dfa509645741d50db256c392f2c7e50ef8a1653f296eb853aefbe3d2ba4746c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c, org.label-schema.build-date=20250923, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct  1 10:06:31 np0005464214 podman[299448]: 2025-10-01 14:06:31.565592122 +0000 UTC m=+0.109554500 container health_status 583cde33fa111e3f81ff9a7adf51b7bee43cd123d54d60383b2d474b3b5066ad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20250923, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.vendor=CentOS)
Oct  1 10:06:31 np0005464214 podman[299449]: 2025-10-01 14:06:31.576722355 +0000 UTC m=+0.116662846 container health_status a1dc94033b3cdaf0c1939938a446bc6911aa0e2bf0fa0c22c3e618390e8d96e1 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20250923, org.label-schema.license=GPLv2, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, config_id=multipathd)
Oct  1 10:06:32 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 10:06:33 np0005464214 nova_compute[260022]: 2025-10-01 14:06:33.344 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 10:06:33 np0005464214 nova_compute[260022]: 2025-10-01 14:06:33.345 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  1 10:06:33 np0005464214 nova_compute[260022]: 2025-10-01 14:06:33.345 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  1 10:06:33 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1909: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:06:33 np0005464214 nova_compute[260022]: 2025-10-01 14:06:33.378 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  1 10:06:33 np0005464214 nova_compute[260022]: 2025-10-01 14:06:33.379 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 10:06:34 np0005464214 nova_compute[260022]: 2025-10-01 14:06:34.345 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 10:06:35 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1910: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:06:35 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 14:06:35.473 161890 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ba:b7:39 10.100.0.2 2001:db8::f816:3eff:feba:b739'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:feba:b739/64', 'neutron:device_id': 'ovnmeta-6b3c8992-1807-49a1-9a57-c5829337f33a', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6b3c8992-1807-49a1-9a57-c5829337f33a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b7cb470b3bf042fe90ce061f7c990de4', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=411903f3-2feb-4b6b-97c8-847900bcae09, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=f15a5db8-1914-4c13-b5ae-3d12d5ed5f17) old=Port_Binding(mac=['fa:16:3e:ba:b7:39 10.100.0.2'], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-6b3c8992-1807-49a1-9a57-c5829337f33a', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6b3c8992-1807-49a1-9a57-c5829337f33a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b7cb470b3bf042fe90ce061f7c990de4', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  1 10:06:35 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 14:06:35.475 161890 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port f15a5db8-1914-4c13-b5ae-3d12d5ed5f17 in datapath 6b3c8992-1807-49a1-9a57-c5829337f33a updated#033[00m
Oct  1 10:06:35 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 14:06:35.477 161890 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 6b3c8992-1807-49a1-9a57-c5829337f33a, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  1 10:06:35 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 14:06:35.478 291014 DEBUG oslo.privsep.daemon [-] privsep: reply[a91c30cc-e38f-49b6-8ceb-fc1079b3ab3c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  1 10:06:36 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 14:06:36.757 161890 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=28, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'd2:60:33', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '86:05:a5:2a:6f:f1'}, ipsec=False) old=SB_Global(nb_cfg=27) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  1 10:06:36 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 14:06:36.758 161890 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  1 10:06:37 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1911: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:06:37 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 10:06:39 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1912: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:06:41 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1913: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:06:41 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 14:06:41.761 161890 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=7280030e-2ba6-406c-9fae-f8284a927c47, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '28'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  1 10:06:42 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  1 10:06:42 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  1 10:06:42 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  1 10:06:42 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  1 10:06:42 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  1 10:06:42 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 10:06:42 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev 5447307d-0487-4179-a998-aa3fca53fcba does not exist
Oct  1 10:06:42 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev eed51320-96ad-4143-adf8-5309d262ca95 does not exist
Oct  1 10:06:42 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev fa8d65a1-403a-4135-bb22-db8103926537 does not exist
Oct  1 10:06:42 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  1 10:06:42 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  1 10:06:42 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  1 10:06:42 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  1 10:06:42 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  1 10:06:42 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  1 10:06:42 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  1 10:06:42 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 10:06:42 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  1 10:06:42 np0005464214 podman[299794]: 2025-10-01 14:06:42.784226475 +0000 UTC m=+0.045387431 container create c8fb9f8c007ce9d3756cc78420149070802599cf30386deac9ebc5b168997961 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_beaver, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct  1 10:06:42 np0005464214 systemd[1]: Started libpod-conmon-c8fb9f8c007ce9d3756cc78420149070802599cf30386deac9ebc5b168997961.scope.
Oct  1 10:06:42 np0005464214 podman[299794]: 2025-10-01 14:06:42.767193525 +0000 UTC m=+0.028354501 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 10:06:42 np0005464214 systemd[1]: Started libcrun container.
Oct  1 10:06:42 np0005464214 podman[299794]: 2025-10-01 14:06:42.885332936 +0000 UTC m=+0.146493892 container init c8fb9f8c007ce9d3756cc78420149070802599cf30386deac9ebc5b168997961 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_beaver, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 10:06:42 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 10:06:42 np0005464214 podman[299794]: 2025-10-01 14:06:42.89867211 +0000 UTC m=+0.159833066 container start c8fb9f8c007ce9d3756cc78420149070802599cf30386deac9ebc5b168997961 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_beaver, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct  1 10:06:42 np0005464214 podman[299794]: 2025-10-01 14:06:42.901603193 +0000 UTC m=+0.162764149 container attach c8fb9f8c007ce9d3756cc78420149070802599cf30386deac9ebc5b168997961 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_beaver, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef)
Oct  1 10:06:42 np0005464214 nice_beaver[299810]: 167 167
Oct  1 10:06:42 np0005464214 systemd[1]: libpod-c8fb9f8c007ce9d3756cc78420149070802599cf30386deac9ebc5b168997961.scope: Deactivated successfully.
Oct  1 10:06:42 np0005464214 podman[299794]: 2025-10-01 14:06:42.909548735 +0000 UTC m=+0.170709691 container died c8fb9f8c007ce9d3756cc78420149070802599cf30386deac9ebc5b168997961 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_beaver, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 10:06:42 np0005464214 systemd[1]: var-lib-containers-storage-overlay-9917bc67cdfb5b026732375781709f6f6328843cb91de993270ac5d4eebaee68-merged.mount: Deactivated successfully.
Oct  1 10:06:42 np0005464214 podman[299794]: 2025-10-01 14:06:42.95317442 +0000 UTC m=+0.214335376 container remove c8fb9f8c007ce9d3756cc78420149070802599cf30386deac9ebc5b168997961 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_beaver, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 10:06:42 np0005464214 systemd[1]: libpod-conmon-c8fb9f8c007ce9d3756cc78420149070802599cf30386deac9ebc5b168997961.scope: Deactivated successfully.
Oct  1 10:06:43 np0005464214 podman[299836]: 2025-10-01 14:06:43.146418696 +0000 UTC m=+0.054793820 container create 0e5e9b6b87f394b5237947463f7ee320b008e6cb303dace1aca0abb0e8e69eb4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_knuth, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Oct  1 10:06:43 np0005464214 systemd[1]: Started libpod-conmon-0e5e9b6b87f394b5237947463f7ee320b008e6cb303dace1aca0abb0e8e69eb4.scope.
Oct  1 10:06:43 np0005464214 podman[299836]: 2025-10-01 14:06:43.118230551 +0000 UTC m=+0.026605725 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 10:06:43 np0005464214 systemd[1]: Started libcrun container.
Oct  1 10:06:43 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2cbb559851fb8a860746dfb18eb8f4807706d856e6862df0c37712a574806d15/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 10:06:43 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2cbb559851fb8a860746dfb18eb8f4807706d856e6862df0c37712a574806d15/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 10:06:43 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2cbb559851fb8a860746dfb18eb8f4807706d856e6862df0c37712a574806d15/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 10:06:43 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2cbb559851fb8a860746dfb18eb8f4807706d856e6862df0c37712a574806d15/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 10:06:43 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2cbb559851fb8a860746dfb18eb8f4807706d856e6862df0c37712a574806d15/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  1 10:06:43 np0005464214 podman[299836]: 2025-10-01 14:06:43.248305061 +0000 UTC m=+0.156680225 container init 0e5e9b6b87f394b5237947463f7ee320b008e6cb303dace1aca0abb0e8e69eb4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_knuth, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 10:06:43 np0005464214 podman[299836]: 2025-10-01 14:06:43.265859118 +0000 UTC m=+0.174234242 container start 0e5e9b6b87f394b5237947463f7ee320b008e6cb303dace1aca0abb0e8e69eb4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_knuth, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 10:06:43 np0005464214 podman[299836]: 2025-10-01 14:06:43.271483667 +0000 UTC m=+0.179858831 container attach 0e5e9b6b87f394b5237947463f7ee320b008e6cb303dace1aca0abb0e8e69eb4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_knuth, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Oct  1 10:06:43 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1914: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:06:44 np0005464214 brave_knuth[299852]: --> passed data devices: 0 physical, 3 LVM
Oct  1 10:06:44 np0005464214 brave_knuth[299852]: --> relative data size: 1.0
Oct  1 10:06:44 np0005464214 brave_knuth[299852]: --> All data devices are unavailable
Oct  1 10:06:44 np0005464214 systemd[1]: libpod-0e5e9b6b87f394b5237947463f7ee320b008e6cb303dace1aca0abb0e8e69eb4.scope: Deactivated successfully.
Oct  1 10:06:44 np0005464214 podman[299836]: 2025-10-01 14:06:44.421584345 +0000 UTC m=+1.329959489 container died 0e5e9b6b87f394b5237947463f7ee320b008e6cb303dace1aca0abb0e8e69eb4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_knuth, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 10:06:44 np0005464214 systemd[1]: libpod-0e5e9b6b87f394b5237947463f7ee320b008e6cb303dace1aca0abb0e8e69eb4.scope: Consumed 1.103s CPU time.
Oct  1 10:06:44 np0005464214 systemd[1]: var-lib-containers-storage-overlay-2cbb559851fb8a860746dfb18eb8f4807706d856e6862df0c37712a574806d15-merged.mount: Deactivated successfully.
Oct  1 10:06:44 np0005464214 podman[299836]: 2025-10-01 14:06:44.498094834 +0000 UTC m=+1.406470118 container remove 0e5e9b6b87f394b5237947463f7ee320b008e6cb303dace1aca0abb0e8e69eb4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_knuth, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 10:06:44 np0005464214 systemd[1]: libpod-conmon-0e5e9b6b87f394b5237947463f7ee320b008e6cb303dace1aca0abb0e8e69eb4.scope: Deactivated successfully.
Oct  1 10:06:45 np0005464214 podman[300034]: 2025-10-01 14:06:45.24090685 +0000 UTC m=+0.050643689 container create ad162e9a7f02da8a606c5028644516fac7e5c352cc34f0578fb5f99dd9e5a7ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_lumiere, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 10:06:45 np0005464214 systemd[1]: Started libpod-conmon-ad162e9a7f02da8a606c5028644516fac7e5c352cc34f0578fb5f99dd9e5a7ba.scope.
Oct  1 10:06:45 np0005464214 podman[300034]: 2025-10-01 14:06:45.214812401 +0000 UTC m=+0.024549350 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 10:06:45 np0005464214 systemd[1]: Started libcrun container.
Oct  1 10:06:45 np0005464214 podman[300034]: 2025-10-01 14:06:45.338063655 +0000 UTC m=+0.147800594 container init ad162e9a7f02da8a606c5028644516fac7e5c352cc34f0578fb5f99dd9e5a7ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_lumiere, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct  1 10:06:45 np0005464214 podman[300034]: 2025-10-01 14:06:45.343718644 +0000 UTC m=+0.153455473 container start ad162e9a7f02da8a606c5028644516fac7e5c352cc34f0578fb5f99dd9e5a7ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_lumiere, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 10:06:45 np0005464214 podman[300034]: 2025-10-01 14:06:45.346913776 +0000 UTC m=+0.156650725 container attach ad162e9a7f02da8a606c5028644516fac7e5c352cc34f0578fb5f99dd9e5a7ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_lumiere, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 10:06:45 np0005464214 condescending_lumiere[300051]: 167 167
Oct  1 10:06:45 np0005464214 systemd[1]: libpod-ad162e9a7f02da8a606c5028644516fac7e5c352cc34f0578fb5f99dd9e5a7ba.scope: Deactivated successfully.
Oct  1 10:06:45 np0005464214 podman[300034]: 2025-10-01 14:06:45.349194578 +0000 UTC m=+0.158931427 container died ad162e9a7f02da8a606c5028644516fac7e5c352cc34f0578fb5f99dd9e5a7ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_lumiere, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 10:06:45 np0005464214 systemd[1]: var-lib-containers-storage-overlay-50ead27a6ce8af2575fff3fafef78fed186e268eed352c62384ef28f74474c94-merged.mount: Deactivated successfully.
Oct  1 10:06:45 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1915: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:06:45 np0005464214 podman[300034]: 2025-10-01 14:06:45.39552939 +0000 UTC m=+0.205266229 container remove ad162e9a7f02da8a606c5028644516fac7e5c352cc34f0578fb5f99dd9e5a7ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_lumiere, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Oct  1 10:06:45 np0005464214 systemd[1]: libpod-conmon-ad162e9a7f02da8a606c5028644516fac7e5c352cc34f0578fb5f99dd9e5a7ba.scope: Deactivated successfully.
Oct  1 10:06:45 np0005464214 podman[300074]: 2025-10-01 14:06:45.580859005 +0000 UTC m=+0.039469734 container create 4470748394aabf93609366ebd4519d151703db26625dcc3c6878e13d23ffff97 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_kalam, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct  1 10:06:45 np0005464214 systemd[1]: Started libpod-conmon-4470748394aabf93609366ebd4519d151703db26625dcc3c6878e13d23ffff97.scope.
Oct  1 10:06:45 np0005464214 systemd[1]: Started libcrun container.
Oct  1 10:06:45 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/96da54ca1fdb8aeb7ef5659613640499a50b3f117626d2af128e079c8c558b6b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 10:06:45 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/96da54ca1fdb8aeb7ef5659613640499a50b3f117626d2af128e079c8c558b6b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 10:06:45 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/96da54ca1fdb8aeb7ef5659613640499a50b3f117626d2af128e079c8c558b6b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 10:06:45 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/96da54ca1fdb8aeb7ef5659613640499a50b3f117626d2af128e079c8c558b6b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 10:06:45 np0005464214 podman[300074]: 2025-10-01 14:06:45.565551239 +0000 UTC m=+0.024162008 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 10:06:45 np0005464214 podman[300074]: 2025-10-01 14:06:45.666917867 +0000 UTC m=+0.125528616 container init 4470748394aabf93609366ebd4519d151703db26625dcc3c6878e13d23ffff97 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_kalam, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 10:06:45 np0005464214 podman[300074]: 2025-10-01 14:06:45.675758338 +0000 UTC m=+0.134369077 container start 4470748394aabf93609366ebd4519d151703db26625dcc3c6878e13d23ffff97 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_kalam, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct  1 10:06:45 np0005464214 podman[300074]: 2025-10-01 14:06:45.679824787 +0000 UTC m=+0.138435526 container attach 4470748394aabf93609366ebd4519d151703db26625dcc3c6878e13d23ffff97 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_kalam, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 10:06:46 np0005464214 hopeful_kalam[300091]: {
Oct  1 10:06:46 np0005464214 hopeful_kalam[300091]:    "0": [
Oct  1 10:06:46 np0005464214 hopeful_kalam[300091]:        {
Oct  1 10:06:46 np0005464214 hopeful_kalam[300091]:            "devices": [
Oct  1 10:06:46 np0005464214 hopeful_kalam[300091]:                "/dev/loop3"
Oct  1 10:06:46 np0005464214 hopeful_kalam[300091]:            ],
Oct  1 10:06:46 np0005464214 hopeful_kalam[300091]:            "lv_name": "ceph_lv0",
Oct  1 10:06:46 np0005464214 hopeful_kalam[300091]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  1 10:06:46 np0005464214 hopeful_kalam[300091]:            "lv_size": "21470642176",
Oct  1 10:06:46 np0005464214 hopeful_kalam[300091]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 10:06:46 np0005464214 hopeful_kalam[300091]:            "lv_uuid": "wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS",
Oct  1 10:06:46 np0005464214 hopeful_kalam[300091]:            "name": "ceph_lv0",
Oct  1 10:06:46 np0005464214 hopeful_kalam[300091]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  1 10:06:46 np0005464214 hopeful_kalam[300091]:            "tags": {
Oct  1 10:06:46 np0005464214 hopeful_kalam[300091]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  1 10:06:46 np0005464214 hopeful_kalam[300091]:                "ceph.block_uuid": "wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS",
Oct  1 10:06:46 np0005464214 hopeful_kalam[300091]:                "ceph.cephx_lockbox_secret": "",
Oct  1 10:06:46 np0005464214 hopeful_kalam[300091]:                "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 10:06:46 np0005464214 hopeful_kalam[300091]:                "ceph.cluster_name": "ceph",
Oct  1 10:06:46 np0005464214 hopeful_kalam[300091]:                "ceph.crush_device_class": "",
Oct  1 10:06:46 np0005464214 hopeful_kalam[300091]:                "ceph.encrypted": "0",
Oct  1 10:06:46 np0005464214 hopeful_kalam[300091]:                "ceph.osd_fsid": "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982",
Oct  1 10:06:46 np0005464214 hopeful_kalam[300091]:                "ceph.osd_id": "0",
Oct  1 10:06:46 np0005464214 hopeful_kalam[300091]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 10:06:46 np0005464214 hopeful_kalam[300091]:                "ceph.type": "block",
Oct  1 10:06:46 np0005464214 hopeful_kalam[300091]:                "ceph.vdo": "0"
Oct  1 10:06:46 np0005464214 hopeful_kalam[300091]:            },
Oct  1 10:06:46 np0005464214 hopeful_kalam[300091]:            "type": "block",
Oct  1 10:06:46 np0005464214 hopeful_kalam[300091]:            "vg_name": "ceph_vg0"
Oct  1 10:06:46 np0005464214 hopeful_kalam[300091]:        }
Oct  1 10:06:46 np0005464214 hopeful_kalam[300091]:    ],
Oct  1 10:06:46 np0005464214 hopeful_kalam[300091]:    "1": [
Oct  1 10:06:46 np0005464214 hopeful_kalam[300091]:        {
Oct  1 10:06:46 np0005464214 hopeful_kalam[300091]:            "devices": [
Oct  1 10:06:46 np0005464214 hopeful_kalam[300091]:                "/dev/loop4"
Oct  1 10:06:46 np0005464214 hopeful_kalam[300091]:            ],
Oct  1 10:06:46 np0005464214 hopeful_kalam[300091]:            "lv_name": "ceph_lv1",
Oct  1 10:06:46 np0005464214 hopeful_kalam[300091]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  1 10:06:46 np0005464214 hopeful_kalam[300091]:            "lv_size": "21470642176",
Oct  1 10:06:46 np0005464214 hopeful_kalam[300091]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f5852bc7-e830-489a-b8a9-42dfbbe71426,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 10:06:46 np0005464214 hopeful_kalam[300091]:            "lv_uuid": "BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY",
Oct  1 10:06:46 np0005464214 hopeful_kalam[300091]:            "name": "ceph_lv1",
Oct  1 10:06:46 np0005464214 hopeful_kalam[300091]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  1 10:06:46 np0005464214 hopeful_kalam[300091]:            "tags": {
Oct  1 10:06:46 np0005464214 hopeful_kalam[300091]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  1 10:06:46 np0005464214 hopeful_kalam[300091]:                "ceph.block_uuid": "BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY",
Oct  1 10:06:46 np0005464214 hopeful_kalam[300091]:                "ceph.cephx_lockbox_secret": "",
Oct  1 10:06:46 np0005464214 hopeful_kalam[300091]:                "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 10:06:46 np0005464214 hopeful_kalam[300091]:                "ceph.cluster_name": "ceph",
Oct  1 10:06:46 np0005464214 hopeful_kalam[300091]:                "ceph.crush_device_class": "",
Oct  1 10:06:46 np0005464214 hopeful_kalam[300091]:                "ceph.encrypted": "0",
Oct  1 10:06:46 np0005464214 hopeful_kalam[300091]:                "ceph.osd_fsid": "f5852bc7-e830-489a-b8a9-42dfbbe71426",
Oct  1 10:06:46 np0005464214 hopeful_kalam[300091]:                "ceph.osd_id": "1",
Oct  1 10:06:46 np0005464214 hopeful_kalam[300091]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 10:06:46 np0005464214 hopeful_kalam[300091]:                "ceph.type": "block",
Oct  1 10:06:46 np0005464214 hopeful_kalam[300091]:                "ceph.vdo": "0"
Oct  1 10:06:46 np0005464214 hopeful_kalam[300091]:            },
Oct  1 10:06:46 np0005464214 hopeful_kalam[300091]:            "type": "block",
Oct  1 10:06:46 np0005464214 hopeful_kalam[300091]:            "vg_name": "ceph_vg1"
Oct  1 10:06:46 np0005464214 hopeful_kalam[300091]:        }
Oct  1 10:06:46 np0005464214 hopeful_kalam[300091]:    ],
Oct  1 10:06:46 np0005464214 hopeful_kalam[300091]:    "2": [
Oct  1 10:06:46 np0005464214 hopeful_kalam[300091]:        {
Oct  1 10:06:46 np0005464214 hopeful_kalam[300091]:            "devices": [
Oct  1 10:06:46 np0005464214 hopeful_kalam[300091]:                "/dev/loop5"
Oct  1 10:06:46 np0005464214 hopeful_kalam[300091]:            ],
Oct  1 10:06:46 np0005464214 hopeful_kalam[300091]:            "lv_name": "ceph_lv2",
Oct  1 10:06:46 np0005464214 hopeful_kalam[300091]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  1 10:06:46 np0005464214 hopeful_kalam[300091]:            "lv_size": "21470642176",
Oct  1 10:06:46 np0005464214 hopeful_kalam[300091]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c4c937e2-a8a8-47c3-af37-fdedb6fff1f9,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 10:06:46 np0005464214 hopeful_kalam[300091]:            "lv_uuid": "mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK",
Oct  1 10:06:46 np0005464214 hopeful_kalam[300091]:            "name": "ceph_lv2",
Oct  1 10:06:46 np0005464214 hopeful_kalam[300091]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  1 10:06:46 np0005464214 hopeful_kalam[300091]:            "tags": {
Oct  1 10:06:46 np0005464214 hopeful_kalam[300091]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  1 10:06:46 np0005464214 hopeful_kalam[300091]:                "ceph.block_uuid": "mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK",
Oct  1 10:06:46 np0005464214 hopeful_kalam[300091]:                "ceph.cephx_lockbox_secret": "",
Oct  1 10:06:46 np0005464214 hopeful_kalam[300091]:                "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 10:06:46 np0005464214 hopeful_kalam[300091]:                "ceph.cluster_name": "ceph",
Oct  1 10:06:46 np0005464214 hopeful_kalam[300091]:                "ceph.crush_device_class": "",
Oct  1 10:06:46 np0005464214 hopeful_kalam[300091]:                "ceph.encrypted": "0",
Oct  1 10:06:46 np0005464214 hopeful_kalam[300091]:                "ceph.osd_fsid": "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9",
Oct  1 10:06:46 np0005464214 hopeful_kalam[300091]:                "ceph.osd_id": "2",
Oct  1 10:06:46 np0005464214 hopeful_kalam[300091]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 10:06:46 np0005464214 hopeful_kalam[300091]:                "ceph.type": "block",
Oct  1 10:06:46 np0005464214 hopeful_kalam[300091]:                "ceph.vdo": "0"
Oct  1 10:06:46 np0005464214 hopeful_kalam[300091]:            },
Oct  1 10:06:46 np0005464214 hopeful_kalam[300091]:            "type": "block",
Oct  1 10:06:46 np0005464214 hopeful_kalam[300091]:            "vg_name": "ceph_vg2"
Oct  1 10:06:46 np0005464214 hopeful_kalam[300091]:        }
Oct  1 10:06:46 np0005464214 hopeful_kalam[300091]:    ]
Oct  1 10:06:46 np0005464214 hopeful_kalam[300091]: }
Oct  1 10:06:46 np0005464214 systemd[1]: libpod-4470748394aabf93609366ebd4519d151703db26625dcc3c6878e13d23ffff97.scope: Deactivated successfully.
Oct  1 10:06:46 np0005464214 podman[300100]: 2025-10-01 14:06:46.514079786 +0000 UTC m=+0.043474751 container died 4470748394aabf93609366ebd4519d151703db26625dcc3c6878e13d23ffff97 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_kalam, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0)
Oct  1 10:06:46 np0005464214 systemd[1]: var-lib-containers-storage-overlay-96da54ca1fdb8aeb7ef5659613640499a50b3f117626d2af128e079c8c558b6b-merged.mount: Deactivated successfully.
Oct  1 10:06:46 np0005464214 podman[300100]: 2025-10-01 14:06:46.579868285 +0000 UTC m=+0.109263190 container remove 4470748394aabf93609366ebd4519d151703db26625dcc3c6878e13d23ffff97 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_kalam, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 10:06:46 np0005464214 systemd[1]: libpod-conmon-4470748394aabf93609366ebd4519d151703db26625dcc3c6878e13d23ffff97.scope: Deactivated successfully.
Oct  1 10:06:47 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1916: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:06:47 np0005464214 podman[300258]: 2025-10-01 14:06:47.412208283 +0000 UTC m=+0.041960494 container create 969da347c134f753d98ba09ae0946031866d281120b2b847eb7ed09ba968b040 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_chaplygin, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Oct  1 10:06:47 np0005464214 systemd[1]: Started libpod-conmon-969da347c134f753d98ba09ae0946031866d281120b2b847eb7ed09ba968b040.scope.
Oct  1 10:06:47 np0005464214 systemd[1]: Started libcrun container.
Oct  1 10:06:47 np0005464214 podman[300258]: 2025-10-01 14:06:47.39225374 +0000 UTC m=+0.022005981 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 10:06:47 np0005464214 podman[300258]: 2025-10-01 14:06:47.504302697 +0000 UTC m=+0.134054918 container init 969da347c134f753d98ba09ae0946031866d281120b2b847eb7ed09ba968b040 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_chaplygin, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 10:06:47 np0005464214 podman[300258]: 2025-10-01 14:06:47.515179993 +0000 UTC m=+0.144932194 container start 969da347c134f753d98ba09ae0946031866d281120b2b847eb7ed09ba968b040 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_chaplygin, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct  1 10:06:47 np0005464214 podman[300258]: 2025-10-01 14:06:47.519612963 +0000 UTC m=+0.149365204 container attach 969da347c134f753d98ba09ae0946031866d281120b2b847eb7ed09ba968b040 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_chaplygin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 10:06:47 np0005464214 gracious_chaplygin[300275]: 167 167
Oct  1 10:06:47 np0005464214 systemd[1]: libpod-969da347c134f753d98ba09ae0946031866d281120b2b847eb7ed09ba968b040.scope: Deactivated successfully.
Oct  1 10:06:47 np0005464214 podman[300258]: 2025-10-01 14:06:47.523555919 +0000 UTC m=+0.153308160 container died 969da347c134f753d98ba09ae0946031866d281120b2b847eb7ed09ba968b040 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_chaplygin, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct  1 10:06:47 np0005464214 systemd[1]: var-lib-containers-storage-overlay-3e77f7ca88593617ce56341d305eee3a0e01772e6567857a70ccf7973c7c0deb-merged.mount: Deactivated successfully.
Oct  1 10:06:47 np0005464214 podman[300258]: 2025-10-01 14:06:47.571602894 +0000 UTC m=+0.201355135 container remove 969da347c134f753d98ba09ae0946031866d281120b2b847eb7ed09ba968b040 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_chaplygin, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 10:06:47 np0005464214 systemd[1]: libpod-conmon-969da347c134f753d98ba09ae0946031866d281120b2b847eb7ed09ba968b040.scope: Deactivated successfully.
Oct  1 10:06:47 np0005464214 podman[300299]: 2025-10-01 14:06:47.822224452 +0000 UTC m=+0.062325160 container create 65f0cb3b6944790a7944e1685586ed04368367f3db89abecfef51db2c5d4e532 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_jepsen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True)
Oct  1 10:06:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 10:06:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 10:06:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 10:06:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 10:06:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 10:06:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 10:06:47 np0005464214 systemd[1]: Started libpod-conmon-65f0cb3b6944790a7944e1685586ed04368367f3db89abecfef51db2c5d4e532.scope.
Oct  1 10:06:47 np0005464214 podman[300299]: 2025-10-01 14:06:47.79666693 +0000 UTC m=+0.036767688 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 10:06:47 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 10:06:47 np0005464214 systemd[1]: Started libcrun container.
Oct  1 10:06:47 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b7df67c32196da2f7d0c608b183061c56e5b42b5dd8082484f4d61839147e84e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 10:06:47 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b7df67c32196da2f7d0c608b183061c56e5b42b5dd8082484f4d61839147e84e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 10:06:47 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b7df67c32196da2f7d0c608b183061c56e5b42b5dd8082484f4d61839147e84e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 10:06:47 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b7df67c32196da2f7d0c608b183061c56e5b42b5dd8082484f4d61839147e84e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 10:06:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] Optimize plan auto_2025-10-01_14:06:47
Oct  1 10:06:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  1 10:06:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] do_upmap
Oct  1 10:06:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] pools ['default.rgw.log', 'vms', 'default.rgw.meta', 'volumes', 'cephfs.cephfs.data', '.mgr', '.rgw.root', 'images', 'cephfs.cephfs.meta', 'default.rgw.control', 'backups']
Oct  1 10:06:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] prepared 0/10 changes
Oct  1 10:06:47 np0005464214 podman[300299]: 2025-10-01 14:06:47.921830274 +0000 UTC m=+0.161930962 container init 65f0cb3b6944790a7944e1685586ed04368367f3db89abecfef51db2c5d4e532 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_jepsen, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Oct  1 10:06:47 np0005464214 podman[300299]: 2025-10-01 14:06:47.92737757 +0000 UTC m=+0.167478248 container start 65f0cb3b6944790a7944e1685586ed04368367f3db89abecfef51db2c5d4e532 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_jepsen, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 10:06:47 np0005464214 podman[300299]: 2025-10-01 14:06:47.930617813 +0000 UTC m=+0.170718511 container attach 65f0cb3b6944790a7944e1685586ed04368367f3db89abecfef51db2c5d4e532 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_jepsen, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Oct  1 10:06:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  1 10:06:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  1 10:06:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  1 10:06:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  1 10:06:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  1 10:06:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  1 10:06:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  1 10:06:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  1 10:06:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  1 10:06:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  1 10:06:48 np0005464214 nova_compute[260022]: 2025-10-01 14:06:48.341 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 10:06:48 np0005464214 infallible_jepsen[300316]: {
Oct  1 10:06:48 np0005464214 infallible_jepsen[300316]:    "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982": {
Oct  1 10:06:48 np0005464214 infallible_jepsen[300316]:        "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 10:06:48 np0005464214 infallible_jepsen[300316]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  1 10:06:48 np0005464214 infallible_jepsen[300316]:        "osd_id": 0,
Oct  1 10:06:48 np0005464214 infallible_jepsen[300316]:        "osd_uuid": "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982",
Oct  1 10:06:48 np0005464214 infallible_jepsen[300316]:        "type": "bluestore"
Oct  1 10:06:48 np0005464214 infallible_jepsen[300316]:    },
Oct  1 10:06:48 np0005464214 infallible_jepsen[300316]:    "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9": {
Oct  1 10:06:48 np0005464214 infallible_jepsen[300316]:        "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 10:06:48 np0005464214 infallible_jepsen[300316]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  1 10:06:48 np0005464214 infallible_jepsen[300316]:        "osd_id": 2,
Oct  1 10:06:48 np0005464214 infallible_jepsen[300316]:        "osd_uuid": "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9",
Oct  1 10:06:48 np0005464214 infallible_jepsen[300316]:        "type": "bluestore"
Oct  1 10:06:48 np0005464214 infallible_jepsen[300316]:    },
Oct  1 10:06:48 np0005464214 infallible_jepsen[300316]:    "f5852bc7-e830-489a-b8a9-42dfbbe71426": {
Oct  1 10:06:48 np0005464214 infallible_jepsen[300316]:        "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 10:06:48 np0005464214 infallible_jepsen[300316]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  1 10:06:48 np0005464214 infallible_jepsen[300316]:        "osd_id": 1,
Oct  1 10:06:48 np0005464214 infallible_jepsen[300316]:        "osd_uuid": "f5852bc7-e830-489a-b8a9-42dfbbe71426",
Oct  1 10:06:48 np0005464214 infallible_jepsen[300316]:        "type": "bluestore"
Oct  1 10:06:48 np0005464214 infallible_jepsen[300316]:    }
Oct  1 10:06:48 np0005464214 infallible_jepsen[300316]: }
Oct  1 10:06:48 np0005464214 systemd[1]: libpod-65f0cb3b6944790a7944e1685586ed04368367f3db89abecfef51db2c5d4e532.scope: Deactivated successfully.
Oct  1 10:06:48 np0005464214 podman[300299]: 2025-10-01 14:06:48.946232061 +0000 UTC m=+1.186332799 container died 65f0cb3b6944790a7944e1685586ed04368367f3db89abecfef51db2c5d4e532 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_jepsen, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct  1 10:06:48 np0005464214 systemd[1]: var-lib-containers-storage-overlay-b7df67c32196da2f7d0c608b183061c56e5b42b5dd8082484f4d61839147e84e-merged.mount: Deactivated successfully.
Oct  1 10:06:49 np0005464214 podman[300299]: 2025-10-01 14:06:49.007376173 +0000 UTC m=+1.247476841 container remove 65f0cb3b6944790a7944e1685586ed04368367f3db89abecfef51db2c5d4e532 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_jepsen, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Oct  1 10:06:49 np0005464214 systemd[1]: libpod-conmon-65f0cb3b6944790a7944e1685586ed04368367f3db89abecfef51db2c5d4e532.scope: Deactivated successfully.
Oct  1 10:06:49 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  1 10:06:49 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 10:06:49 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  1 10:06:49 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 10:06:49 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev e1857238-f6ad-435e-93af-fd7e6175920e does not exist
Oct  1 10:06:49 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev ebd20715-e0d2-4214-9826-ba46278ce540 does not exist
Oct  1 10:06:49 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1917: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:06:50 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 10:06:50 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 10:06:51 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1918: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:06:52 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 10:06:53 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1919: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:06:55 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  1 10:06:55 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/753787332' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  1 10:06:55 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  1 10:06:55 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/753787332' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  1 10:06:55 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1920: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:06:57 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1921: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:06:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] _maybe_adjust
Oct  1 10:06:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 10:06:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  1 10:06:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 10:06:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 10:06:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 10:06:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 10:06:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 10:06:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 10:06:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 10:06:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Oct  1 10:06:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 10:06:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct  1 10:06:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 10:06:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 10:06:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 10:06:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  1 10:06:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 10:06:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  1 10:06:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 10:06:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 10:06:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 10:06:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  1 10:06:57 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 10:06:59 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1922: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:07:01 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1923: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:07:02 np0005464214 podman[300414]: 2025-10-01 14:07:02.513027134 +0000 UTC m=+0.060511812 container health_status dfa509645741d50db256c392f2c7e50ef8a1653f296eb853aefbe3d2ba4746c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, 
org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20250923, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=36bccb96575468ec919301205d8daa2c)
Oct  1 10:07:02 np0005464214 podman[300413]: 2025-10-01 14:07:02.519165609 +0000 UTC m=+0.068651200 container health_status c13c85560319b246b9406aed1b6b3015b2c424d3ffff37d0301b6634f5976b0d (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true, org.label-schema.build-date=20250923, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=iscsid, io.buildah.version=1.41.3)
Oct  1 10:07:02 np0005464214 podman[300412]: 2025-10-01 14:07:02.523463046 +0000 UTC m=+0.071947636 container health_status a1dc94033b3cdaf0c1939938a446bc6911aa0e2bf0fa0c22c3e618390e8d96e1 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20250923, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct  1 10:07:02 np0005464214 podman[300411]: 2025-10-01 14:07:02.55035929 +0000 UTC m=+0.098730266 container health_status 583cde33fa111e3f81ff9a7adf51b7bee43cd123d54d60383b2d474b3b5066ad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, managed_by=edpm_ansible, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250923, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 10:07:02 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 10:07:03 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1924: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:07:05 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1925: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:07:07 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1926: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:07:07 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 10:07:07 np0005464214 ceph-mon[74802]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #84. Immutable memtables: 0.
Oct  1 10:07:07 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:07:07.902431) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  1 10:07:07 np0005464214 ceph-mon[74802]: rocksdb: [db/flush_job.cc:856] [default] [JOB 47] Flushing memtable with next log file: 84
Oct  1 10:07:07 np0005464214 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759327627902475, "job": 47, "event": "flush_started", "num_memtables": 1, "num_entries": 1525, "num_deletes": 255, "total_data_size": 2456067, "memory_usage": 2500184, "flush_reason": "Manual Compaction"}
Oct  1 10:07:07 np0005464214 ceph-mon[74802]: rocksdb: [db/flush_job.cc:885] [default] [JOB 47] Level-0 flush table #85: started
Oct  1 10:07:07 np0005464214 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759327627921297, "cf_name": "default", "job": 47, "event": "table_file_creation", "file_number": 85, "file_size": 2411125, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 37172, "largest_seqno": 38696, "table_properties": {"data_size": 2403968, "index_size": 4228, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1861, "raw_key_size": 14334, "raw_average_key_size": 19, "raw_value_size": 2389745, "raw_average_value_size": 3260, "num_data_blocks": 189, "num_entries": 733, "num_filter_entries": 733, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759327465, "oldest_key_time": 1759327465, "file_creation_time": 1759327627, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e1fb0bd7-c6bf-40f2-8bfb-ffd039289f42", "db_session_id": "NJZTWL88H5HSB4Q4NEC9", "orig_file_number": 85, "seqno_to_time_mapping": "N/A"}}
Oct  1 10:07:07 np0005464214 ceph-mon[74802]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 47] Flush lasted 18920 microseconds, and 6859 cpu microseconds.
Oct  1 10:07:07 np0005464214 ceph-mon[74802]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  1 10:07:07 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:07:07.921352) [db/flush_job.cc:967] [default] [JOB 47] Level-0 flush table #85: 2411125 bytes OK
Oct  1 10:07:07 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:07:07.921378) [db/memtable_list.cc:519] [default] Level-0 commit table #85 started
Oct  1 10:07:07 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:07:07.922893) [db/memtable_list.cc:722] [default] Level-0 commit table #85: memtable #1 done
Oct  1 10:07:07 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:07:07.922915) EVENT_LOG_v1 {"time_micros": 1759327627922908, "job": 47, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  1 10:07:07 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:07:07.922935) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  1 10:07:07 np0005464214 ceph-mon[74802]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 47] Try to delete WAL files size 2449409, prev total WAL file size 2449409, number of live WAL files 2.
Oct  1 10:07:07 np0005464214 ceph-mon[74802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000081.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  1 10:07:07 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:07:07.924094) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0031323631' seq:72057594037927935, type:22 .. '6C6F676D0031353132' seq:0, type:0; will stop at (end)
Oct  1 10:07:07 np0005464214 ceph-mon[74802]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 48] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  1 10:07:07 np0005464214 ceph-mon[74802]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 47 Base level 0, inputs: [85(2354KB)], [83(8923KB)]
Oct  1 10:07:07 np0005464214 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759327627924146, "job": 48, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [85], "files_L6": [83], "score": -1, "input_data_size": 11549148, "oldest_snapshot_seqno": -1}
Oct  1 10:07:08 np0005464214 ceph-mon[74802]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 48] Generated table #86: 6020 keys, 11446675 bytes, temperature: kUnknown
Oct  1 10:07:08 np0005464214 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759327628014469, "cf_name": "default", "job": 48, "event": "table_file_creation", "file_number": 86, "file_size": 11446675, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11402930, "index_size": 27571, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 15109, "raw_key_size": 151817, "raw_average_key_size": 25, "raw_value_size": 11290685, "raw_average_value_size": 1875, "num_data_blocks": 1132, "num_entries": 6020, "num_filter_entries": 6020, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759324077, "oldest_key_time": 0, "file_creation_time": 1759327627, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e1fb0bd7-c6bf-40f2-8bfb-ffd039289f42", "db_session_id": "NJZTWL88H5HSB4Q4NEC9", "orig_file_number": 86, "seqno_to_time_mapping": "N/A"}}
Oct  1 10:07:08 np0005464214 ceph-mon[74802]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  1 10:07:08 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:07:08.014841) [db/compaction/compaction_job.cc:1663] [default] [JOB 48] Compacted 1@0 + 1@6 files to L6 => 11446675 bytes
Oct  1 10:07:08 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:07:08.016571) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 127.7 rd, 126.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.3, 8.7 +0.0 blob) out(10.9 +0.0 blob), read-write-amplify(9.5) write-amplify(4.7) OK, records in: 6542, records dropped: 522 output_compression: NoCompression
Oct  1 10:07:08 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:07:08.016597) EVENT_LOG_v1 {"time_micros": 1759327628016585, "job": 48, "event": "compaction_finished", "compaction_time_micros": 90433, "compaction_time_cpu_micros": 44325, "output_level": 6, "num_output_files": 1, "total_output_size": 11446675, "num_input_records": 6542, "num_output_records": 6020, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  1 10:07:08 np0005464214 ceph-mon[74802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000085.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  1 10:07:08 np0005464214 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759327628017483, "job": 48, "event": "table_file_deletion", "file_number": 85}
Oct  1 10:07:08 np0005464214 ceph-mon[74802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000083.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  1 10:07:08 np0005464214 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759327628020204, "job": 48, "event": "table_file_deletion", "file_number": 83}
Oct  1 10:07:08 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:07:07.923982) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 10:07:08 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:07:08.020311) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 10:07:08 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:07:08.020321) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 10:07:08 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:07:08.020325) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 10:07:08 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:07:08.020329) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 10:07:08 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:07:08.020333) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 10:07:09 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1927: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:07:11 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1928: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:07:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 14:07:12.333 161890 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 10:07:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 14:07:12.334 161890 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 10:07:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 14:07:12.334 161890 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 10:07:12 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 10:07:13 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1929: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:07:15 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1930: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:07:17 np0005464214 nova_compute[260022]: 2025-10-01 14:07:17.344 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 10:07:17 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1931: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:07:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 10:07:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 10:07:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 10:07:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 10:07:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 10:07:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 10:07:17 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 10:07:19 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1932: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:07:21 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1933: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:07:22 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 10:07:23 np0005464214 nova_compute[260022]: 2025-10-01 14:07:23.344 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 10:07:23 np0005464214 nova_compute[260022]: 2025-10-01 14:07:23.381 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 10:07:23 np0005464214 nova_compute[260022]: 2025-10-01 14:07:23.382 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 10:07:23 np0005464214 nova_compute[260022]: 2025-10-01 14:07:23.382 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 10:07:23 np0005464214 nova_compute[260022]: 2025-10-01 14:07:23.382 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  1 10:07:23 np0005464214 nova_compute[260022]: 2025-10-01 14:07:23.383 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 10:07:23 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1934: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:07:23 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  1 10:07:23 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/420103665' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  1 10:07:23 np0005464214 nova_compute[260022]: 2025-10-01 14:07:23.810 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.427s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 10:07:24 np0005464214 nova_compute[260022]: 2025-10-01 14:07:24.029 2 WARNING nova.virt.libvirt.driver [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  1 10:07:24 np0005464214 nova_compute[260022]: 2025-10-01 14:07:24.030 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5019MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  1 10:07:24 np0005464214 nova_compute[260022]: 2025-10-01 14:07:24.030 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 10:07:24 np0005464214 nova_compute[260022]: 2025-10-01 14:07:24.030 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 10:07:24 np0005464214 nova_compute[260022]: 2025-10-01 14:07:24.107 2 INFO nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Instance 3d91db2b-812b-47ea-a0f8-0384b4c68597 has allocations against this compute host but is not found in the database.#033[00m
Oct  1 10:07:24 np0005464214 nova_compute[260022]: 2025-10-01 14:07:24.121 2 INFO nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Instance 84424f30-80d3-425e-b60f-86809ad3c076 has allocations against this compute host but is not found in the database.#033[00m
Oct  1 10:07:24 np0005464214 nova_compute[260022]: 2025-10-01 14:07:24.122 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  1 10:07:24 np0005464214 nova_compute[260022]: 2025-10-01 14:07:24.122 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  1 10:07:24 np0005464214 nova_compute[260022]: 2025-10-01 14:07:24.170 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 10:07:24 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  1 10:07:24 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3990516151' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  1 10:07:24 np0005464214 nova_compute[260022]: 2025-10-01 14:07:24.648 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.479s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 10:07:24 np0005464214 nova_compute[260022]: 2025-10-01 14:07:24.656 2 DEBUG nova.compute.provider_tree [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Inventory has not changed in ProviderTree for provider: c1b9017d-7e6f-44ea-9ee2-bc19313d736f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  1 10:07:24 np0005464214 nova_compute[260022]: 2025-10-01 14:07:24.671 2 DEBUG nova.scheduler.client.report [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Inventory has not changed for provider c1b9017d-7e6f-44ea-9ee2-bc19313d736f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  1 10:07:24 np0005464214 nova_compute[260022]: 2025-10-01 14:07:24.673 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  1 10:07:24 np0005464214 nova_compute[260022]: 2025-10-01 14:07:24.674 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.643s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 10:07:25 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1935: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:07:26 np0005464214 nova_compute[260022]: 2025-10-01 14:07:26.674 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 10:07:26 np0005464214 nova_compute[260022]: 2025-10-01 14:07:26.675 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 10:07:26 np0005464214 nova_compute[260022]: 2025-10-01 14:07:26.675 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  1 10:07:27 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1936: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:07:27 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 10:07:29 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1937: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:07:30 np0005464214 nova_compute[260022]: 2025-10-01 14:07:30.342 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 10:07:31 np0005464214 nova_compute[260022]: 2025-10-01 14:07:31.345 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 10:07:31 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1938: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:07:32 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 10:07:33 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1939: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:07:33 np0005464214 podman[300539]: 2025-10-01 14:07:33.541866777 +0000 UTC m=+0.090784554 container health_status a1dc94033b3cdaf0c1939938a446bc6911aa0e2bf0fa0c22c3e618390e8d96e1 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20250923, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct  1 10:07:33 np0005464214 podman[300540]: 2025-10-01 14:07:33.564679262 +0000 UTC m=+0.107710662 container health_status c13c85560319b246b9406aed1b6b3015b2c424d3ffff37d0301b6634f5976b0d (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20250923, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.license=GPLv2)
Oct  1 10:07:33 np0005464214 podman[300538]: 2025-10-01 14:07:33.570177436 +0000 UTC m=+0.122275713 container health_status 583cde33fa111e3f81ff9a7adf51b7bee43cd123d54d60383b2d474b3b5066ad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20250923, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Oct  1 10:07:33 np0005464214 podman[300541]: 2025-10-01 14:07:33.570345862 +0000 UTC m=+0.104708146 container health_status dfa509645741d50db256c392f2c7e50ef8a1653f296eb853aefbe3d2ba4746c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20250923, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true, config_id=ovn_metadata_agent)
Oct  1 10:07:34 np0005464214 nova_compute[260022]: 2025-10-01 14:07:34.345 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 10:07:34 np0005464214 nova_compute[260022]: 2025-10-01 14:07:34.345 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 10:07:35 np0005464214 nova_compute[260022]: 2025-10-01 14:07:35.345 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 10:07:35 np0005464214 nova_compute[260022]: 2025-10-01 14:07:35.346 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  1 10:07:35 np0005464214 nova_compute[260022]: 2025-10-01 14:07:35.346 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  1 10:07:35 np0005464214 nova_compute[260022]: 2025-10-01 14:07:35.361 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  1 10:07:35 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1940: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:07:37 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1941: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:07:37 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 14:07:37.753 161890 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=29, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'd2:60:33', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '86:05:a5:2a:6f:f1'}, ipsec=False) old=SB_Global(nb_cfg=28) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  1 10:07:37 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 14:07:37.755 161890 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  1 10:07:37 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 10:07:38 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 14:07:38.757 161890 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=7280030e-2ba6-406c-9fae-f8284a927c47, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '29'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  1 10:07:39 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1942: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:07:41 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1943: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:07:42 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 10:07:43 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1944: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:07:45 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1945: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:07:47 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1946: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:07:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 10:07:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 10:07:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 10:07:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 10:07:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 10:07:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 10:07:47 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 10:07:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] Optimize plan auto_2025-10-01_14:07:47
Oct  1 10:07:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  1 10:07:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] do_upmap
Oct  1 10:07:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] pools ['default.rgw.control', 'default.rgw.log', 'volumes', 'vms', 'default.rgw.meta', 'backups', '.mgr', 'cephfs.cephfs.meta', '.rgw.root', 'images', 'cephfs.cephfs.data']
Oct  1 10:07:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] prepared 0/10 changes
Oct  1 10:07:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  1 10:07:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  1 10:07:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  1 10:07:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  1 10:07:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  1 10:07:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  1 10:07:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  1 10:07:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  1 10:07:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  1 10:07:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  1 10:07:49 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1947: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:07:50 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  1 10:07:50 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  1 10:07:50 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  1 10:07:50 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  1 10:07:50 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  1 10:07:50 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 10:07:50 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev c1db3218-c9a5-4fb5-b550-fe746434fd69 does not exist
Oct  1 10:07:50 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev 5346077c-4de8-482b-a31b-a4e32446bb5e does not exist
Oct  1 10:07:50 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev d4a98774-84d3-4189-ac84-56a421040d62 does not exist
Oct  1 10:07:50 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  1 10:07:50 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  1 10:07:50 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  1 10:07:50 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  1 10:07:50 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  1 10:07:50 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  1 10:07:50 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  1 10:07:50 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 10:07:50 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  1 10:07:51 np0005464214 podman[300888]: 2025-10-01 14:07:51.029972981 +0000 UTC m=+0.039577728 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 10:07:51 np0005464214 podman[300888]: 2025-10-01 14:07:51.151505209 +0000 UTC m=+0.161109956 container create 6abb8d761bc8664a14920733edfd78f1c61fd7062267d3edae23a444edbd08ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_heisenberg, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 10:07:51 np0005464214 systemd[1]: Started libpod-conmon-6abb8d761bc8664a14920733edfd78f1c61fd7062267d3edae23a444edbd08ff.scope.
Oct  1 10:07:51 np0005464214 systemd[1]: Started libcrun container.
Oct  1 10:07:51 np0005464214 podman[300888]: 2025-10-01 14:07:51.370413529 +0000 UTC m=+0.380018266 container init 6abb8d761bc8664a14920733edfd78f1c61fd7062267d3edae23a444edbd08ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_heisenberg, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef)
Oct  1 10:07:51 np0005464214 podman[300888]: 2025-10-01 14:07:51.383204785 +0000 UTC m=+0.392809492 container start 6abb8d761bc8664a14920733edfd78f1c61fd7062267d3edae23a444edbd08ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_heisenberg, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct  1 10:07:51 np0005464214 systemd[1]: libpod-6abb8d761bc8664a14920733edfd78f1c61fd7062267d3edae23a444edbd08ff.scope: Deactivated successfully.
Oct  1 10:07:51 np0005464214 peaceful_heisenberg[300905]: 167 167
Oct  1 10:07:51 np0005464214 conmon[300905]: conmon 6abb8d761bc8664a1492 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-6abb8d761bc8664a14920733edfd78f1c61fd7062267d3edae23a444edbd08ff.scope/container/memory.events
Oct  1 10:07:51 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1948: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:07:51 np0005464214 podman[300888]: 2025-10-01 14:07:51.452259256 +0000 UTC m=+0.461863983 container attach 6abb8d761bc8664a14920733edfd78f1c61fd7062267d3edae23a444edbd08ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_heisenberg, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct  1 10:07:51 np0005464214 podman[300888]: 2025-10-01 14:07:51.454109976 +0000 UTC m=+0.463714723 container died 6abb8d761bc8664a14920733edfd78f1c61fd7062267d3edae23a444edbd08ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_heisenberg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 10:07:51 np0005464214 systemd[1]: var-lib-containers-storage-overlay-4bc401df216f4f41e2b4fcd4aec341e32085fa744f30475eae1fb137dbcf7557-merged.mount: Deactivated successfully.
Oct  1 10:07:52 np0005464214 podman[300888]: 2025-10-01 14:07:52.013309738 +0000 UTC m=+1.022914455 container remove 6abb8d761bc8664a14920733edfd78f1c61fd7062267d3edae23a444edbd08ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_heisenberg, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  1 10:07:52 np0005464214 systemd[1]: libpod-conmon-6abb8d761bc8664a14920733edfd78f1c61fd7062267d3edae23a444edbd08ff.scope: Deactivated successfully.
Oct  1 10:07:52 np0005464214 podman[300929]: 2025-10-01 14:07:52.2584609 +0000 UTC m=+0.059183259 container create 77754da693a83d288f83b9a3e0ef8f7be0168cdfa0d89571e75dd6fae241ee77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_chatterjee, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True)
Oct  1 10:07:52 np0005464214 systemd[1]: Started libpod-conmon-77754da693a83d288f83b9a3e0ef8f7be0168cdfa0d89571e75dd6fae241ee77.scope.
Oct  1 10:07:52 np0005464214 podman[300929]: 2025-10-01 14:07:52.230189484 +0000 UTC m=+0.030911893 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 10:07:52 np0005464214 systemd[1]: Started libcrun container.
Oct  1 10:07:52 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0cc15645890cf136a5182f0fe6cacf8c8362930b2ef394d9deb83526fd2b9e0c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 10:07:52 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0cc15645890cf136a5182f0fe6cacf8c8362930b2ef394d9deb83526fd2b9e0c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 10:07:52 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0cc15645890cf136a5182f0fe6cacf8c8362930b2ef394d9deb83526fd2b9e0c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 10:07:52 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0cc15645890cf136a5182f0fe6cacf8c8362930b2ef394d9deb83526fd2b9e0c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 10:07:52 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0cc15645890cf136a5182f0fe6cacf8c8362930b2ef394d9deb83526fd2b9e0c/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  1 10:07:52 np0005464214 podman[300929]: 2025-10-01 14:07:52.368015898 +0000 UTC m=+0.168738227 container init 77754da693a83d288f83b9a3e0ef8f7be0168cdfa0d89571e75dd6fae241ee77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_chatterjee, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef)
Oct  1 10:07:52 np0005464214 podman[300929]: 2025-10-01 14:07:52.38694567 +0000 UTC m=+0.187668029 container start 77754da693a83d288f83b9a3e0ef8f7be0168cdfa0d89571e75dd6fae241ee77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_chatterjee, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 10:07:52 np0005464214 podman[300929]: 2025-10-01 14:07:52.426116933 +0000 UTC m=+0.226839272 container attach 77754da693a83d288f83b9a3e0ef8f7be0168cdfa0d89571e75dd6fae241ee77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_chatterjee, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 10:07:52 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 10:07:52 np0005464214 ceph-mon[74802]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #87. Immutable memtables: 0.
Oct  1 10:07:52 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:07:52.923011) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  1 10:07:52 np0005464214 ceph-mon[74802]: rocksdb: [db/flush_job.cc:856] [default] [JOB 49] Flushing memtable with next log file: 87
Oct  1 10:07:52 np0005464214 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759327672923038, "job": 49, "event": "flush_started", "num_memtables": 1, "num_entries": 591, "num_deletes": 250, "total_data_size": 634701, "memory_usage": 644736, "flush_reason": "Manual Compaction"}
Oct  1 10:07:52 np0005464214 ceph-mon[74802]: rocksdb: [db/flush_job.cc:885] [default] [JOB 49] Level-0 flush table #88: started
Oct  1 10:07:52 np0005464214 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759327672934464, "cf_name": "default", "job": 49, "event": "table_file_creation", "file_number": 88, "file_size": 418118, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 38697, "largest_seqno": 39287, "table_properties": {"data_size": 415324, "index_size": 766, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 965, "raw_key_size": 7550, "raw_average_key_size": 20, "raw_value_size": 409540, "raw_average_value_size": 1109, "num_data_blocks": 35, "num_entries": 369, "num_filter_entries": 369, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759327628, "oldest_key_time": 1759327628, "file_creation_time": 1759327672, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e1fb0bd7-c6bf-40f2-8bfb-ffd039289f42", "db_session_id": "NJZTWL88H5HSB4Q4NEC9", "orig_file_number": 88, "seqno_to_time_mapping": "N/A"}}
Oct  1 10:07:52 np0005464214 ceph-mon[74802]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 49] Flush lasted 11505 microseconds, and 2062 cpu microseconds.
Oct  1 10:07:52 np0005464214 ceph-mon[74802]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  1 10:07:52 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:07:52.934511) [db/flush_job.cc:967] [default] [JOB 49] Level-0 flush table #88: 418118 bytes OK
Oct  1 10:07:52 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:07:52.934532) [db/memtable_list.cc:519] [default] Level-0 commit table #88 started
Oct  1 10:07:52 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:07:52.937196) [db/memtable_list.cc:722] [default] Level-0 commit table #88: memtable #1 done
Oct  1 10:07:52 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:07:52.937213) EVENT_LOG_v1 {"time_micros": 1759327672937207, "job": 49, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  1 10:07:52 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:07:52.937231) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  1 10:07:52 np0005464214 ceph-mon[74802]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 49] Try to delete WAL files size 631466, prev total WAL file size 631466, number of live WAL files 2.
Oct  1 10:07:52 np0005464214 ceph-mon[74802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000084.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  1 10:07:52 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:07:52.937983) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740031353032' seq:72057594037927935, type:22 .. '6D6772737461740031373533' seq:0, type:0; will stop at (end)
Oct  1 10:07:52 np0005464214 ceph-mon[74802]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 50] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  1 10:07:52 np0005464214 ceph-mon[74802]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 49 Base level 0, inputs: [88(408KB)], [86(10MB)]
Oct  1 10:07:52 np0005464214 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759327672938099, "job": 50, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [88], "files_L6": [86], "score": -1, "input_data_size": 11864793, "oldest_snapshot_seqno": -1}
Oct  1 10:07:53 np0005464214 ceph-mon[74802]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 50] Generated table #89: 5896 keys, 8787099 bytes, temperature: kUnknown
Oct  1 10:07:53 np0005464214 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759327673002310, "cf_name": "default", "job": 50, "event": "table_file_creation", "file_number": 89, "file_size": 8787099, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8748509, "index_size": 22736, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 14789, "raw_key_size": 149426, "raw_average_key_size": 25, "raw_value_size": 8642713, "raw_average_value_size": 1465, "num_data_blocks": 932, "num_entries": 5896, "num_filter_entries": 5896, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759324077, "oldest_key_time": 0, "file_creation_time": 1759327672, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e1fb0bd7-c6bf-40f2-8bfb-ffd039289f42", "db_session_id": "NJZTWL88H5HSB4Q4NEC9", "orig_file_number": 89, "seqno_to_time_mapping": "N/A"}}
Oct  1 10:07:53 np0005464214 ceph-mon[74802]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  1 10:07:53 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:07:53.002574) [db/compaction/compaction_job.cc:1663] [default] [JOB 50] Compacted 1@0 + 1@6 files to L6 => 8787099 bytes
Oct  1 10:07:53 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:07:53.005367) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 184.5 rd, 136.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.4, 10.9 +0.0 blob) out(8.4 +0.0 blob), read-write-amplify(49.4) write-amplify(21.0) OK, records in: 6389, records dropped: 493 output_compression: NoCompression
Oct  1 10:07:53 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:07:53.005429) EVENT_LOG_v1 {"time_micros": 1759327673005390, "job": 50, "event": "compaction_finished", "compaction_time_micros": 64295, "compaction_time_cpu_micros": 22058, "output_level": 6, "num_output_files": 1, "total_output_size": 8787099, "num_input_records": 6389, "num_output_records": 5896, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  1 10:07:53 np0005464214 ceph-mon[74802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000088.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  1 10:07:53 np0005464214 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759327673005839, "job": 50, "event": "table_file_deletion", "file_number": 88}
Oct  1 10:07:53 np0005464214 ceph-mon[74802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000086.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  1 10:07:53 np0005464214 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759327673007566, "job": 50, "event": "table_file_deletion", "file_number": 86}
Oct  1 10:07:53 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:07:52.937922) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 10:07:53 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:07:53.007723) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 10:07:53 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:07:53.007831) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 10:07:53 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:07:53.007834) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 10:07:53 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:07:53.007836) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 10:07:53 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:07:53.007838) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 10:07:53 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1949: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:07:53 np0005464214 silly_chatterjee[300946]: --> passed data devices: 0 physical, 3 LVM
Oct  1 10:07:53 np0005464214 silly_chatterjee[300946]: --> relative data size: 1.0
Oct  1 10:07:53 np0005464214 silly_chatterjee[300946]: --> All data devices are unavailable
Oct  1 10:07:53 np0005464214 systemd[1]: libpod-77754da693a83d288f83b9a3e0ef8f7be0168cdfa0d89571e75dd6fae241ee77.scope: Deactivated successfully.
Oct  1 10:07:53 np0005464214 systemd[1]: libpod-77754da693a83d288f83b9a3e0ef8f7be0168cdfa0d89571e75dd6fae241ee77.scope: Consumed 1.074s CPU time.
Oct  1 10:07:53 np0005464214 podman[300929]: 2025-10-01 14:07:53.510696404 +0000 UTC m=+1.311418763 container died 77754da693a83d288f83b9a3e0ef8f7be0168cdfa0d89571e75dd6fae241ee77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_chatterjee, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct  1 10:07:53 np0005464214 systemd[1]: var-lib-containers-storage-overlay-0cc15645890cf136a5182f0fe6cacf8c8362930b2ef394d9deb83526fd2b9e0c-merged.mount: Deactivated successfully.
Oct  1 10:07:53 np0005464214 podman[300929]: 2025-10-01 14:07:53.765578246 +0000 UTC m=+1.566300605 container remove 77754da693a83d288f83b9a3e0ef8f7be0168cdfa0d89571e75dd6fae241ee77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_chatterjee, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 10:07:53 np0005464214 systemd[1]: libpod-conmon-77754da693a83d288f83b9a3e0ef8f7be0168cdfa0d89571e75dd6fae241ee77.scope: Deactivated successfully.
Oct  1 10:07:54 np0005464214 podman[301128]: 2025-10-01 14:07:54.546172676 +0000 UTC m=+0.034028572 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 10:07:54 np0005464214 podman[301128]: 2025-10-01 14:07:54.71483281 +0000 UTC m=+0.202688686 container create e48c288c1e77a7210583b4924c47735d8d704cbc0b1dda1f680fb0738c18544d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_mayer, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 10:07:54 np0005464214 systemd[1]: Started libpod-conmon-e48c288c1e77a7210583b4924c47735d8d704cbc0b1dda1f680fb0738c18544d.scope.
Oct  1 10:07:54 np0005464214 systemd[1]: Started libcrun container.
Oct  1 10:07:54 np0005464214 podman[301128]: 2025-10-01 14:07:54.900909538 +0000 UTC m=+0.388765484 container init e48c288c1e77a7210583b4924c47735d8d704cbc0b1dda1f680fb0738c18544d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_mayer, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct  1 10:07:54 np0005464214 podman[301128]: 2025-10-01 14:07:54.912492435 +0000 UTC m=+0.400348311 container start e48c288c1e77a7210583b4924c47735d8d704cbc0b1dda1f680fb0738c18544d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_mayer, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct  1 10:07:54 np0005464214 magical_mayer[301144]: 167 167
Oct  1 10:07:54 np0005464214 systemd[1]: libpod-e48c288c1e77a7210583b4924c47735d8d704cbc0b1dda1f680fb0738c18544d.scope: Deactivated successfully.
Oct  1 10:07:54 np0005464214 podman[301128]: 2025-10-01 14:07:54.943148609 +0000 UTC m=+0.431004495 container attach e48c288c1e77a7210583b4924c47735d8d704cbc0b1dda1f680fb0738c18544d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_mayer, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct  1 10:07:54 np0005464214 podman[301128]: 2025-10-01 14:07:54.943974094 +0000 UTC m=+0.431829980 container died e48c288c1e77a7210583b4924c47735d8d704cbc0b1dda1f680fb0738c18544d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_mayer, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct  1 10:07:55 np0005464214 systemd[1]: var-lib-containers-storage-overlay-b80d57f83e9a604e03ba176006739446e6fee0a0bb4ec2c6002521f92cd35107-merged.mount: Deactivated successfully.
Oct  1 10:07:55 np0005464214 podman[301128]: 2025-10-01 14:07:55.17095368 +0000 UTC m=+0.658809526 container remove e48c288c1e77a7210583b4924c47735d8d704cbc0b1dda1f680fb0738c18544d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_mayer, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  1 10:07:55 np0005464214 systemd[1]: libpod-conmon-e48c288c1e77a7210583b4924c47735d8d704cbc0b1dda1f680fb0738c18544d.scope: Deactivated successfully.
Oct  1 10:07:55 np0005464214 podman[301170]: 2025-10-01 14:07:55.346514773 +0000 UTC m=+0.037369106 container create 564731c42090395f4d72ae0a12b437780c6888967108a54bb45167376fe9f947 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_dijkstra, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 10:07:55 np0005464214 systemd[1]: Started libpod-conmon-564731c42090395f4d72ae0a12b437780c6888967108a54bb45167376fe9f947.scope.
Oct  1 10:07:55 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1950: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:07:55 np0005464214 systemd[1]: Started libcrun container.
Oct  1 10:07:55 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d445d6088029acdf79cf27bce05e34f88475366d403d69c68e3d1514bd15f61e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 10:07:55 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d445d6088029acdf79cf27bce05e34f88475366d403d69c68e3d1514bd15f61e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 10:07:55 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d445d6088029acdf79cf27bce05e34f88475366d403d69c68e3d1514bd15f61e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 10:07:55 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d445d6088029acdf79cf27bce05e34f88475366d403d69c68e3d1514bd15f61e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 10:07:55 np0005464214 podman[301170]: 2025-10-01 14:07:55.328108989 +0000 UTC m=+0.018963352 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 10:07:55 np0005464214 podman[301170]: 2025-10-01 14:07:55.45757596 +0000 UTC m=+0.148430303 container init 564731c42090395f4d72ae0a12b437780c6888967108a54bb45167376fe9f947 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_dijkstra, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 10:07:55 np0005464214 podman[301170]: 2025-10-01 14:07:55.46859801 +0000 UTC m=+0.159452373 container start 564731c42090395f4d72ae0a12b437780c6888967108a54bb45167376fe9f947 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_dijkstra, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 10:07:55 np0005464214 podman[301170]: 2025-10-01 14:07:55.475990034 +0000 UTC m=+0.166844387 container attach 564731c42090395f4d72ae0a12b437780c6888967108a54bb45167376fe9f947 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_dijkstra, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 10:07:56 np0005464214 confident_dijkstra[301187]: {
Oct  1 10:07:56 np0005464214 confident_dijkstra[301187]:    "0": [
Oct  1 10:07:56 np0005464214 confident_dijkstra[301187]:        {
Oct  1 10:07:56 np0005464214 confident_dijkstra[301187]:            "devices": [
Oct  1 10:07:56 np0005464214 confident_dijkstra[301187]:                "/dev/loop3"
Oct  1 10:07:56 np0005464214 confident_dijkstra[301187]:            ],
Oct  1 10:07:56 np0005464214 confident_dijkstra[301187]:            "lv_name": "ceph_lv0",
Oct  1 10:07:56 np0005464214 confident_dijkstra[301187]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  1 10:07:56 np0005464214 confident_dijkstra[301187]:            "lv_size": "21470642176",
Oct  1 10:07:56 np0005464214 confident_dijkstra[301187]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 10:07:56 np0005464214 confident_dijkstra[301187]:            "lv_uuid": "wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS",
Oct  1 10:07:56 np0005464214 confident_dijkstra[301187]:            "name": "ceph_lv0",
Oct  1 10:07:56 np0005464214 confident_dijkstra[301187]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  1 10:07:56 np0005464214 confident_dijkstra[301187]:            "tags": {
Oct  1 10:07:56 np0005464214 confident_dijkstra[301187]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  1 10:07:56 np0005464214 confident_dijkstra[301187]:                "ceph.block_uuid": "wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS",
Oct  1 10:07:56 np0005464214 confident_dijkstra[301187]:                "ceph.cephx_lockbox_secret": "",
Oct  1 10:07:56 np0005464214 confident_dijkstra[301187]:                "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 10:07:56 np0005464214 confident_dijkstra[301187]:                "ceph.cluster_name": "ceph",
Oct  1 10:07:56 np0005464214 confident_dijkstra[301187]:                "ceph.crush_device_class": "",
Oct  1 10:07:56 np0005464214 confident_dijkstra[301187]:                "ceph.encrypted": "0",
Oct  1 10:07:56 np0005464214 confident_dijkstra[301187]:                "ceph.osd_fsid": "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982",
Oct  1 10:07:56 np0005464214 confident_dijkstra[301187]:                "ceph.osd_id": "0",
Oct  1 10:07:56 np0005464214 confident_dijkstra[301187]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 10:07:56 np0005464214 confident_dijkstra[301187]:                "ceph.type": "block",
Oct  1 10:07:56 np0005464214 confident_dijkstra[301187]:                "ceph.vdo": "0"
Oct  1 10:07:56 np0005464214 confident_dijkstra[301187]:            },
Oct  1 10:07:56 np0005464214 confident_dijkstra[301187]:            "type": "block",
Oct  1 10:07:56 np0005464214 confident_dijkstra[301187]:            "vg_name": "ceph_vg0"
Oct  1 10:07:56 np0005464214 confident_dijkstra[301187]:        }
Oct  1 10:07:56 np0005464214 confident_dijkstra[301187]:    ],
Oct  1 10:07:56 np0005464214 confident_dijkstra[301187]:    "1": [
Oct  1 10:07:56 np0005464214 confident_dijkstra[301187]:        {
Oct  1 10:07:56 np0005464214 confident_dijkstra[301187]:            "devices": [
Oct  1 10:07:56 np0005464214 confident_dijkstra[301187]:                "/dev/loop4"
Oct  1 10:07:56 np0005464214 confident_dijkstra[301187]:            ],
Oct  1 10:07:56 np0005464214 confident_dijkstra[301187]:            "lv_name": "ceph_lv1",
Oct  1 10:07:56 np0005464214 confident_dijkstra[301187]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  1 10:07:56 np0005464214 confident_dijkstra[301187]:            "lv_size": "21470642176",
Oct  1 10:07:56 np0005464214 confident_dijkstra[301187]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f5852bc7-e830-489a-b8a9-42dfbbe71426,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 10:07:56 np0005464214 confident_dijkstra[301187]:            "lv_uuid": "BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY",
Oct  1 10:07:56 np0005464214 confident_dijkstra[301187]:            "name": "ceph_lv1",
Oct  1 10:07:56 np0005464214 confident_dijkstra[301187]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  1 10:07:56 np0005464214 confident_dijkstra[301187]:            "tags": {
Oct  1 10:07:56 np0005464214 confident_dijkstra[301187]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  1 10:07:56 np0005464214 confident_dijkstra[301187]:                "ceph.block_uuid": "BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY",
Oct  1 10:07:56 np0005464214 confident_dijkstra[301187]:                "ceph.cephx_lockbox_secret": "",
Oct  1 10:07:56 np0005464214 confident_dijkstra[301187]:                "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 10:07:56 np0005464214 confident_dijkstra[301187]:                "ceph.cluster_name": "ceph",
Oct  1 10:07:56 np0005464214 confident_dijkstra[301187]:                "ceph.crush_device_class": "",
Oct  1 10:07:56 np0005464214 confident_dijkstra[301187]:                "ceph.encrypted": "0",
Oct  1 10:07:56 np0005464214 confident_dijkstra[301187]:                "ceph.osd_fsid": "f5852bc7-e830-489a-b8a9-42dfbbe71426",
Oct  1 10:07:56 np0005464214 confident_dijkstra[301187]:                "ceph.osd_id": "1",
Oct  1 10:07:56 np0005464214 confident_dijkstra[301187]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 10:07:56 np0005464214 confident_dijkstra[301187]:                "ceph.type": "block",
Oct  1 10:07:56 np0005464214 confident_dijkstra[301187]:                "ceph.vdo": "0"
Oct  1 10:07:56 np0005464214 confident_dijkstra[301187]:            },
Oct  1 10:07:56 np0005464214 confident_dijkstra[301187]:            "type": "block",
Oct  1 10:07:56 np0005464214 confident_dijkstra[301187]:            "vg_name": "ceph_vg1"
Oct  1 10:07:56 np0005464214 confident_dijkstra[301187]:        }
Oct  1 10:07:56 np0005464214 confident_dijkstra[301187]:    ],
Oct  1 10:07:56 np0005464214 confident_dijkstra[301187]:    "2": [
Oct  1 10:07:56 np0005464214 confident_dijkstra[301187]:        {
Oct  1 10:07:56 np0005464214 confident_dijkstra[301187]:            "devices": [
Oct  1 10:07:56 np0005464214 confident_dijkstra[301187]:                "/dev/loop5"
Oct  1 10:07:56 np0005464214 confident_dijkstra[301187]:            ],
Oct  1 10:07:56 np0005464214 confident_dijkstra[301187]:            "lv_name": "ceph_lv2",
Oct  1 10:07:56 np0005464214 confident_dijkstra[301187]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  1 10:07:56 np0005464214 confident_dijkstra[301187]:            "lv_size": "21470642176",
Oct  1 10:07:56 np0005464214 confident_dijkstra[301187]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c4c937e2-a8a8-47c3-af37-fdedb6fff1f9,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 10:07:56 np0005464214 confident_dijkstra[301187]:            "lv_uuid": "mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK",
Oct  1 10:07:56 np0005464214 confident_dijkstra[301187]:            "name": "ceph_lv2",
Oct  1 10:07:56 np0005464214 confident_dijkstra[301187]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  1 10:07:56 np0005464214 confident_dijkstra[301187]:            "tags": {
Oct  1 10:07:56 np0005464214 confident_dijkstra[301187]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  1 10:07:56 np0005464214 confident_dijkstra[301187]:                "ceph.block_uuid": "mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK",
Oct  1 10:07:56 np0005464214 confident_dijkstra[301187]:                "ceph.cephx_lockbox_secret": "",
Oct  1 10:07:56 np0005464214 confident_dijkstra[301187]:                "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 10:07:56 np0005464214 confident_dijkstra[301187]:                "ceph.cluster_name": "ceph",
Oct  1 10:07:56 np0005464214 confident_dijkstra[301187]:                "ceph.crush_device_class": "",
Oct  1 10:07:56 np0005464214 confident_dijkstra[301187]:                "ceph.encrypted": "0",
Oct  1 10:07:56 np0005464214 confident_dijkstra[301187]:                "ceph.osd_fsid": "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9",
Oct  1 10:07:56 np0005464214 confident_dijkstra[301187]:                "ceph.osd_id": "2",
Oct  1 10:07:56 np0005464214 confident_dijkstra[301187]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 10:07:56 np0005464214 confident_dijkstra[301187]:                "ceph.type": "block",
Oct  1 10:07:56 np0005464214 confident_dijkstra[301187]:                "ceph.vdo": "0"
Oct  1 10:07:56 np0005464214 confident_dijkstra[301187]:            },
Oct  1 10:07:56 np0005464214 confident_dijkstra[301187]:            "type": "block",
Oct  1 10:07:56 np0005464214 confident_dijkstra[301187]:            "vg_name": "ceph_vg2"
Oct  1 10:07:56 np0005464214 confident_dijkstra[301187]:        }
Oct  1 10:07:56 np0005464214 confident_dijkstra[301187]:    ]
Oct  1 10:07:56 np0005464214 confident_dijkstra[301187]: }
Oct  1 10:07:56 np0005464214 systemd[1]: libpod-564731c42090395f4d72ae0a12b437780c6888967108a54bb45167376fe9f947.scope: Deactivated successfully.
Oct  1 10:07:56 np0005464214 conmon[301187]: conmon 564731c42090395f4d72 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-564731c42090395f4d72ae0a12b437780c6888967108a54bb45167376fe9f947.scope/container/memory.events
Oct  1 10:07:56 np0005464214 podman[301170]: 2025-10-01 14:07:56.276214187 +0000 UTC m=+0.967068550 container died 564731c42090395f4d72ae0a12b437780c6888967108a54bb45167376fe9f947 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_dijkstra, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct  1 10:07:56 np0005464214 systemd[1]: var-lib-containers-storage-overlay-d445d6088029acdf79cf27bce05e34f88475366d403d69c68e3d1514bd15f61e-merged.mount: Deactivated successfully.
Oct  1 10:07:56 np0005464214 podman[301170]: 2025-10-01 14:07:56.344227047 +0000 UTC m=+1.035081380 container remove 564731c42090395f4d72ae0a12b437780c6888967108a54bb45167376fe9f947 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_dijkstra, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 10:07:56 np0005464214 systemd[1]: libpod-conmon-564731c42090395f4d72ae0a12b437780c6888967108a54bb45167376fe9f947.scope: Deactivated successfully.
Oct  1 10:07:57 np0005464214 podman[301347]: 2025-10-01 14:07:57.088503194 +0000 UTC m=+0.031798039 container create f2a8d08e5e8c4065890b47bbc9ba09d4a02ef52d104f7359fc51bd56b862780e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_gould, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Oct  1 10:07:57 np0005464214 systemd[1]: Started libpod-conmon-f2a8d08e5e8c4065890b47bbc9ba09d4a02ef52d104f7359fc51bd56b862780e.scope.
Oct  1 10:07:57 np0005464214 systemd[1]: Started libcrun container.
Oct  1 10:07:57 np0005464214 podman[301347]: 2025-10-01 14:07:57.15763699 +0000 UTC m=+0.100931905 container init f2a8d08e5e8c4065890b47bbc9ba09d4a02ef52d104f7359fc51bd56b862780e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_gould, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Oct  1 10:07:57 np0005464214 podman[301347]: 2025-10-01 14:07:57.164493167 +0000 UTC m=+0.107788012 container start f2a8d08e5e8c4065890b47bbc9ba09d4a02ef52d104f7359fc51bd56b862780e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_gould, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 10:07:57 np0005464214 podman[301347]: 2025-10-01 14:07:57.168019569 +0000 UTC m=+0.111314414 container attach f2a8d08e5e8c4065890b47bbc9ba09d4a02ef52d104f7359fc51bd56b862780e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_gould, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct  1 10:07:57 np0005464214 stupefied_gould[301363]: 167 167
Oct  1 10:07:57 np0005464214 systemd[1]: libpod-f2a8d08e5e8c4065890b47bbc9ba09d4a02ef52d104f7359fc51bd56b862780e.scope: Deactivated successfully.
Oct  1 10:07:57 np0005464214 podman[301347]: 2025-10-01 14:07:57.075242034 +0000 UTC m=+0.018536899 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 10:07:57 np0005464214 conmon[301363]: conmon f2a8d08e5e8c4065890b <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-f2a8d08e5e8c4065890b47bbc9ba09d4a02ef52d104f7359fc51bd56b862780e.scope/container/memory.events
Oct  1 10:07:57 np0005464214 podman[301347]: 2025-10-01 14:07:57.172683827 +0000 UTC m=+0.115978672 container died f2a8d08e5e8c4065890b47bbc9ba09d4a02ef52d104f7359fc51bd56b862780e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_gould, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 10:07:57 np0005464214 systemd[1]: var-lib-containers-storage-overlay-83b9d00628243f06cbcd1a3dfa557858e2e02e9549d34f8b55f794803bb70106-merged.mount: Deactivated successfully.
Oct  1 10:07:57 np0005464214 podman[301347]: 2025-10-01 14:07:57.21184216 +0000 UTC m=+0.155137005 container remove f2a8d08e5e8c4065890b47bbc9ba09d4a02ef52d104f7359fc51bd56b862780e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_gould, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct  1 10:07:57 np0005464214 systemd[1]: libpod-conmon-f2a8d08e5e8c4065890b47bbc9ba09d4a02ef52d104f7359fc51bd56b862780e.scope: Deactivated successfully.
Oct  1 10:07:57 np0005464214 podman[301386]: 2025-10-01 14:07:57.380849826 +0000 UTC m=+0.050586457 container create a05d9d7deb4a69a617d261fd65fd71c147a242e01e8434620573a9bbc33c9333 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_bell, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 10:07:57 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1951: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:07:57 np0005464214 systemd[1]: Started libpod-conmon-a05d9d7deb4a69a617d261fd65fd71c147a242e01e8434620573a9bbc33c9333.scope.
Oct  1 10:07:57 np0005464214 systemd[1]: Started libcrun container.
Oct  1 10:07:57 np0005464214 podman[301386]: 2025-10-01 14:07:57.357338159 +0000 UTC m=+0.027074870 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 10:07:57 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b74907412cb643d7d798fa39de07f9e3afc15cb3e933888c83d345ac389c0bfe/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 10:07:57 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b74907412cb643d7d798fa39de07f9e3afc15cb3e933888c83d345ac389c0bfe/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 10:07:57 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b74907412cb643d7d798fa39de07f9e3afc15cb3e933888c83d345ac389c0bfe/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 10:07:57 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b74907412cb643d7d798fa39de07f9e3afc15cb3e933888c83d345ac389c0bfe/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 10:07:57 np0005464214 podman[301386]: 2025-10-01 14:07:57.475921744 +0000 UTC m=+0.145658425 container init a05d9d7deb4a69a617d261fd65fd71c147a242e01e8434620573a9bbc33c9333 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_bell, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 10:07:57 np0005464214 podman[301386]: 2025-10-01 14:07:57.487280894 +0000 UTC m=+0.157017565 container start a05d9d7deb4a69a617d261fd65fd71c147a242e01e8434620573a9bbc33c9333 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_bell, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 10:07:57 np0005464214 podman[301386]: 2025-10-01 14:07:57.490615331 +0000 UTC m=+0.160352012 container attach a05d9d7deb4a69a617d261fd65fd71c147a242e01e8434620573a9bbc33c9333 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_bell, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef)
Oct  1 10:07:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] _maybe_adjust
Oct  1 10:07:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 10:07:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  1 10:07:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 10:07:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 10:07:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 10:07:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 10:07:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 10:07:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 10:07:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 10:07:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Oct  1 10:07:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 10:07:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct  1 10:07:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 10:07:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 10:07:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 10:07:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  1 10:07:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 10:07:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  1 10:07:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 10:07:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 10:07:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 10:07:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  1 10:07:57 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 10:07:58 np0005464214 vigorous_bell[301402]: {
Oct  1 10:07:58 np0005464214 vigorous_bell[301402]:    "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982": {
Oct  1 10:07:58 np0005464214 vigorous_bell[301402]:        "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 10:07:58 np0005464214 vigorous_bell[301402]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  1 10:07:58 np0005464214 vigorous_bell[301402]:        "osd_id": 0,
Oct  1 10:07:58 np0005464214 vigorous_bell[301402]:        "osd_uuid": "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982",
Oct  1 10:07:58 np0005464214 vigorous_bell[301402]:        "type": "bluestore"
Oct  1 10:07:58 np0005464214 vigorous_bell[301402]:    },
Oct  1 10:07:58 np0005464214 vigorous_bell[301402]:    "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9": {
Oct  1 10:07:58 np0005464214 vigorous_bell[301402]:        "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 10:07:58 np0005464214 vigorous_bell[301402]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  1 10:07:58 np0005464214 vigorous_bell[301402]:        "osd_id": 2,
Oct  1 10:07:58 np0005464214 vigorous_bell[301402]:        "osd_uuid": "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9",
Oct  1 10:07:58 np0005464214 vigorous_bell[301402]:        "type": "bluestore"
Oct  1 10:07:58 np0005464214 vigorous_bell[301402]:    },
Oct  1 10:07:58 np0005464214 vigorous_bell[301402]:    "f5852bc7-e830-489a-b8a9-42dfbbe71426": {
Oct  1 10:07:58 np0005464214 vigorous_bell[301402]:        "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 10:07:58 np0005464214 vigorous_bell[301402]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  1 10:07:58 np0005464214 vigorous_bell[301402]:        "osd_id": 1,
Oct  1 10:07:58 np0005464214 vigorous_bell[301402]:        "osd_uuid": "f5852bc7-e830-489a-b8a9-42dfbbe71426",
Oct  1 10:07:58 np0005464214 vigorous_bell[301402]:        "type": "bluestore"
Oct  1 10:07:58 np0005464214 vigorous_bell[301402]:    }
Oct  1 10:07:58 np0005464214 vigorous_bell[301402]: }
Oct  1 10:07:58 np0005464214 systemd[1]: libpod-a05d9d7deb4a69a617d261fd65fd71c147a242e01e8434620573a9bbc33c9333.scope: Deactivated successfully.
Oct  1 10:07:58 np0005464214 systemd[1]: libpod-a05d9d7deb4a69a617d261fd65fd71c147a242e01e8434620573a9bbc33c9333.scope: Consumed 1.066s CPU time.
Oct  1 10:07:58 np0005464214 podman[301435]: 2025-10-01 14:07:58.591927452 +0000 UTC m=+0.033320179 container died a05d9d7deb4a69a617d261fd65fd71c147a242e01e8434620573a9bbc33c9333 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_bell, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 10:07:58 np0005464214 systemd[1]: var-lib-containers-storage-overlay-b74907412cb643d7d798fa39de07f9e3afc15cb3e933888c83d345ac389c0bfe-merged.mount: Deactivated successfully.
Oct  1 10:07:58 np0005464214 podman[301435]: 2025-10-01 14:07:58.654152068 +0000 UTC m=+0.095544785 container remove a05d9d7deb4a69a617d261fd65fd71c147a242e01e8434620573a9bbc33c9333 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_bell, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 10:07:58 np0005464214 systemd[1]: libpod-conmon-a05d9d7deb4a69a617d261fd65fd71c147a242e01e8434620573a9bbc33c9333.scope: Deactivated successfully.
Oct  1 10:07:58 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  1 10:07:58 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 10:07:58 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  1 10:07:58 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 10:07:58 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev be778427-6e6f-4ef1-8e98-b2fb8dafd97c does not exist
Oct  1 10:07:58 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev 6a9ca557-7d73-4090-9880-54ce7ba70e5e does not exist
Oct  1 10:07:59 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1952: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:07:59 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 10:07:59 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 10:08:00 np0005464214 ceph-mon[74802]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  1 10:08:00 np0005464214 ceph-mon[74802]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 3600.0 total, 600.0 interval#012Cumulative writes: 8675 writes, 39K keys, 8675 commit groups, 1.0 writes per commit group, ingest: 0.05 GB, 0.02 MB/s#012Cumulative WAL: 8675 writes, 8675 syncs, 1.00 writes per sync, written: 0.05 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1316 writes, 6220 keys, 1316 commit groups, 1.0 writes per commit group, ingest: 8.59 MB, 0.01 MB/s#012Interval WAL: 1316 writes, 1316 syncs, 1.00 writes per sync, written: 0.01 GB, 0.01 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     18.5      2.60              0.19        25    0.104       0      0       0.0       0.0#012  L6      1/0    8.38 MB   0.0      0.2     0.0      0.2       0.2      0.0       0.0   3.9     55.4     45.8      4.11              0.72        24    0.171    125K    13K       0.0       0.0#012 Sum      1/0    8.38 MB   0.0      0.2     0.0      0.2       0.2      0.1       0.0   4.9     34.0     35.2      6.70              0.92        49    0.137    125K    13K       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   6.9     69.6     69.1      0.92              0.26        12    0.076     37K   3074       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.2     0.0      0.2       0.2      0.0       0.0   0.0     55.4     45.8      4.11              0.72        24    0.171    125K    13K       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     18.5      2.58              0.19        24    0.108       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      4.6      0.01              0.00         1    0.011       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 3600.0 total, 600.0 interval#012Flush(GB): cumulative 0.047, interval 0.009#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.23 GB write, 0.07 MB/s write, 0.22 GB read, 0.06 MB/s read, 6.7 seconds#012Interval compaction: 0.06 GB write, 0.11 MB/s write, 0.06 GB read, 0.11 MB/s read, 0.9 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55daa55431f0#2 capacity: 304.00 MB usage: 25.54 MB table_size: 0 occupancy: 18446744073709551615 collections: 7 last_copies: 0 last_secs: 0.000153 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(1644,24.58 MB,8.087%) FilterBlock(50,355.05 KB,0.114054%) IndexBlock(50,627.59 KB,0.201607%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Oct  1 10:08:01 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1953: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:08:02 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 10:08:02 np0005464214 ceph-mon[74802]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #90. Immutable memtables: 0.
Oct  1 10:08:02 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:08:02.924100) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  1 10:08:02 np0005464214 ceph-mon[74802]: rocksdb: [db/flush_job.cc:856] [default] [JOB 51] Flushing memtable with next log file: 90
Oct  1 10:08:02 np0005464214 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759327682924131, "job": 51, "event": "flush_started", "num_memtables": 1, "num_entries": 344, "num_deletes": 251, "total_data_size": 198781, "memory_usage": 205576, "flush_reason": "Manual Compaction"}
Oct  1 10:08:02 np0005464214 ceph-mon[74802]: rocksdb: [db/flush_job.cc:885] [default] [JOB 51] Level-0 flush table #91: started
Oct  1 10:08:02 np0005464214 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759327682927130, "cf_name": "default", "job": 51, "event": "table_file_creation", "file_number": 91, "file_size": 198538, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 39288, "largest_seqno": 39631, "table_properties": {"data_size": 196314, "index_size": 388, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 773, "raw_key_size": 4411, "raw_average_key_size": 14, "raw_value_size": 192005, "raw_average_value_size": 650, "num_data_blocks": 16, "num_entries": 295, "num_filter_entries": 295, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759327673, "oldest_key_time": 1759327673, "file_creation_time": 1759327682, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e1fb0bd7-c6bf-40f2-8bfb-ffd039289f42", "db_session_id": "NJZTWL88H5HSB4Q4NEC9", "orig_file_number": 91, "seqno_to_time_mapping": "N/A"}}
Oct  1 10:08:02 np0005464214 ceph-mon[74802]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 51] Flush lasted 3087 microseconds, and 1476 cpu microseconds.
Oct  1 10:08:02 np0005464214 ceph-mon[74802]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  1 10:08:02 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:08:02.927184) [db/flush_job.cc:967] [default] [JOB 51] Level-0 flush table #91: 198538 bytes OK
Oct  1 10:08:02 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:08:02.927205) [db/memtable_list.cc:519] [default] Level-0 commit table #91 started
Oct  1 10:08:02 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:08:02.929012) [db/memtable_list.cc:722] [default] Level-0 commit table #91: memtable #1 done
Oct  1 10:08:02 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:08:02.929035) EVENT_LOG_v1 {"time_micros": 1759327682929027, "job": 51, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  1 10:08:02 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:08:02.929056) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  1 10:08:02 np0005464214 ceph-mon[74802]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 51] Try to delete WAL files size 196434, prev total WAL file size 196434, number of live WAL files 2.
Oct  1 10:08:02 np0005464214 ceph-mon[74802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000087.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  1 10:08:02 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:08:02.929527) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6B760030' seq:72057594037927935, type:22 .. '6B7600323532' seq:0, type:0; will stop at (end)
Oct  1 10:08:02 np0005464214 ceph-mon[74802]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 52] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  1 10:08:02 np0005464214 ceph-mon[74802]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 51 Base level 0, inputs: [91(193KB)], [89(8581KB)]
Oct  1 10:08:02 np0005464214 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759327682929767, "job": 52, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [91], "files_L6": [89], "score": -1, "input_data_size": 8985637, "oldest_snapshot_seqno": -1}
Oct  1 10:08:02 np0005464214 ceph-mon[74802]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 52] Generated table #92: 5678 keys, 8270879 bytes, temperature: kUnknown
Oct  1 10:08:02 np0005464214 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759327682986939, "cf_name": "default", "job": 52, "event": "table_file_creation", "file_number": 92, "file_size": 8270879, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8233842, "index_size": 21759, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 14213, "raw_key_size": 146688, "raw_average_key_size": 25, "raw_value_size": 8131819, "raw_average_value_size": 1432, "num_data_blocks": 872, "num_entries": 5678, "num_filter_entries": 5678, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759324077, "oldest_key_time": 0, "file_creation_time": 1759327682, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e1fb0bd7-c6bf-40f2-8bfb-ffd039289f42", "db_session_id": "NJZTWL88H5HSB4Q4NEC9", "orig_file_number": 92, "seqno_to_time_mapping": "N/A"}}
Oct  1 10:08:02 np0005464214 ceph-mon[74802]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  1 10:08:02 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:08:02.987172) [db/compaction/compaction_job.cc:1663] [default] [JOB 52] Compacted 1@0 + 1@6 files to L6 => 8270879 bytes
Oct  1 10:08:02 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:08:02.988351) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 157.2 rd, 144.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.2, 8.4 +0.0 blob) out(7.9 +0.0 blob), read-write-amplify(86.9) write-amplify(41.7) OK, records in: 6191, records dropped: 513 output_compression: NoCompression
Oct  1 10:08:02 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:08:02.988370) EVENT_LOG_v1 {"time_micros": 1759327682988361, "job": 52, "event": "compaction_finished", "compaction_time_micros": 57173, "compaction_time_cpu_micros": 38048, "output_level": 6, "num_output_files": 1, "total_output_size": 8270879, "num_input_records": 6191, "num_output_records": 5678, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  1 10:08:02 np0005464214 ceph-mon[74802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000091.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  1 10:08:02 np0005464214 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759327682988523, "job": 52, "event": "table_file_deletion", "file_number": 91}
Oct  1 10:08:02 np0005464214 ceph-mon[74802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000089.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  1 10:08:02 np0005464214 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759327682990271, "job": 52, "event": "table_file_deletion", "file_number": 89}
Oct  1 10:08:02 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:08:02.929444) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 10:08:02 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:08:02.990399) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 10:08:02 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:08:02.990406) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 10:08:02 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:08:02.990411) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 10:08:02 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:08:02.990414) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 10:08:02 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:08:02.990417) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 10:08:03 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1954: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:08:04 np0005464214 podman[301501]: 2025-10-01 14:08:04.538460552 +0000 UTC m=+0.080079063 container health_status a1dc94033b3cdaf0c1939938a446bc6911aa0e2bf0fa0c22c3e618390e8d96e1 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=multipathd, org.label-schema.build-date=20250923, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 10:08:04 np0005464214 podman[301503]: 2025-10-01 14:08:04.538655228 +0000 UTC m=+0.069407464 container health_status dfa509645741d50db256c392f2c7e50ef8a1653f296eb853aefbe3d2ba4746c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20250923, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct  1 10:08:04 np0005464214 podman[301502]: 2025-10-01 14:08:04.541648903 +0000 UTC m=+0.075267310 container health_status c13c85560319b246b9406aed1b6b3015b2c424d3ffff37d0301b6634f5976b0d (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20250923, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=iscsid, org.label-schema.schema-version=1.0, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true, container_name=iscsid)
Oct  1 10:08:04 np0005464214 podman[301500]: 2025-10-01 14:08:04.574050161 +0000 UTC m=+0.115232278 container health_status 583cde33fa111e3f81ff9a7adf51b7bee43cd123d54d60383b2d474b3b5066ad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250923, org.label-schema.schema-version=1.0)
Oct  1 10:08:05 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1955: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:08:07 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1956: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:08:07 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 10:08:09 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1957: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:08:11 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1958: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:08:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 14:08:12.334 161890 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 10:08:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 14:08:12.335 161890 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 10:08:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 14:08:12.335 161890 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 10:08:12 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 10:08:13 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1959: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:08:13 np0005464214 ceph-mon[74802]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #93. Immutable memtables: 0.
Oct  1 10:08:13 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:08:13.559646) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  1 10:08:13 np0005464214 ceph-mon[74802]: rocksdb: [db/flush_job.cc:856] [default] [JOB 53] Flushing memtable with next log file: 93
Oct  1 10:08:13 np0005464214 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759327693559682, "job": 53, "event": "flush_started", "num_memtables": 1, "num_entries": 336, "num_deletes": 251, "total_data_size": 179349, "memory_usage": 187000, "flush_reason": "Manual Compaction"}
Oct  1 10:08:13 np0005464214 ceph-mon[74802]: rocksdb: [db/flush_job.cc:885] [default] [JOB 53] Level-0 flush table #94: started
Oct  1 10:08:13 np0005464214 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759327693563186, "cf_name": "default", "job": 53, "event": "table_file_creation", "file_number": 94, "file_size": 177958, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 39632, "largest_seqno": 39967, "table_properties": {"data_size": 175830, "index_size": 292, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 773, "raw_key_size": 5313, "raw_average_key_size": 18, "raw_value_size": 171719, "raw_average_value_size": 596, "num_data_blocks": 13, "num_entries": 288, "num_filter_entries": 288, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759327683, "oldest_key_time": 1759327683, "file_creation_time": 1759327693, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e1fb0bd7-c6bf-40f2-8bfb-ffd039289f42", "db_session_id": "NJZTWL88H5HSB4Q4NEC9", "orig_file_number": 94, "seqno_to_time_mapping": "N/A"}}
Oct  1 10:08:13 np0005464214 ceph-mon[74802]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 53] Flush lasted 3570 microseconds, and 993 cpu microseconds.
Oct  1 10:08:13 np0005464214 ceph-mon[74802]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  1 10:08:13 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:08:13.563220) [db/flush_job.cc:967] [default] [JOB 53] Level-0 flush table #94: 177958 bytes OK
Oct  1 10:08:13 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:08:13.563240) [db/memtable_list.cc:519] [default] Level-0 commit table #94 started
Oct  1 10:08:13 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:08:13.565190) [db/memtable_list.cc:722] [default] Level-0 commit table #94: memtable #1 done
Oct  1 10:08:13 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:08:13.565202) EVENT_LOG_v1 {"time_micros": 1759327693565199, "job": 53, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  1 10:08:13 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:08:13.565219) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  1 10:08:13 np0005464214 ceph-mon[74802]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 53] Try to delete WAL files size 177034, prev total WAL file size 177034, number of live WAL files 2.
Oct  1 10:08:13 np0005464214 ceph-mon[74802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000090.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  1 10:08:13 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:08:13.565642) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730033353134' seq:72057594037927935, type:22 .. '7061786F730033373636' seq:0, type:0; will stop at (end)
Oct  1 10:08:13 np0005464214 ceph-mon[74802]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 54] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  1 10:08:13 np0005464214 ceph-mon[74802]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 53 Base level 0, inputs: [94(173KB)], [92(8077KB)]
Oct  1 10:08:13 np0005464214 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759327693565719, "job": 54, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [94], "files_L6": [92], "score": -1, "input_data_size": 8448837, "oldest_snapshot_seqno": -1}
Oct  1 10:08:13 np0005464214 ceph-mon[74802]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 54] Generated table #95: 5457 keys, 6715079 bytes, temperature: kUnknown
Oct  1 10:08:13 np0005464214 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759327693610535, "cf_name": "default", "job": 54, "event": "table_file_creation", "file_number": 95, "file_size": 6715079, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 6680996, "index_size": 19317, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13701, "raw_key_size": 142782, "raw_average_key_size": 26, "raw_value_size": 6584263, "raw_average_value_size": 1206, "num_data_blocks": 759, "num_entries": 5457, "num_filter_entries": 5457, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759324077, "oldest_key_time": 0, "file_creation_time": 1759327693, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e1fb0bd7-c6bf-40f2-8bfb-ffd039289f42", "db_session_id": "NJZTWL88H5HSB4Q4NEC9", "orig_file_number": 95, "seqno_to_time_mapping": "N/A"}}
Oct  1 10:08:13 np0005464214 ceph-mon[74802]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  1 10:08:13 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:08:13.610928) [db/compaction/compaction_job.cc:1663] [default] [JOB 54] Compacted 1@0 + 1@6 files to L6 => 6715079 bytes
Oct  1 10:08:13 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:08:13.612558) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 188.1 rd, 149.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.2, 7.9 +0.0 blob) out(6.4 +0.0 blob), read-write-amplify(85.2) write-amplify(37.7) OK, records in: 5966, records dropped: 509 output_compression: NoCompression
Oct  1 10:08:13 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:08:13.612584) EVENT_LOG_v1 {"time_micros": 1759327693612572, "job": 54, "event": "compaction_finished", "compaction_time_micros": 44926, "compaction_time_cpu_micros": 19105, "output_level": 6, "num_output_files": 1, "total_output_size": 6715079, "num_input_records": 5966, "num_output_records": 5457, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  1 10:08:13 np0005464214 ceph-mon[74802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000094.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  1 10:08:13 np0005464214 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759327693612972, "job": 54, "event": "table_file_deletion", "file_number": 94}
Oct  1 10:08:13 np0005464214 ceph-mon[74802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000092.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  1 10:08:13 np0005464214 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759327693615091, "job": 54, "event": "table_file_deletion", "file_number": 92}
Oct  1 10:08:13 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:08:13.565537) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 10:08:13 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:08:13.615207) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 10:08:13 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:08:13.615213) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 10:08:13 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:08:13.615216) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 10:08:13 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:08:13.615218) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 10:08:13 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:08:13.615220) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 10:08:14 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e186 do_prune osdmap full prune enabled
Oct  1 10:08:14 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e187 e187: 3 total, 3 up, 3 in
Oct  1 10:08:14 np0005464214 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e187: 3 total, 3 up, 3 in
Oct  1 10:08:15 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1961: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:08:17 np0005464214 nova_compute[260022]: 2025-10-01 14:08:17.344 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 10:08:17 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1962: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail; 15 KiB/s rd, 614 B/s wr, 18 op/s
Oct  1 10:08:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 10:08:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 10:08:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 10:08:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 10:08:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 10:08:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 10:08:17 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e187 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 10:08:19 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1963: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 24 op/s
Oct  1 10:08:21 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1964: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 24 op/s
Oct  1 10:08:22 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e187 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 10:08:22 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e187 do_prune osdmap full prune enabled
Oct  1 10:08:22 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e188 e188: 3 total, 3 up, 3 in
Oct  1 10:08:22 np0005464214 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e188: 3 total, 3 up, 3 in
Oct  1 10:08:23 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1966: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 1.6 KiB/s wr, 28 op/s
Oct  1 10:08:24 np0005464214 nova_compute[260022]: 2025-10-01 14:08:24.344 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 10:08:24 np0005464214 nova_compute[260022]: 2025-10-01 14:08:24.378 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 10:08:24 np0005464214 nova_compute[260022]: 2025-10-01 14:08:24.378 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 10:08:24 np0005464214 nova_compute[260022]: 2025-10-01 14:08:24.378 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 10:08:24 np0005464214 nova_compute[260022]: 2025-10-01 14:08:24.379 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  1 10:08:24 np0005464214 nova_compute[260022]: 2025-10-01 14:08:24.379 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 10:08:24 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  1 10:08:24 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2071918506' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  1 10:08:24 np0005464214 nova_compute[260022]: 2025-10-01 14:08:24.854 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.475s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 10:08:25 np0005464214 nova_compute[260022]: 2025-10-01 14:08:25.019 2 WARNING nova.virt.libvirt.driver [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  1 10:08:25 np0005464214 nova_compute[260022]: 2025-10-01 14:08:25.021 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5054MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  1 10:08:25 np0005464214 nova_compute[260022]: 2025-10-01 14:08:25.021 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 10:08:25 np0005464214 nova_compute[260022]: 2025-10-01 14:08:25.022 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 10:08:25 np0005464214 nova_compute[260022]: 2025-10-01 14:08:25.109 2 INFO nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Instance 3d91db2b-812b-47ea-a0f8-0384b4c68597 has allocations against this compute host but is not found in the database.#033[00m
Oct  1 10:08:25 np0005464214 nova_compute[260022]: 2025-10-01 14:08:25.126 2 INFO nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Instance 84424f30-80d3-425e-b60f-86809ad3c076 has allocations against this compute host but is not found in the database.#033[00m
Oct  1 10:08:25 np0005464214 nova_compute[260022]: 2025-10-01 14:08:25.127 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  1 10:08:25 np0005464214 nova_compute[260022]: 2025-10-01 14:08:25.127 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  1 10:08:25 np0005464214 nova_compute[260022]: 2025-10-01 14:08:25.180 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 10:08:25 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1967: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 24 op/s
Oct  1 10:08:25 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  1 10:08:25 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1608409573' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  1 10:08:25 np0005464214 nova_compute[260022]: 2025-10-01 14:08:25.579 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.399s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 10:08:25 np0005464214 nova_compute[260022]: 2025-10-01 14:08:25.584 2 DEBUG nova.compute.provider_tree [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Inventory has not changed in ProviderTree for provider: c1b9017d-7e6f-44ea-9ee2-bc19313d736f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  1 10:08:25 np0005464214 nova_compute[260022]: 2025-10-01 14:08:25.608 2 DEBUG nova.scheduler.client.report [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Inventory has not changed for provider c1b9017d-7e6f-44ea-9ee2-bc19313d736f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  1 10:08:25 np0005464214 nova_compute[260022]: 2025-10-01 14:08:25.610 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  1 10:08:25 np0005464214 nova_compute[260022]: 2025-10-01 14:08:25.610 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.588s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 10:08:27 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1968: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail; 3.5 KiB/s rd, 818 B/s wr, 6 op/s
Oct  1 10:08:27 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e188 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 10:08:28 np0005464214 nova_compute[260022]: 2025-10-01 14:08:28.611 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 10:08:28 np0005464214 nova_compute[260022]: 2025-10-01 14:08:28.612 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 10:08:28 np0005464214 nova_compute[260022]: 2025-10-01 14:08:28.612 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  1 10:08:29 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1969: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:08:30 np0005464214 nova_compute[260022]: 2025-10-01 14:08:30.341 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 10:08:31 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1970: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:08:32 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e188 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 10:08:33 np0005464214 nova_compute[260022]: 2025-10-01 14:08:33.344 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 10:08:33 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1971: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:08:34 np0005464214 nova_compute[260022]: 2025-10-01 14:08:34.345 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 10:08:35 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1972: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:08:35 np0005464214 podman[301627]: 2025-10-01 14:08:35.53480088 +0000 UTC m=+0.081764327 container health_status c13c85560319b246b9406aed1b6b3015b2c424d3ffff37d0301b6634f5976b0d (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, container_name=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, tcib_build_tag=36bccb96575468ec919301205d8daa2c, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250923, org.label-schema.license=GPLv2, tcib_managed=true)
Oct  1 10:08:35 np0005464214 podman[301628]: 2025-10-01 14:08:35.535055258 +0000 UTC m=+0.077378077 container health_status dfa509645741d50db256c392f2c7e50ef8a1653f296eb853aefbe3d2ba4746c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20250923, tcib_build_tag=36bccb96575468ec919301205d8daa2c, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  1 10:08:35 np0005464214 podman[301626]: 2025-10-01 14:08:35.538444776 +0000 UTC m=+0.088199162 container health_status a1dc94033b3cdaf0c1939938a446bc6911aa0e2bf0fa0c22c3e618390e8d96e1 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20250923, tcib_managed=true, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, org.label-schema.schema-version=1.0, tcib_build_tag=36bccb96575468ec919301205d8daa2c, io.buildah.version=1.41.3)
Oct  1 10:08:35 np0005464214 podman[301625]: 2025-10-01 14:08:35.578970482 +0000 UTC m=+0.127722136 container health_status 583cde33fa111e3f81ff9a7adf51b7bee43cd123d54d60383b2d474b3b5066ad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20250923, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller)
Oct  1 10:08:36 np0005464214 nova_compute[260022]: 2025-10-01 14:08:36.346 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 10:08:36 np0005464214 nova_compute[260022]: 2025-10-01 14:08:36.346 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  1 10:08:36 np0005464214 nova_compute[260022]: 2025-10-01 14:08:36.346 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  1 10:08:36 np0005464214 nova_compute[260022]: 2025-10-01 14:08:36.361 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  1 10:08:36 np0005464214 nova_compute[260022]: 2025-10-01 14:08:36.361 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 10:08:37 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1973: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:08:37 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e188 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 10:08:39 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1974: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:08:41 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1975: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:08:42 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e188 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 10:08:43 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1976: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:08:45 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1977: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:08:47 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1978: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:08:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 10:08:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 10:08:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 10:08:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 10:08:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 10:08:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 10:08:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] Optimize plan auto_2025-10-01_14:08:47
Oct  1 10:08:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  1 10:08:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] do_upmap
Oct  1 10:08:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] pools ['cephfs.cephfs.meta', '.rgw.root', 'default.rgw.meta', 'volumes', '.mgr', 'default.rgw.log', 'default.rgw.control', 'images', 'cephfs.cephfs.data', 'vms', 'backups']
Oct  1 10:08:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] prepared 0/10 changes
Oct  1 10:08:47 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e188 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 10:08:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  1 10:08:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  1 10:08:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  1 10:08:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  1 10:08:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  1 10:08:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  1 10:08:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  1 10:08:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  1 10:08:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  1 10:08:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  1 10:08:49 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1979: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:08:51 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1980: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:08:52 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e188 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 10:08:53 np0005464214 nova_compute[260022]: 2025-10-01 14:08:53.356 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 10:08:53 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1981: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:08:55 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  1 10:08:55 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1433003486' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  1 10:08:55 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  1 10:08:55 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1433003486' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  1 10:08:55 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1982: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:08:57 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1983: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:08:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] _maybe_adjust
Oct  1 10:08:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 10:08:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  1 10:08:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 10:08:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 10:08:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 10:08:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 10:08:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 10:08:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 10:08:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 10:08:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Oct  1 10:08:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 10:08:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct  1 10:08:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 10:08:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 10:08:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 10:08:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  1 10:08:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 10:08:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  1 10:08:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 10:08:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 10:08:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 10:08:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  1 10:08:57 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e188 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 10:08:59 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1984: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:08:59 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  1 10:08:59 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  1 10:08:59 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  1 10:08:59 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  1 10:08:59 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  1 10:08:59 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 10:08:59 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev f03d1003-6c34-4fcc-aeb8-03f77958a6dc does not exist
Oct  1 10:08:59 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev a222cb74-fdd4-4886-8a37-7010d046a576 does not exist
Oct  1 10:08:59 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev 52999477-a769-40fd-bd40-f4ea37737ed7 does not exist
Oct  1 10:08:59 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  1 10:08:59 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  1 10:08:59 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  1 10:08:59 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  1 10:08:59 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  1 10:08:59 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  1 10:09:00 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  1 10:09:00 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 10:09:00 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  1 10:09:00 np0005464214 podman[301980]: 2025-10-01 14:09:00.64108262 +0000 UTC m=+0.073617848 container create 1ac31d5edab883412e72093289d65790f33d81a709a2245b7695afd7cb8c4745 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_almeida, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True)
Oct  1 10:09:00 np0005464214 systemd[1]: Started libpod-conmon-1ac31d5edab883412e72093289d65790f33d81a709a2245b7695afd7cb8c4745.scope.
Oct  1 10:09:00 np0005464214 podman[301980]: 2025-10-01 14:09:00.613485394 +0000 UTC m=+0.046020672 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 10:09:00 np0005464214 systemd[1]: Started libcrun container.
Oct  1 10:09:00 np0005464214 podman[301980]: 2025-10-01 14:09:00.755311727 +0000 UTC m=+0.187846965 container init 1ac31d5edab883412e72093289d65790f33d81a709a2245b7695afd7cb8c4745 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_almeida, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 10:09:00 np0005464214 podman[301980]: 2025-10-01 14:09:00.768518946 +0000 UTC m=+0.201054164 container start 1ac31d5edab883412e72093289d65790f33d81a709a2245b7695afd7cb8c4745 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_almeida, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Oct  1 10:09:00 np0005464214 podman[301980]: 2025-10-01 14:09:00.773431292 +0000 UTC m=+0.205966500 container attach 1ac31d5edab883412e72093289d65790f33d81a709a2245b7695afd7cb8c4745 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_almeida, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 10:09:00 np0005464214 angry_almeida[301996]: 167 167
Oct  1 10:09:00 np0005464214 systemd[1]: libpod-1ac31d5edab883412e72093289d65790f33d81a709a2245b7695afd7cb8c4745.scope: Deactivated successfully.
Oct  1 10:09:00 np0005464214 conmon[301996]: conmon 1ac31d5edab883412e72 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-1ac31d5edab883412e72093289d65790f33d81a709a2245b7695afd7cb8c4745.scope/container/memory.events
Oct  1 10:09:00 np0005464214 podman[301980]: 2025-10-01 14:09:00.777713628 +0000 UTC m=+0.210248826 container died 1ac31d5edab883412e72093289d65790f33d81a709a2245b7695afd7cb8c4745 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_almeida, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True)
Oct  1 10:09:00 np0005464214 systemd[1]: var-lib-containers-storage-overlay-79d046bd287249e3accf87c6d3b80967ce5abf3f30a9360863a75c23df7d67db-merged.mount: Deactivated successfully.
Oct  1 10:09:00 np0005464214 podman[301980]: 2025-10-01 14:09:00.832831477 +0000 UTC m=+0.265366675 container remove 1ac31d5edab883412e72093289d65790f33d81a709a2245b7695afd7cb8c4745 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_almeida, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct  1 10:09:00 np0005464214 systemd[1]: libpod-conmon-1ac31d5edab883412e72093289d65790f33d81a709a2245b7695afd7cb8c4745.scope: Deactivated successfully.
Oct  1 10:09:01 np0005464214 podman[302020]: 2025-10-01 14:09:01.042814394 +0000 UTC m=+0.062293289 container create 5e5b03a311f84d179679e9b4337d7484d55948703fc921582d291c22a6c6dcaf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_goodall, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 10:09:01 np0005464214 systemd[1]: Started libpod-conmon-5e5b03a311f84d179679e9b4337d7484d55948703fc921582d291c22a6c6dcaf.scope.
Oct  1 10:09:01 np0005464214 podman[302020]: 2025-10-01 14:09:01.019683909 +0000 UTC m=+0.039162824 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 10:09:01 np0005464214 systemd[1]: Started libcrun container.
Oct  1 10:09:01 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8de3896bdf1fb6ad6a78a21af54d005785e226b9da1920c7949717473ab8d131/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 10:09:01 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8de3896bdf1fb6ad6a78a21af54d005785e226b9da1920c7949717473ab8d131/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 10:09:01 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8de3896bdf1fb6ad6a78a21af54d005785e226b9da1920c7949717473ab8d131/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 10:09:01 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8de3896bdf1fb6ad6a78a21af54d005785e226b9da1920c7949717473ab8d131/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 10:09:01 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8de3896bdf1fb6ad6a78a21af54d005785e226b9da1920c7949717473ab8d131/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  1 10:09:01 np0005464214 podman[302020]: 2025-10-01 14:09:01.155670177 +0000 UTC m=+0.175149102 container init 5e5b03a311f84d179679e9b4337d7484d55948703fc921582d291c22a6c6dcaf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_goodall, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 10:09:01 np0005464214 podman[302020]: 2025-10-01 14:09:01.167263574 +0000 UTC m=+0.186742439 container start 5e5b03a311f84d179679e9b4337d7484d55948703fc921582d291c22a6c6dcaf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_goodall, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Oct  1 10:09:01 np0005464214 podman[302020]: 2025-10-01 14:09:01.171559621 +0000 UTC m=+0.191038486 container attach 5e5b03a311f84d179679e9b4337d7484d55948703fc921582d291c22a6c6dcaf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_goodall, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Oct  1 10:09:01 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1985: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:09:02 np0005464214 pedantic_goodall[302036]: --> passed data devices: 0 physical, 3 LVM
Oct  1 10:09:02 np0005464214 pedantic_goodall[302036]: --> relative data size: 1.0
Oct  1 10:09:02 np0005464214 pedantic_goodall[302036]: --> All data devices are unavailable
Oct  1 10:09:02 np0005464214 systemd[1]: libpod-5e5b03a311f84d179679e9b4337d7484d55948703fc921582d291c22a6c6dcaf.scope: Deactivated successfully.
Oct  1 10:09:02 np0005464214 systemd[1]: libpod-5e5b03a311f84d179679e9b4337d7484d55948703fc921582d291c22a6c6dcaf.scope: Consumed 1.117s CPU time.
Oct  1 10:09:02 np0005464214 podman[302020]: 2025-10-01 14:09:02.329973265 +0000 UTC m=+1.349452170 container died 5e5b03a311f84d179679e9b4337d7484d55948703fc921582d291c22a6c6dcaf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_goodall, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef)
Oct  1 10:09:02 np0005464214 systemd[1]: var-lib-containers-storage-overlay-8de3896bdf1fb6ad6a78a21af54d005785e226b9da1920c7949717473ab8d131-merged.mount: Deactivated successfully.
Oct  1 10:09:02 np0005464214 podman[302020]: 2025-10-01 14:09:02.396456346 +0000 UTC m=+1.415935211 container remove 5e5b03a311f84d179679e9b4337d7484d55948703fc921582d291c22a6c6dcaf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_goodall, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 10:09:02 np0005464214 systemd[1]: libpod-conmon-5e5b03a311f84d179679e9b4337d7484d55948703fc921582d291c22a6c6dcaf.scope: Deactivated successfully.
Oct  1 10:09:02 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e188 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 10:09:03 np0005464214 podman[302221]: 2025-10-01 14:09:03.161199043 +0000 UTC m=+0.053188749 container create 0ee479bc96d229b0a46b485dadaf82bc12b8550a60dfe9412b49f23497c1e2a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_lederberg, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 10:09:03 np0005464214 systemd[1]: Started libpod-conmon-0ee479bc96d229b0a46b485dadaf82bc12b8550a60dfe9412b49f23497c1e2a5.scope.
Oct  1 10:09:03 np0005464214 systemd[1]: Started libcrun container.
Oct  1 10:09:03 np0005464214 podman[302221]: 2025-10-01 14:09:03.222981445 +0000 UTC m=+0.114971181 container init 0ee479bc96d229b0a46b485dadaf82bc12b8550a60dfe9412b49f23497c1e2a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_lederberg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Oct  1 10:09:03 np0005464214 podman[302221]: 2025-10-01 14:09:03.137994036 +0000 UTC m=+0.029983802 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 10:09:03 np0005464214 podman[302221]: 2025-10-01 14:09:03.23479611 +0000 UTC m=+0.126785826 container start 0ee479bc96d229b0a46b485dadaf82bc12b8550a60dfe9412b49f23497c1e2a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_lederberg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 10:09:03 np0005464214 podman[302221]: 2025-10-01 14:09:03.238283271 +0000 UTC m=+0.130273007 container attach 0ee479bc96d229b0a46b485dadaf82bc12b8550a60dfe9412b49f23497c1e2a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_lederberg, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Oct  1 10:09:03 np0005464214 competent_lederberg[302238]: 167 167
Oct  1 10:09:03 np0005464214 systemd[1]: libpod-0ee479bc96d229b0a46b485dadaf82bc12b8550a60dfe9412b49f23497c1e2a5.scope: Deactivated successfully.
Oct  1 10:09:03 np0005464214 podman[302221]: 2025-10-01 14:09:03.242143303 +0000 UTC m=+0.134133029 container died 0ee479bc96d229b0a46b485dadaf82bc12b8550a60dfe9412b49f23497c1e2a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_lederberg, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Oct  1 10:09:03 np0005464214 systemd[1]: var-lib-containers-storage-overlay-b2b2b417f1d6b31b7ab7c7de45fddc832d841d0705e011ef864cee7b431bb9fd-merged.mount: Deactivated successfully.
Oct  1 10:09:03 np0005464214 podman[302221]: 2025-10-01 14:09:03.28678161 +0000 UTC m=+0.178771356 container remove 0ee479bc96d229b0a46b485dadaf82bc12b8550a60dfe9412b49f23497c1e2a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_lederberg, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 10:09:03 np0005464214 systemd[1]: libpod-conmon-0ee479bc96d229b0a46b485dadaf82bc12b8550a60dfe9412b49f23497c1e2a5.scope: Deactivated successfully.
Oct  1 10:09:03 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1986: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:09:03 np0005464214 podman[302260]: 2025-10-01 14:09:03.474368185 +0000 UTC m=+0.055387389 container create cdeab03f2aa5f5b8dcdf34852cd583f1bc0cabaf5dd829d1b53361d37a0400a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_black, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct  1 10:09:03 np0005464214 systemd[1]: Started libpod-conmon-cdeab03f2aa5f5b8dcdf34852cd583f1bc0cabaf5dd829d1b53361d37a0400a3.scope.
Oct  1 10:09:03 np0005464214 systemd[1]: Started libcrun container.
Oct  1 10:09:03 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49740ff5509dfdbc9180f8656c285ff4ecf94a755d9311db09ce05df1fcafdfd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 10:09:03 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49740ff5509dfdbc9180f8656c285ff4ecf94a755d9311db09ce05df1fcafdfd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 10:09:03 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49740ff5509dfdbc9180f8656c285ff4ecf94a755d9311db09ce05df1fcafdfd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 10:09:03 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49740ff5509dfdbc9180f8656c285ff4ecf94a755d9311db09ce05df1fcafdfd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 10:09:03 np0005464214 podman[302260]: 2025-10-01 14:09:03.452968046 +0000 UTC m=+0.033987280 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 10:09:03 np0005464214 podman[302260]: 2025-10-01 14:09:03.554670515 +0000 UTC m=+0.135689799 container init cdeab03f2aa5f5b8dcdf34852cd583f1bc0cabaf5dd829d1b53361d37a0400a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_black, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 10:09:03 np0005464214 podman[302260]: 2025-10-01 14:09:03.561302745 +0000 UTC m=+0.142321979 container start cdeab03f2aa5f5b8dcdf34852cd583f1bc0cabaf5dd829d1b53361d37a0400a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_black, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 10:09:03 np0005464214 podman[302260]: 2025-10-01 14:09:03.566133409 +0000 UTC m=+0.147152643 container attach cdeab03f2aa5f5b8dcdf34852cd583f1bc0cabaf5dd829d1b53361d37a0400a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_black, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 10:09:04 np0005464214 nostalgic_black[302276]: {
Oct  1 10:09:04 np0005464214 nostalgic_black[302276]:    "0": [
Oct  1 10:09:04 np0005464214 nostalgic_black[302276]:        {
Oct  1 10:09:04 np0005464214 nostalgic_black[302276]:            "devices": [
Oct  1 10:09:04 np0005464214 nostalgic_black[302276]:                "/dev/loop3"
Oct  1 10:09:04 np0005464214 nostalgic_black[302276]:            ],
Oct  1 10:09:04 np0005464214 nostalgic_black[302276]:            "lv_name": "ceph_lv0",
Oct  1 10:09:04 np0005464214 nostalgic_black[302276]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  1 10:09:04 np0005464214 nostalgic_black[302276]:            "lv_size": "21470642176",
Oct  1 10:09:04 np0005464214 nostalgic_black[302276]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 10:09:04 np0005464214 nostalgic_black[302276]:            "lv_uuid": "wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS",
Oct  1 10:09:04 np0005464214 nostalgic_black[302276]:            "name": "ceph_lv0",
Oct  1 10:09:04 np0005464214 nostalgic_black[302276]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  1 10:09:04 np0005464214 nostalgic_black[302276]:            "tags": {
Oct  1 10:09:04 np0005464214 nostalgic_black[302276]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  1 10:09:04 np0005464214 nostalgic_black[302276]:                "ceph.block_uuid": "wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS",
Oct  1 10:09:04 np0005464214 nostalgic_black[302276]:                "ceph.cephx_lockbox_secret": "",
Oct  1 10:09:04 np0005464214 nostalgic_black[302276]:                "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 10:09:04 np0005464214 nostalgic_black[302276]:                "ceph.cluster_name": "ceph",
Oct  1 10:09:04 np0005464214 nostalgic_black[302276]:                "ceph.crush_device_class": "",
Oct  1 10:09:04 np0005464214 nostalgic_black[302276]:                "ceph.encrypted": "0",
Oct  1 10:09:04 np0005464214 nostalgic_black[302276]:                "ceph.osd_fsid": "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982",
Oct  1 10:09:04 np0005464214 nostalgic_black[302276]:                "ceph.osd_id": "0",
Oct  1 10:09:04 np0005464214 nostalgic_black[302276]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 10:09:04 np0005464214 nostalgic_black[302276]:                "ceph.type": "block",
Oct  1 10:09:04 np0005464214 nostalgic_black[302276]:                "ceph.vdo": "0"
Oct  1 10:09:04 np0005464214 nostalgic_black[302276]:            },
Oct  1 10:09:04 np0005464214 nostalgic_black[302276]:            "type": "block",
Oct  1 10:09:04 np0005464214 nostalgic_black[302276]:            "vg_name": "ceph_vg0"
Oct  1 10:09:04 np0005464214 nostalgic_black[302276]:        }
Oct  1 10:09:04 np0005464214 nostalgic_black[302276]:    ],
Oct  1 10:09:04 np0005464214 nostalgic_black[302276]:    "1": [
Oct  1 10:09:04 np0005464214 nostalgic_black[302276]:        {
Oct  1 10:09:04 np0005464214 nostalgic_black[302276]:            "devices": [
Oct  1 10:09:04 np0005464214 nostalgic_black[302276]:                "/dev/loop4"
Oct  1 10:09:04 np0005464214 nostalgic_black[302276]:            ],
Oct  1 10:09:04 np0005464214 nostalgic_black[302276]:            "lv_name": "ceph_lv1",
Oct  1 10:09:04 np0005464214 nostalgic_black[302276]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  1 10:09:04 np0005464214 nostalgic_black[302276]:            "lv_size": "21470642176",
Oct  1 10:09:04 np0005464214 nostalgic_black[302276]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f5852bc7-e830-489a-b8a9-42dfbbe71426,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 10:09:04 np0005464214 nostalgic_black[302276]:            "lv_uuid": "BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY",
Oct  1 10:09:04 np0005464214 nostalgic_black[302276]:            "name": "ceph_lv1",
Oct  1 10:09:04 np0005464214 nostalgic_black[302276]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  1 10:09:04 np0005464214 nostalgic_black[302276]:            "tags": {
Oct  1 10:09:04 np0005464214 nostalgic_black[302276]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  1 10:09:04 np0005464214 nostalgic_black[302276]:                "ceph.block_uuid": "BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY",
Oct  1 10:09:04 np0005464214 nostalgic_black[302276]:                "ceph.cephx_lockbox_secret": "",
Oct  1 10:09:04 np0005464214 nostalgic_black[302276]:                "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 10:09:04 np0005464214 nostalgic_black[302276]:                "ceph.cluster_name": "ceph",
Oct  1 10:09:04 np0005464214 nostalgic_black[302276]:                "ceph.crush_device_class": "",
Oct  1 10:09:04 np0005464214 nostalgic_black[302276]:                "ceph.encrypted": "0",
Oct  1 10:09:04 np0005464214 nostalgic_black[302276]:                "ceph.osd_fsid": "f5852bc7-e830-489a-b8a9-42dfbbe71426",
Oct  1 10:09:04 np0005464214 nostalgic_black[302276]:                "ceph.osd_id": "1",
Oct  1 10:09:04 np0005464214 nostalgic_black[302276]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 10:09:04 np0005464214 nostalgic_black[302276]:                "ceph.type": "block",
Oct  1 10:09:04 np0005464214 nostalgic_black[302276]:                "ceph.vdo": "0"
Oct  1 10:09:04 np0005464214 nostalgic_black[302276]:            },
Oct  1 10:09:04 np0005464214 nostalgic_black[302276]:            "type": "block",
Oct  1 10:09:04 np0005464214 nostalgic_black[302276]:            "vg_name": "ceph_vg1"
Oct  1 10:09:04 np0005464214 nostalgic_black[302276]:        }
Oct  1 10:09:04 np0005464214 nostalgic_black[302276]:    ],
Oct  1 10:09:04 np0005464214 nostalgic_black[302276]:    "2": [
Oct  1 10:09:04 np0005464214 nostalgic_black[302276]:        {
Oct  1 10:09:04 np0005464214 nostalgic_black[302276]:            "devices": [
Oct  1 10:09:04 np0005464214 nostalgic_black[302276]:                "/dev/loop5"
Oct  1 10:09:04 np0005464214 nostalgic_black[302276]:            ],
Oct  1 10:09:04 np0005464214 nostalgic_black[302276]:            "lv_name": "ceph_lv2",
Oct  1 10:09:04 np0005464214 nostalgic_black[302276]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  1 10:09:04 np0005464214 nostalgic_black[302276]:            "lv_size": "21470642176",
Oct  1 10:09:04 np0005464214 nostalgic_black[302276]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c4c937e2-a8a8-47c3-af37-fdedb6fff1f9,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 10:09:04 np0005464214 nostalgic_black[302276]:            "lv_uuid": "mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK",
Oct  1 10:09:04 np0005464214 nostalgic_black[302276]:            "name": "ceph_lv2",
Oct  1 10:09:04 np0005464214 nostalgic_black[302276]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  1 10:09:04 np0005464214 nostalgic_black[302276]:            "tags": {
Oct  1 10:09:04 np0005464214 nostalgic_black[302276]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  1 10:09:04 np0005464214 nostalgic_black[302276]:                "ceph.block_uuid": "mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK",
Oct  1 10:09:04 np0005464214 nostalgic_black[302276]:                "ceph.cephx_lockbox_secret": "",
Oct  1 10:09:04 np0005464214 nostalgic_black[302276]:                "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 10:09:04 np0005464214 nostalgic_black[302276]:                "ceph.cluster_name": "ceph",
Oct  1 10:09:04 np0005464214 nostalgic_black[302276]:                "ceph.crush_device_class": "",
Oct  1 10:09:04 np0005464214 nostalgic_black[302276]:                "ceph.encrypted": "0",
Oct  1 10:09:04 np0005464214 nostalgic_black[302276]:                "ceph.osd_fsid": "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9",
Oct  1 10:09:04 np0005464214 nostalgic_black[302276]:                "ceph.osd_id": "2",
Oct  1 10:09:04 np0005464214 nostalgic_black[302276]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 10:09:04 np0005464214 nostalgic_black[302276]:                "ceph.type": "block",
Oct  1 10:09:04 np0005464214 nostalgic_black[302276]:                "ceph.vdo": "0"
Oct  1 10:09:04 np0005464214 nostalgic_black[302276]:            },
Oct  1 10:09:04 np0005464214 nostalgic_black[302276]:            "type": "block",
Oct  1 10:09:04 np0005464214 nostalgic_black[302276]:            "vg_name": "ceph_vg2"
Oct  1 10:09:04 np0005464214 nostalgic_black[302276]:        }
Oct  1 10:09:04 np0005464214 nostalgic_black[302276]:    ]
Oct  1 10:09:04 np0005464214 nostalgic_black[302276]: }
Oct  1 10:09:04 np0005464214 systemd[1]: libpod-cdeab03f2aa5f5b8dcdf34852cd583f1bc0cabaf5dd829d1b53361d37a0400a3.scope: Deactivated successfully.
Oct  1 10:09:04 np0005464214 podman[302260]: 2025-10-01 14:09:04.280339921 +0000 UTC m=+0.861359155 container died cdeab03f2aa5f5b8dcdf34852cd583f1bc0cabaf5dd829d1b53361d37a0400a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_black, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 10:09:04 np0005464214 systemd[1]: var-lib-containers-storage-overlay-49740ff5509dfdbc9180f8656c285ff4ecf94a755d9311db09ce05df1fcafdfd-merged.mount: Deactivated successfully.
Oct  1 10:09:04 np0005464214 podman[302260]: 2025-10-01 14:09:04.351561202 +0000 UTC m=+0.932580396 container remove cdeab03f2aa5f5b8dcdf34852cd583f1bc0cabaf5dd829d1b53361d37a0400a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_black, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 10:09:04 np0005464214 systemd[1]: libpod-conmon-cdeab03f2aa5f5b8dcdf34852cd583f1bc0cabaf5dd829d1b53361d37a0400a3.scope: Deactivated successfully.
Oct  1 10:09:05 np0005464214 podman[302441]: 2025-10-01 14:09:05.092466573 +0000 UTC m=+0.037134579 container create 201e4e16aab4f4cbcc650fd9c9185bd55406370cc0c5822b37f9fd91e1f3fd65 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_blackwell, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2)
Oct  1 10:09:05 np0005464214 systemd[1]: Started libpod-conmon-201e4e16aab4f4cbcc650fd9c9185bd55406370cc0c5822b37f9fd91e1f3fd65.scope.
Oct  1 10:09:05 np0005464214 systemd[1]: Started libcrun container.
Oct  1 10:09:05 np0005464214 podman[302441]: 2025-10-01 14:09:05.076524927 +0000 UTC m=+0.021192953 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 10:09:05 np0005464214 podman[302441]: 2025-10-01 14:09:05.182970946 +0000 UTC m=+0.127639022 container init 201e4e16aab4f4cbcc650fd9c9185bd55406370cc0c5822b37f9fd91e1f3fd65 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_blackwell, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 10:09:05 np0005464214 podman[302441]: 2025-10-01 14:09:05.189210595 +0000 UTC m=+0.133878611 container start 201e4e16aab4f4cbcc650fd9c9185bd55406370cc0c5822b37f9fd91e1f3fd65 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_blackwell, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 10:09:05 np0005464214 podman[302441]: 2025-10-01 14:09:05.192790698 +0000 UTC m=+0.137458744 container attach 201e4e16aab4f4cbcc650fd9c9185bd55406370cc0c5822b37f9fd91e1f3fd65 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_blackwell, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 10:09:05 np0005464214 great_blackwell[302457]: 167 167
Oct  1 10:09:05 np0005464214 systemd[1]: libpod-201e4e16aab4f4cbcc650fd9c9185bd55406370cc0c5822b37f9fd91e1f3fd65.scope: Deactivated successfully.
Oct  1 10:09:05 np0005464214 podman[302441]: 2025-10-01 14:09:05.195009089 +0000 UTC m=+0.139677145 container died 201e4e16aab4f4cbcc650fd9c9185bd55406370cc0c5822b37f9fd91e1f3fd65 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_blackwell, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Oct  1 10:09:05 np0005464214 systemd[1]: var-lib-containers-storage-overlay-5abb64f56e6365c10dc0af0db4be52a64c0406fb538610e84a67b73c5ad8ea95-merged.mount: Deactivated successfully.
Oct  1 10:09:05 np0005464214 podman[302441]: 2025-10-01 14:09:05.252480533 +0000 UTC m=+0.197148539 container remove 201e4e16aab4f4cbcc650fd9c9185bd55406370cc0c5822b37f9fd91e1f3fd65 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_blackwell, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Oct  1 10:09:05 np0005464214 systemd[1]: libpod-conmon-201e4e16aab4f4cbcc650fd9c9185bd55406370cc0c5822b37f9fd91e1f3fd65.scope: Deactivated successfully.
Oct  1 10:09:05 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1987: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:09:05 np0005464214 podman[302483]: 2025-10-01 14:09:05.505472345 +0000 UTC m=+0.065442429 container create 0cdaaea1a154d4efa8eed77d66614d5c960de69818ad0eb06e957ff57c80b29e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_mclaren, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct  1 10:09:05 np0005464214 systemd[1]: Started libpod-conmon-0cdaaea1a154d4efa8eed77d66614d5c960de69818ad0eb06e957ff57c80b29e.scope.
Oct  1 10:09:05 np0005464214 systemd[1]: Started libcrun container.
Oct  1 10:09:05 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f2bb3d784458284b1511d9a26ae2aa548b74867229130eb8cb271b760ff8ffa9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 10:09:05 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f2bb3d784458284b1511d9a26ae2aa548b74867229130eb8cb271b760ff8ffa9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 10:09:05 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f2bb3d784458284b1511d9a26ae2aa548b74867229130eb8cb271b760ff8ffa9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 10:09:05 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f2bb3d784458284b1511d9a26ae2aa548b74867229130eb8cb271b760ff8ffa9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 10:09:05 np0005464214 podman[302483]: 2025-10-01 14:09:05.482846106 +0000 UTC m=+0.042816200 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 10:09:05 np0005464214 podman[302483]: 2025-10-01 14:09:05.658181762 +0000 UTC m=+0.218151886 container init 0cdaaea1a154d4efa8eed77d66614d5c960de69818ad0eb06e957ff57c80b29e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_mclaren, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Oct  1 10:09:05 np0005464214 podman[302483]: 2025-10-01 14:09:05.664830123 +0000 UTC m=+0.224800207 container start 0cdaaea1a154d4efa8eed77d66614d5c960de69818ad0eb06e957ff57c80b29e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_mclaren, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 10:09:05 np0005464214 podman[302483]: 2025-10-01 14:09:05.717220826 +0000 UTC m=+0.277190890 container attach 0cdaaea1a154d4efa8eed77d66614d5c960de69818ad0eb06e957ff57c80b29e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_mclaren, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  1 10:09:05 np0005464214 podman[302502]: 2025-10-01 14:09:05.763470145 +0000 UTC m=+0.189872528 container health_status a1dc94033b3cdaf0c1939938a446bc6911aa0e2bf0fa0c22c3e618390e8d96e1 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250923, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3)
Oct  1 10:09:05 np0005464214 podman[302505]: 2025-10-01 14:09:05.790487082 +0000 UTC m=+0.218542038 container health_status dfa509645741d50db256c392f2c7e50ef8a1653f296eb853aefbe3d2ba4746c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20250923, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=36bccb96575468ec919301205d8daa2c)
Oct  1 10:09:05 np0005464214 podman[302503]: 2025-10-01 14:09:05.819446822 +0000 UTC m=+0.246229188 container health_status c13c85560319b246b9406aed1b6b3015b2c424d3ffff37d0301b6634f5976b0d (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20250923, config_id=iscsid, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct  1 10:09:05 np0005464214 podman[302553]: 2025-10-01 14:09:05.865435821 +0000 UTC m=+0.076716565 container health_status 583cde33fa111e3f81ff9a7adf51b7bee43cd123d54d60383b2d474b3b5066ad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20250923, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Oct  1 10:09:06 np0005464214 peaceful_mclaren[302500]: {
Oct  1 10:09:06 np0005464214 peaceful_mclaren[302500]:    "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982": {
Oct  1 10:09:06 np0005464214 peaceful_mclaren[302500]:        "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 10:09:06 np0005464214 peaceful_mclaren[302500]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  1 10:09:06 np0005464214 peaceful_mclaren[302500]:        "osd_id": 0,
Oct  1 10:09:06 np0005464214 peaceful_mclaren[302500]:        "osd_uuid": "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982",
Oct  1 10:09:06 np0005464214 peaceful_mclaren[302500]:        "type": "bluestore"
Oct  1 10:09:06 np0005464214 peaceful_mclaren[302500]:    },
Oct  1 10:09:06 np0005464214 peaceful_mclaren[302500]:    "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9": {
Oct  1 10:09:06 np0005464214 peaceful_mclaren[302500]:        "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 10:09:06 np0005464214 peaceful_mclaren[302500]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  1 10:09:06 np0005464214 peaceful_mclaren[302500]:        "osd_id": 2,
Oct  1 10:09:06 np0005464214 peaceful_mclaren[302500]:        "osd_uuid": "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9",
Oct  1 10:09:06 np0005464214 peaceful_mclaren[302500]:        "type": "bluestore"
Oct  1 10:09:06 np0005464214 peaceful_mclaren[302500]:    },
Oct  1 10:09:06 np0005464214 peaceful_mclaren[302500]:    "f5852bc7-e830-489a-b8a9-42dfbbe71426": {
Oct  1 10:09:06 np0005464214 peaceful_mclaren[302500]:        "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 10:09:06 np0005464214 peaceful_mclaren[302500]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  1 10:09:06 np0005464214 peaceful_mclaren[302500]:        "osd_id": 1,
Oct  1 10:09:06 np0005464214 peaceful_mclaren[302500]:        "osd_uuid": "f5852bc7-e830-489a-b8a9-42dfbbe71426",
Oct  1 10:09:06 np0005464214 peaceful_mclaren[302500]:        "type": "bluestore"
Oct  1 10:09:06 np0005464214 peaceful_mclaren[302500]:    }
Oct  1 10:09:06 np0005464214 peaceful_mclaren[302500]: }
Oct  1 10:09:06 np0005464214 systemd[1]: libpod-0cdaaea1a154d4efa8eed77d66614d5c960de69818ad0eb06e957ff57c80b29e.scope: Deactivated successfully.
Oct  1 10:09:06 np0005464214 systemd[1]: libpod-0cdaaea1a154d4efa8eed77d66614d5c960de69818ad0eb06e957ff57c80b29e.scope: Consumed 1.115s CPU time.
Oct  1 10:09:06 np0005464214 podman[302483]: 2025-10-01 14:09:06.775207053 +0000 UTC m=+1.335177147 container died 0cdaaea1a154d4efa8eed77d66614d5c960de69818ad0eb06e957ff57c80b29e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_mclaren, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct  1 10:09:06 np0005464214 systemd[1]: var-lib-containers-storage-overlay-f2bb3d784458284b1511d9a26ae2aa548b74867229130eb8cb271b760ff8ffa9-merged.mount: Deactivated successfully.
Oct  1 10:09:06 np0005464214 podman[302483]: 2025-10-01 14:09:06.844790942 +0000 UTC m=+1.404761006 container remove 0cdaaea1a154d4efa8eed77d66614d5c960de69818ad0eb06e957ff57c80b29e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_mclaren, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 10:09:06 np0005464214 systemd[1]: libpod-conmon-0cdaaea1a154d4efa8eed77d66614d5c960de69818ad0eb06e957ff57c80b29e.scope: Deactivated successfully.
Oct  1 10:09:06 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  1 10:09:06 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 10:09:06 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  1 10:09:06 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 10:09:06 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev 7c9017b2-66d3-4653-9531-2ce1cfdbd7fa does not exist
Oct  1 10:09:06 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev f44a7d01-8215-404b-aa42-16f9a097890a does not exist
Oct  1 10:09:07 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1988: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:09:07 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 10:09:07 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 10:09:07 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e188 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 10:09:09 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1989: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:09:11 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1990: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:09:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 14:09:12.335 161890 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 10:09:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 14:09:12.336 161890 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 10:09:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 14:09:12.336 161890 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 10:09:12 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e188 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 10:09:13 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1991: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:09:15 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1992: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:09:17 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1993: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:09:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 10:09:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 10:09:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 10:09:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 10:09:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 10:09:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 10:09:17 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e188 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 10:09:18 np0005464214 nova_compute[260022]: 2025-10-01 14:09:18.344 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 10:09:19 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1994: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:09:21 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1995: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:09:22 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e188 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 10:09:23 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1996: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:09:23 np0005464214 rsyslogd[1009]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct  1 10:09:23 np0005464214 rsyslogd[1009]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct  1 10:09:25 np0005464214 nova_compute[260022]: 2025-10-01 14:09:25.345 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 10:09:25 np0005464214 nova_compute[260022]: 2025-10-01 14:09:25.367 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 10:09:25 np0005464214 nova_compute[260022]: 2025-10-01 14:09:25.368 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 10:09:25 np0005464214 nova_compute[260022]: 2025-10-01 14:09:25.368 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 10:09:25 np0005464214 nova_compute[260022]: 2025-10-01 14:09:25.368 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  1 10:09:25 np0005464214 nova_compute[260022]: 2025-10-01 14:09:25.368 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 10:09:25 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1997: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:09:25 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  1 10:09:25 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3187336416' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  1 10:09:25 np0005464214 nova_compute[260022]: 2025-10-01 14:09:25.831 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 10:09:26 np0005464214 nova_compute[260022]: 2025-10-01 14:09:26.082 2 WARNING nova.virt.libvirt.driver [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  1 10:09:26 np0005464214 nova_compute[260022]: 2025-10-01 14:09:26.084 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5041MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  1 10:09:26 np0005464214 nova_compute[260022]: 2025-10-01 14:09:26.084 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 10:09:26 np0005464214 nova_compute[260022]: 2025-10-01 14:09:26.085 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 10:09:26 np0005464214 nova_compute[260022]: 2025-10-01 14:09:26.171 2 INFO nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Instance 3d91db2b-812b-47ea-a0f8-0384b4c68597 has allocations against this compute host but is not found in the database.#033[00m
Oct  1 10:09:26 np0005464214 nova_compute[260022]: 2025-10-01 14:09:26.187 2 INFO nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Instance 84424f30-80d3-425e-b60f-86809ad3c076 has allocations against this compute host but is not found in the database.#033[00m
Oct  1 10:09:26 np0005464214 nova_compute[260022]: 2025-10-01 14:09:26.188 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  1 10:09:26 np0005464214 nova_compute[260022]: 2025-10-01 14:09:26.188 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  1 10:09:26 np0005464214 nova_compute[260022]: 2025-10-01 14:09:26.242 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 10:09:26 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  1 10:09:26 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2831139878' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  1 10:09:26 np0005464214 nova_compute[260022]: 2025-10-01 14:09:26.731 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.489s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 10:09:26 np0005464214 nova_compute[260022]: 2025-10-01 14:09:26.739 2 DEBUG nova.compute.provider_tree [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Inventory has not changed in ProviderTree for provider: c1b9017d-7e6f-44ea-9ee2-bc19313d736f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  1 10:09:26 np0005464214 nova_compute[260022]: 2025-10-01 14:09:26.757 2 DEBUG nova.scheduler.client.report [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Inventory has not changed for provider c1b9017d-7e6f-44ea-9ee2-bc19313d736f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  1 10:09:26 np0005464214 nova_compute[260022]: 2025-10-01 14:09:26.761 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  1 10:09:26 np0005464214 nova_compute[260022]: 2025-10-01 14:09:26.761 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.676s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 10:09:27 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1998: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:09:27 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e188 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 10:09:28 np0005464214 nova_compute[260022]: 2025-10-01 14:09:28.763 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 10:09:28 np0005464214 nova_compute[260022]: 2025-10-01 14:09:28.763 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 10:09:28 np0005464214 nova_compute[260022]: 2025-10-01 14:09:28.763 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  1 10:09:29 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1999: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:09:30 np0005464214 nova_compute[260022]: 2025-10-01 14:09:30.340 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 10:09:31 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2000: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:09:32 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e188 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 10:09:33 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2001: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:09:34 np0005464214 nova_compute[260022]: 2025-10-01 14:09:34.345 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 10:09:35 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2002: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:09:36 np0005464214 nova_compute[260022]: 2025-10-01 14:09:36.344 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 10:09:36 np0005464214 nova_compute[260022]: 2025-10-01 14:09:36.345 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  1 10:09:36 np0005464214 nova_compute[260022]: 2025-10-01 14:09:36.345 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  1 10:09:36 np0005464214 nova_compute[260022]: 2025-10-01 14:09:36.365 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  1 10:09:36 np0005464214 nova_compute[260022]: 2025-10-01 14:09:36.365 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 10:09:36 np0005464214 podman[302723]: 2025-10-01 14:09:36.580529594 +0000 UTC m=+0.117876194 container health_status c13c85560319b246b9406aed1b6b3015b2c424d3ffff37d0301b6634f5976b0d (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20250923, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_id=iscsid, container_name=iscsid)
Oct  1 10:09:36 np0005464214 podman[302722]: 2025-10-01 14:09:36.593839016 +0000 UTC m=+0.131163705 container health_status a1dc94033b3cdaf0c1939938a446bc6911aa0e2bf0fa0c22c3e618390e8d96e1 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250923, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct  1 10:09:36 np0005464214 podman[302724]: 2025-10-01 14:09:36.594246669 +0000 UTC m=+0.119444433 container health_status dfa509645741d50db256c392f2c7e50ef8a1653f296eb853aefbe3d2ba4746c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=36bccb96575468ec919301205d8daa2c, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20250923, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 10:09:36 np0005464214 podman[302721]: 2025-10-01 14:09:36.599404973 +0000 UTC m=+0.137509247 container health_status 583cde33fa111e3f81ff9a7adf51b7bee43cd123d54d60383b2d474b3b5066ad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_id=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250923, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3)
Oct  1 10:09:37 np0005464214 nova_compute[260022]: 2025-10-01 14:09:37.344 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 10:09:37 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2003: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:09:37 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e188 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 10:09:39 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2004: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:09:41 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2005: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:09:42 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e188 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 10:09:43 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2006: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:09:45 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2007: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:09:47 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2008: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:09:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 10:09:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 10:09:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 10:09:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 10:09:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 10:09:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 10:09:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] Optimize plan auto_2025-10-01_14:09:47
Oct  1 10:09:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  1 10:09:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] do_upmap
Oct  1 10:09:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] pools ['backups', 'vms', 'volumes', 'cephfs.cephfs.meta', '.mgr', 'default.rgw.control', 'cephfs.cephfs.data', 'images', 'default.rgw.log', '.rgw.root', 'default.rgw.meta']
Oct  1 10:09:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] prepared 0/10 changes
Oct  1 10:09:47 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e188 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 10:09:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  1 10:09:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  1 10:09:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  1 10:09:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  1 10:09:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  1 10:09:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  1 10:09:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  1 10:09:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  1 10:09:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  1 10:09:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  1 10:09:49 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2009: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:09:51 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2010: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:09:52 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e188 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 10:09:53 np0005464214 nova_compute[260022]: 2025-10-01 14:09:53.345 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._run_image_cache_manager_pass run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 10:09:53 np0005464214 nova_compute[260022]: 2025-10-01 14:09:53.346 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Acquiring lock "storage-registry-lock" by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 10:09:53 np0005464214 nova_compute[260022]: 2025-10-01 14:09:53.347 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "storage-registry-lock" acquired by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 10:09:53 np0005464214 nova_compute[260022]: 2025-10-01 14:09:53.347 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "storage-registry-lock" "released" by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 10:09:53 np0005464214 nova_compute[260022]: 2025-10-01 14:09:53.348 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Acquiring lock "storage-registry-lock" by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 10:09:53 np0005464214 nova_compute[260022]: 2025-10-01 14:09:53.348 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "storage-registry-lock" acquired by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 10:09:53 np0005464214 nova_compute[260022]: 2025-10-01 14:09:53.349 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "storage-registry-lock" "released" by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 10:09:53 np0005464214 nova_compute[260022]: 2025-10-01 14:09:53.365 2 DEBUG nova.virt.libvirt.imagecache [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Skipping verification, no base directory at /var/lib/nova/instances/_base _get_base /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:367#033[00m
Oct  1 10:09:53 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2011: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:09:55 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  1 10:09:55 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3643843576' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  1 10:09:55 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  1 10:09:55 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3643843576' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  1 10:09:55 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2012: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:09:55 np0005464214 ceph-osd[88455]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  1 10:09:55 np0005464214 ceph-osd[88455]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 3600.0 total, 600.0 interval#012Cumulative writes: 7978 writes, 29K keys, 7978 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s#012Cumulative WAL: 7978 writes, 1972 syncs, 4.05 writes per sync, written: 0.02 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 251 writes, 437 keys, 251 commit groups, 1.0 writes per commit group, ingest: 0.18 MB, 0.00 MB/s#012Interval WAL: 251 writes, 121 syncs, 2.07 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct  1 10:09:57 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2013: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:09:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] _maybe_adjust
Oct  1 10:09:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 10:09:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  1 10:09:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 10:09:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 10:09:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 10:09:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 10:09:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 10:09:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 10:09:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 10:09:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Oct  1 10:09:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 10:09:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct  1 10:09:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 10:09:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 10:09:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 10:09:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  1 10:09:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 10:09:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  1 10:09:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 10:09:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 10:09:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 10:09:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  1 10:09:57 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e188 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 10:09:59 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2014: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:10:00 np0005464214 ceph-osd[89484]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  1 10:10:00 np0005464214 ceph-osd[89484]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 3600.1 total, 600.0 interval#012Cumulative writes: 9426 writes, 34K keys, 9426 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s#012Cumulative WAL: 9426 writes, 2411 syncs, 3.91 writes per sync, written: 0.02 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 270 writes, 502 keys, 270 commit groups, 1.0 writes per commit group, ingest: 0.19 MB, 0.00 MB/s#012Interval WAL: 270 writes, 127 syncs, 2.13 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct  1 10:10:01 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2015: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:10:02 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e188 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 10:10:03 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2016: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:10:05 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2017: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:10:05 np0005464214 ceph-osd[90500]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  1 10:10:05 np0005464214 ceph-osd[90500]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 3600.1 total, 600.0 interval#012Cumulative writes: 8417 writes, 30K keys, 8417 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s#012Cumulative WAL: 8417 writes, 2145 syncs, 3.92 writes per sync, written: 0.02 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 249 writes, 454 keys, 249 commit groups, 1.0 writes per commit group, ingest: 0.20 MB, 0.00 MB/s#012Interval WAL: 249 writes, 117 syncs, 2.13 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct  1 10:10:07 np0005464214 podman[302827]: 2025-10-01 14:10:07.2950183 +0000 UTC m=+0.088874982 container health_status a1dc94033b3cdaf0c1939938a446bc6911aa0e2bf0fa0c22c3e618390e8d96e1 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20250923, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct  1 10:10:07 np0005464214 podman[302828]: 2025-10-01 14:10:07.295145984 +0000 UTC m=+0.084079460 container health_status c13c85560319b246b9406aed1b6b3015b2c424d3ffff37d0301b6634f5976b0d (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20250923, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, config_id=iscsid, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct  1 10:10:07 np0005464214 podman[302829]: 2025-10-01 14:10:07.316769321 +0000 UTC m=+0.092653393 container health_status dfa509645741d50db256c392f2c7e50ef8a1653f296eb853aefbe3d2ba4746c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20250923, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true, config_id=ovn_metadata_agent)
Oct  1 10:10:07 np0005464214 podman[302826]: 2025-10-01 14:10:07.330581259 +0000 UTC m=+0.127181019 container health_status 583cde33fa111e3f81ff9a7adf51b7bee43cd123d54d60383b2d474b3b5066ad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20250923, org.label-schema.license=GPLv2, container_name=ovn_controller, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=36bccb96575468ec919301205d8daa2c)
Oct  1 10:10:07 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2018: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail; 1.3 MiB/s rd, 0 op/s
Oct  1 10:10:07 np0005464214 ceph-mgr[75103]: [devicehealth INFO root] Check health
Oct  1 10:10:07 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e188 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 10:10:08 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  1 10:10:08 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 10:10:08 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  1 10:10:08 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 10:10:08 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  1 10:10:08 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  1 10:10:08 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  1 10:10:08 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  1 10:10:08 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  1 10:10:08 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 10:10:08 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev 839d9d56-2f06-475c-903d-b4cc0fe280b4 does not exist
Oct  1 10:10:08 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev 7eead415-c00a-466e-8282-fa052eb3f117 does not exist
Oct  1 10:10:08 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev 5018edee-6188-402b-bedc-02431d6e186e does not exist
Oct  1 10:10:08 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  1 10:10:08 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  1 10:10:08 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  1 10:10:08 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  1 10:10:08 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  1 10:10:08 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  1 10:10:09 np0005464214 podman[303271]: 2025-10-01 14:10:09.478380323 +0000 UTC m=+0.059435288 container create e4c026aabab10216ed0b3b924f0670e20c154b0eb45d0187e9e41bde50f9fa08 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_wozniak, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True)
Oct  1 10:10:09 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2019: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 85 B/s wr, 6 op/s
Oct  1 10:10:09 np0005464214 systemd[1]: Started libpod-conmon-e4c026aabab10216ed0b3b924f0670e20c154b0eb45d0187e9e41bde50f9fa08.scope.
Oct  1 10:10:09 np0005464214 systemd[1]: Started libcrun container.
Oct  1 10:10:09 np0005464214 podman[303271]: 2025-10-01 14:10:09.458273654 +0000 UTC m=+0.039328669 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 10:10:09 np0005464214 podman[303271]: 2025-10-01 14:10:09.568099171 +0000 UTC m=+0.149154236 container init e4c026aabab10216ed0b3b924f0670e20c154b0eb45d0187e9e41bde50f9fa08 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_wozniak, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True)
Oct  1 10:10:09 np0005464214 podman[303271]: 2025-10-01 14:10:09.579340528 +0000 UTC m=+0.160395523 container start e4c026aabab10216ed0b3b924f0670e20c154b0eb45d0187e9e41bde50f9fa08 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_wozniak, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 10:10:09 np0005464214 podman[303271]: 2025-10-01 14:10:09.583680545 +0000 UTC m=+0.164735540 container attach e4c026aabab10216ed0b3b924f0670e20c154b0eb45d0187e9e41bde50f9fa08 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_wozniak, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 10:10:09 np0005464214 kind_wozniak[303287]: 167 167
Oct  1 10:10:09 np0005464214 systemd[1]: libpod-e4c026aabab10216ed0b3b924f0670e20c154b0eb45d0187e9e41bde50f9fa08.scope: Deactivated successfully.
Oct  1 10:10:09 np0005464214 podman[303271]: 2025-10-01 14:10:09.590391939 +0000 UTC m=+0.171446964 container died e4c026aabab10216ed0b3b924f0670e20c154b0eb45d0187e9e41bde50f9fa08 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_wozniak, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Oct  1 10:10:09 np0005464214 systemd[1]: var-lib-containers-storage-overlay-0bc136d1bb90e4739a013513014a0a7b9fa84b274008dbbaf6edce3524b1260c-merged.mount: Deactivated successfully.
Oct  1 10:10:09 np0005464214 podman[303271]: 2025-10-01 14:10:09.638278389 +0000 UTC m=+0.219333384 container remove e4c026aabab10216ed0b3b924f0670e20c154b0eb45d0187e9e41bde50f9fa08 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_wozniak, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct  1 10:10:09 np0005464214 systemd[1]: libpod-conmon-e4c026aabab10216ed0b3b924f0670e20c154b0eb45d0187e9e41bde50f9fa08.scope: Deactivated successfully.
Oct  1 10:10:09 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 10:10:09 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 10:10:09 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  1 10:10:09 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 10:10:09 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  1 10:10:09 np0005464214 podman[303311]: 2025-10-01 14:10:09.892228951 +0000 UTC m=+0.076785149 container create a29a1cf3659d6af28f1b79784d2174b5a4cdf34cb8b8f68da5b0efd663640520 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_edison, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3)
Oct  1 10:10:09 np0005464214 systemd[1]: Started libpod-conmon-a29a1cf3659d6af28f1b79784d2174b5a4cdf34cb8b8f68da5b0efd663640520.scope.
Oct  1 10:10:09 np0005464214 podman[303311]: 2025-10-01 14:10:09.864699987 +0000 UTC m=+0.049256265 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 10:10:09 np0005464214 systemd[1]: Started libcrun container.
Oct  1 10:10:09 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8fb2add64e92e59d1fc73f3c1b9232f4c79ffbce412327f4bf3bb09727459cc6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 10:10:09 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8fb2add64e92e59d1fc73f3c1b9232f4c79ffbce412327f4bf3bb09727459cc6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 10:10:09 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8fb2add64e92e59d1fc73f3c1b9232f4c79ffbce412327f4bf3bb09727459cc6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 10:10:09 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8fb2add64e92e59d1fc73f3c1b9232f4c79ffbce412327f4bf3bb09727459cc6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 10:10:09 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8fb2add64e92e59d1fc73f3c1b9232f4c79ffbce412327f4bf3bb09727459cc6/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  1 10:10:10 np0005464214 podman[303311]: 2025-10-01 14:10:10.001994606 +0000 UTC m=+0.186550824 container init a29a1cf3659d6af28f1b79784d2174b5a4cdf34cb8b8f68da5b0efd663640520 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_edison, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 10:10:10 np0005464214 podman[303311]: 2025-10-01 14:10:10.016437784 +0000 UTC m=+0.200993982 container start a29a1cf3659d6af28f1b79784d2174b5a4cdf34cb8b8f68da5b0efd663640520 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_edison, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Oct  1 10:10:10 np0005464214 podman[303311]: 2025-10-01 14:10:10.020298746 +0000 UTC m=+0.204854944 container attach a29a1cf3659d6af28f1b79784d2174b5a4cdf34cb8b8f68da5b0efd663640520 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_edison, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Oct  1 10:10:11 np0005464214 interesting_edison[303328]: --> passed data devices: 0 physical, 3 LVM
Oct  1 10:10:11 np0005464214 interesting_edison[303328]: --> relative data size: 1.0
Oct  1 10:10:11 np0005464214 interesting_edison[303328]: --> All data devices are unavailable
Oct  1 10:10:11 np0005464214 systemd[1]: libpod-a29a1cf3659d6af28f1b79784d2174b5a4cdf34cb8b8f68da5b0efd663640520.scope: Deactivated successfully.
Oct  1 10:10:11 np0005464214 systemd[1]: libpod-a29a1cf3659d6af28f1b79784d2174b5a4cdf34cb8b8f68da5b0efd663640520.scope: Consumed 1.104s CPU time.
Oct  1 10:10:11 np0005464214 podman[303311]: 2025-10-01 14:10:11.164673326 +0000 UTC m=+1.349229534 container died a29a1cf3659d6af28f1b79784d2174b5a4cdf34cb8b8f68da5b0efd663640520 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_edison, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 10:10:11 np0005464214 systemd[1]: var-lib-containers-storage-overlay-8fb2add64e92e59d1fc73f3c1b9232f4c79ffbce412327f4bf3bb09727459cc6-merged.mount: Deactivated successfully.
Oct  1 10:10:11 np0005464214 podman[303311]: 2025-10-01 14:10:11.233963706 +0000 UTC m=+1.418519944 container remove a29a1cf3659d6af28f1b79784d2174b5a4cdf34cb8b8f68da5b0efd663640520 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_edison, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 10:10:11 np0005464214 systemd[1]: libpod-conmon-a29a1cf3659d6af28f1b79784d2174b5a4cdf34cb8b8f68da5b0efd663640520.scope: Deactivated successfully.
Oct  1 10:10:11 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2020: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 85 B/s wr, 6 op/s
Oct  1 10:10:12 np0005464214 podman[303511]: 2025-10-01 14:10:12.106650489 +0000 UTC m=+0.060229842 container create 7d9ca906fd50bc29903940cf2f6d6d28d649a28285ecfedab3d77e166e195dce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_maxwell, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 10:10:12 np0005464214 systemd[1]: Started libpod-conmon-7d9ca906fd50bc29903940cf2f6d6d28d649a28285ecfedab3d77e166e195dce.scope.
Oct  1 10:10:12 np0005464214 podman[303511]: 2025-10-01 14:10:12.077270657 +0000 UTC m=+0.030850080 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 10:10:12 np0005464214 systemd[1]: Started libcrun container.
Oct  1 10:10:12 np0005464214 podman[303511]: 2025-10-01 14:10:12.207321215 +0000 UTC m=+0.160900558 container init 7d9ca906fd50bc29903940cf2f6d6d28d649a28285ecfedab3d77e166e195dce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_maxwell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 10:10:12 np0005464214 podman[303511]: 2025-10-01 14:10:12.219980488 +0000 UTC m=+0.173559811 container start 7d9ca906fd50bc29903940cf2f6d6d28d649a28285ecfedab3d77e166e195dce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_maxwell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Oct  1 10:10:12 np0005464214 podman[303511]: 2025-10-01 14:10:12.224529292 +0000 UTC m=+0.178108725 container attach 7d9ca906fd50bc29903940cf2f6d6d28d649a28285ecfedab3d77e166e195dce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_maxwell, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 10:10:12 np0005464214 gallant_maxwell[303527]: 167 167
Oct  1 10:10:12 np0005464214 systemd[1]: libpod-7d9ca906fd50bc29903940cf2f6d6d28d649a28285ecfedab3d77e166e195dce.scope: Deactivated successfully.
Oct  1 10:10:12 np0005464214 conmon[303527]: conmon 7d9ca906fd50bc299039 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-7d9ca906fd50bc29903940cf2f6d6d28d649a28285ecfedab3d77e166e195dce.scope/container/memory.events
Oct  1 10:10:12 np0005464214 podman[303511]: 2025-10-01 14:10:12.228583771 +0000 UTC m=+0.182163144 container died 7d9ca906fd50bc29903940cf2f6d6d28d649a28285ecfedab3d77e166e195dce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_maxwell, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 10:10:12 np0005464214 systemd[1]: var-lib-containers-storage-overlay-d87d50817063913dc50296ee2b6c53b07e516183e743e7b13606ff01f4d7cae9-merged.mount: Deactivated successfully.
Oct  1 10:10:12 np0005464214 podman[303511]: 2025-10-01 14:10:12.288150501 +0000 UTC m=+0.241729824 container remove 7d9ca906fd50bc29903940cf2f6d6d28d649a28285ecfedab3d77e166e195dce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_maxwell, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct  1 10:10:12 np0005464214 systemd[1]: libpod-conmon-7d9ca906fd50bc29903940cf2f6d6d28d649a28285ecfedab3d77e166e195dce.scope: Deactivated successfully.
Oct  1 10:10:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 14:10:12.335 161890 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 10:10:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 14:10:12.337 161890 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 10:10:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 14:10:12.337 161890 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 10:10:12 np0005464214 podman[303553]: 2025-10-01 14:10:12.492568811 +0000 UTC m=+0.058182268 container create fb56f2109bdf3543ef91c7e6c1bc487eee6efa69077c78089f0c48af7999a320 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_sanderson, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 10:10:12 np0005464214 systemd[1]: Started libpod-conmon-fb56f2109bdf3543ef91c7e6c1bc487eee6efa69077c78089f0c48af7999a320.scope.
Oct  1 10:10:12 np0005464214 podman[303553]: 2025-10-01 14:10:12.473774254 +0000 UTC m=+0.039387731 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 10:10:12 np0005464214 systemd[1]: Started libcrun container.
Oct  1 10:10:12 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/243cdddc9671ada73569b425d299a30d1c8fce3ab9f0ffb3ef8d7e8da04a3fe2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 10:10:12 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/243cdddc9671ada73569b425d299a30d1c8fce3ab9f0ffb3ef8d7e8da04a3fe2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 10:10:12 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/243cdddc9671ada73569b425d299a30d1c8fce3ab9f0ffb3ef8d7e8da04a3fe2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 10:10:12 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/243cdddc9671ada73569b425d299a30d1c8fce3ab9f0ffb3ef8d7e8da04a3fe2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 10:10:12 np0005464214 podman[303553]: 2025-10-01 14:10:12.608843492 +0000 UTC m=+0.174456939 container init fb56f2109bdf3543ef91c7e6c1bc487eee6efa69077c78089f0c48af7999a320 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_sanderson, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Oct  1 10:10:12 np0005464214 podman[303553]: 2025-10-01 14:10:12.617308951 +0000 UTC m=+0.182922398 container start fb56f2109bdf3543ef91c7e6c1bc487eee6efa69077c78089f0c48af7999a320 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_sanderson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 10:10:12 np0005464214 podman[303553]: 2025-10-01 14:10:12.62043245 +0000 UTC m=+0.186045897 container attach fb56f2109bdf3543ef91c7e6c1bc487eee6efa69077c78089f0c48af7999a320 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_sanderson, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 10:10:12 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e188 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 10:10:13 np0005464214 adoring_sanderson[303570]: {
Oct  1 10:10:13 np0005464214 adoring_sanderson[303570]:    "0": [
Oct  1 10:10:13 np0005464214 adoring_sanderson[303570]:        {
Oct  1 10:10:13 np0005464214 adoring_sanderson[303570]:            "devices": [
Oct  1 10:10:13 np0005464214 adoring_sanderson[303570]:                "/dev/loop3"
Oct  1 10:10:13 np0005464214 adoring_sanderson[303570]:            ],
Oct  1 10:10:13 np0005464214 adoring_sanderson[303570]:            "lv_name": "ceph_lv0",
Oct  1 10:10:13 np0005464214 adoring_sanderson[303570]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  1 10:10:13 np0005464214 adoring_sanderson[303570]:            "lv_size": "21470642176",
Oct  1 10:10:13 np0005464214 adoring_sanderson[303570]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 10:10:13 np0005464214 adoring_sanderson[303570]:            "lv_uuid": "wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS",
Oct  1 10:10:13 np0005464214 adoring_sanderson[303570]:            "name": "ceph_lv0",
Oct  1 10:10:13 np0005464214 adoring_sanderson[303570]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  1 10:10:13 np0005464214 adoring_sanderson[303570]:            "tags": {
Oct  1 10:10:13 np0005464214 adoring_sanderson[303570]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  1 10:10:13 np0005464214 adoring_sanderson[303570]:                "ceph.block_uuid": "wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS",
Oct  1 10:10:13 np0005464214 adoring_sanderson[303570]:                "ceph.cephx_lockbox_secret": "",
Oct  1 10:10:13 np0005464214 adoring_sanderson[303570]:                "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 10:10:13 np0005464214 adoring_sanderson[303570]:                "ceph.cluster_name": "ceph",
Oct  1 10:10:13 np0005464214 adoring_sanderson[303570]:                "ceph.crush_device_class": "",
Oct  1 10:10:13 np0005464214 adoring_sanderson[303570]:                "ceph.encrypted": "0",
Oct  1 10:10:13 np0005464214 adoring_sanderson[303570]:                "ceph.osd_fsid": "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982",
Oct  1 10:10:13 np0005464214 adoring_sanderson[303570]:                "ceph.osd_id": "0",
Oct  1 10:10:13 np0005464214 adoring_sanderson[303570]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 10:10:13 np0005464214 adoring_sanderson[303570]:                "ceph.type": "block",
Oct  1 10:10:13 np0005464214 adoring_sanderson[303570]:                "ceph.vdo": "0"
Oct  1 10:10:13 np0005464214 adoring_sanderson[303570]:            },
Oct  1 10:10:13 np0005464214 adoring_sanderson[303570]:            "type": "block",
Oct  1 10:10:13 np0005464214 adoring_sanderson[303570]:            "vg_name": "ceph_vg0"
Oct  1 10:10:13 np0005464214 adoring_sanderson[303570]:        }
Oct  1 10:10:13 np0005464214 adoring_sanderson[303570]:    ],
Oct  1 10:10:13 np0005464214 adoring_sanderson[303570]:    "1": [
Oct  1 10:10:13 np0005464214 adoring_sanderson[303570]:        {
Oct  1 10:10:13 np0005464214 adoring_sanderson[303570]:            "devices": [
Oct  1 10:10:13 np0005464214 adoring_sanderson[303570]:                "/dev/loop4"
Oct  1 10:10:13 np0005464214 adoring_sanderson[303570]:            ],
Oct  1 10:10:13 np0005464214 adoring_sanderson[303570]:            "lv_name": "ceph_lv1",
Oct  1 10:10:13 np0005464214 adoring_sanderson[303570]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  1 10:10:13 np0005464214 adoring_sanderson[303570]:            "lv_size": "21470642176",
Oct  1 10:10:13 np0005464214 adoring_sanderson[303570]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f5852bc7-e830-489a-b8a9-42dfbbe71426,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 10:10:13 np0005464214 adoring_sanderson[303570]:            "lv_uuid": "BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY",
Oct  1 10:10:13 np0005464214 adoring_sanderson[303570]:            "name": "ceph_lv1",
Oct  1 10:10:13 np0005464214 adoring_sanderson[303570]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  1 10:10:13 np0005464214 adoring_sanderson[303570]:            "tags": {
Oct  1 10:10:13 np0005464214 adoring_sanderson[303570]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  1 10:10:13 np0005464214 adoring_sanderson[303570]:                "ceph.block_uuid": "BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY",
Oct  1 10:10:13 np0005464214 adoring_sanderson[303570]:                "ceph.cephx_lockbox_secret": "",
Oct  1 10:10:13 np0005464214 adoring_sanderson[303570]:                "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 10:10:13 np0005464214 adoring_sanderson[303570]:                "ceph.cluster_name": "ceph",
Oct  1 10:10:13 np0005464214 adoring_sanderson[303570]:                "ceph.crush_device_class": "",
Oct  1 10:10:13 np0005464214 adoring_sanderson[303570]:                "ceph.encrypted": "0",
Oct  1 10:10:13 np0005464214 adoring_sanderson[303570]:                "ceph.osd_fsid": "f5852bc7-e830-489a-b8a9-42dfbbe71426",
Oct  1 10:10:13 np0005464214 adoring_sanderson[303570]:                "ceph.osd_id": "1",
Oct  1 10:10:13 np0005464214 adoring_sanderson[303570]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 10:10:13 np0005464214 adoring_sanderson[303570]:                "ceph.type": "block",
Oct  1 10:10:13 np0005464214 adoring_sanderson[303570]:                "ceph.vdo": "0"
Oct  1 10:10:13 np0005464214 adoring_sanderson[303570]:            },
Oct  1 10:10:13 np0005464214 adoring_sanderson[303570]:            "type": "block",
Oct  1 10:10:13 np0005464214 adoring_sanderson[303570]:            "vg_name": "ceph_vg1"
Oct  1 10:10:13 np0005464214 adoring_sanderson[303570]:        }
Oct  1 10:10:13 np0005464214 adoring_sanderson[303570]:    ],
Oct  1 10:10:13 np0005464214 adoring_sanderson[303570]:    "2": [
Oct  1 10:10:13 np0005464214 adoring_sanderson[303570]:        {
Oct  1 10:10:13 np0005464214 adoring_sanderson[303570]:            "devices": [
Oct  1 10:10:13 np0005464214 adoring_sanderson[303570]:                "/dev/loop5"
Oct  1 10:10:13 np0005464214 adoring_sanderson[303570]:            ],
Oct  1 10:10:13 np0005464214 adoring_sanderson[303570]:            "lv_name": "ceph_lv2",
Oct  1 10:10:13 np0005464214 adoring_sanderson[303570]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  1 10:10:13 np0005464214 adoring_sanderson[303570]:            "lv_size": "21470642176",
Oct  1 10:10:13 np0005464214 adoring_sanderson[303570]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c4c937e2-a8a8-47c3-af37-fdedb6fff1f9,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 10:10:13 np0005464214 adoring_sanderson[303570]:            "lv_uuid": "mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK",
Oct  1 10:10:13 np0005464214 adoring_sanderson[303570]:            "name": "ceph_lv2",
Oct  1 10:10:13 np0005464214 adoring_sanderson[303570]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  1 10:10:13 np0005464214 adoring_sanderson[303570]:            "tags": {
Oct  1 10:10:13 np0005464214 adoring_sanderson[303570]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  1 10:10:13 np0005464214 adoring_sanderson[303570]:                "ceph.block_uuid": "mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK",
Oct  1 10:10:13 np0005464214 adoring_sanderson[303570]:                "ceph.cephx_lockbox_secret": "",
Oct  1 10:10:13 np0005464214 adoring_sanderson[303570]:                "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 10:10:13 np0005464214 adoring_sanderson[303570]:                "ceph.cluster_name": "ceph",
Oct  1 10:10:13 np0005464214 adoring_sanderson[303570]:                "ceph.crush_device_class": "",
Oct  1 10:10:13 np0005464214 adoring_sanderson[303570]:                "ceph.encrypted": "0",
Oct  1 10:10:13 np0005464214 adoring_sanderson[303570]:                "ceph.osd_fsid": "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9",
Oct  1 10:10:13 np0005464214 adoring_sanderson[303570]:                "ceph.osd_id": "2",
Oct  1 10:10:13 np0005464214 adoring_sanderson[303570]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 10:10:13 np0005464214 adoring_sanderson[303570]:                "ceph.type": "block",
Oct  1 10:10:13 np0005464214 adoring_sanderson[303570]:                "ceph.vdo": "0"
Oct  1 10:10:13 np0005464214 adoring_sanderson[303570]:            },
Oct  1 10:10:13 np0005464214 adoring_sanderson[303570]:            "type": "block",
Oct  1 10:10:13 np0005464214 adoring_sanderson[303570]:            "vg_name": "ceph_vg2"
Oct  1 10:10:13 np0005464214 adoring_sanderson[303570]:        }
Oct  1 10:10:13 np0005464214 adoring_sanderson[303570]:    ]
Oct  1 10:10:13 np0005464214 adoring_sanderson[303570]: }
Oct  1 10:10:13 np0005464214 systemd[1]: libpod-fb56f2109bdf3543ef91c7e6c1bc487eee6efa69077c78089f0c48af7999a320.scope: Deactivated successfully.
Oct  1 10:10:13 np0005464214 podman[303579]: 2025-10-01 14:10:13.395928299 +0000 UTC m=+0.023147976 container died fb56f2109bdf3543ef91c7e6c1bc487eee6efa69077c78089f0c48af7999a320 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_sanderson, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 10:10:13 np0005464214 systemd[1]: var-lib-containers-storage-overlay-243cdddc9671ada73569b425d299a30d1c8fce3ab9f0ffb3ef8d7e8da04a3fe2-merged.mount: Deactivated successfully.
Oct  1 10:10:13 np0005464214 podman[303579]: 2025-10-01 14:10:13.441852147 +0000 UTC m=+0.069071754 container remove fb56f2109bdf3543ef91c7e6c1bc487eee6efa69077c78089f0c48af7999a320 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_sanderson, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True)
Oct  1 10:10:13 np0005464214 systemd[1]: libpod-conmon-fb56f2109bdf3543ef91c7e6c1bc487eee6efa69077c78089f0c48af7999a320.scope: Deactivated successfully.
Oct  1 10:10:13 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2021: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 85 B/s wr, 6 op/s
Oct  1 10:10:14 np0005464214 podman[303734]: 2025-10-01 14:10:14.203094433 +0000 UTC m=+0.070775127 container create 8a11d5ca421b821e7d6c2cb308e0147343b5753bc64ae0b848d461e7cf3d3a0e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_wright, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 10:10:14 np0005464214 systemd[1]: Started libpod-conmon-8a11d5ca421b821e7d6c2cb308e0147343b5753bc64ae0b848d461e7cf3d3a0e.scope.
Oct  1 10:10:14 np0005464214 podman[303734]: 2025-10-01 14:10:14.176462218 +0000 UTC m=+0.044142982 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 10:10:14 np0005464214 systemd[1]: Started libcrun container.
Oct  1 10:10:14 np0005464214 podman[303734]: 2025-10-01 14:10:14.312238978 +0000 UTC m=+0.179919682 container init 8a11d5ca421b821e7d6c2cb308e0147343b5753bc64ae0b848d461e7cf3d3a0e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_wright, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 10:10:14 np0005464214 podman[303734]: 2025-10-01 14:10:14.323547907 +0000 UTC m=+0.191228601 container start 8a11d5ca421b821e7d6c2cb308e0147343b5753bc64ae0b848d461e7cf3d3a0e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_wright, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 10:10:14 np0005464214 podman[303734]: 2025-10-01 14:10:14.327844974 +0000 UTC m=+0.195525748 container attach 8a11d5ca421b821e7d6c2cb308e0147343b5753bc64ae0b848d461e7cf3d3a0e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_wright, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 10:10:14 np0005464214 zen_wright[303751]: 167 167
Oct  1 10:10:14 np0005464214 systemd[1]: libpod-8a11d5ca421b821e7d6c2cb308e0147343b5753bc64ae0b848d461e7cf3d3a0e.scope: Deactivated successfully.
Oct  1 10:10:14 np0005464214 podman[303734]: 2025-10-01 14:10:14.330415875 +0000 UTC m=+0.198096569 container died 8a11d5ca421b821e7d6c2cb308e0147343b5753bc64ae0b848d461e7cf3d3a0e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_wright, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 10:10:14 np0005464214 systemd[1]: var-lib-containers-storage-overlay-355a0af31372412ca18cc4f7a974ba9d02d92ce71b028a1130ea0f07ec8b7c75-merged.mount: Deactivated successfully.
Oct  1 10:10:14 np0005464214 podman[303734]: 2025-10-01 14:10:14.387425814 +0000 UTC m=+0.255106518 container remove 8a11d5ca421b821e7d6c2cb308e0147343b5753bc64ae0b848d461e7cf3d3a0e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_wright, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS)
Oct  1 10:10:14 np0005464214 systemd[1]: libpod-conmon-8a11d5ca421b821e7d6c2cb308e0147343b5753bc64ae0b848d461e7cf3d3a0e.scope: Deactivated successfully.
Oct  1 10:10:14 np0005464214 podman[303775]: 2025-10-01 14:10:14.645166447 +0000 UTC m=+0.066650467 container create 2a5e3245504c547d3435e080b1d791fe86166a9f82b936cc2c70fd3b43dda310 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_williams, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 10:10:14 np0005464214 systemd[1]: Started libpod-conmon-2a5e3245504c547d3435e080b1d791fe86166a9f82b936cc2c70fd3b43dda310.scope.
Oct  1 10:10:14 np0005464214 podman[303775]: 2025-10-01 14:10:14.62445558 +0000 UTC m=+0.045939580 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 10:10:14 np0005464214 systemd[1]: Started libcrun container.
Oct  1 10:10:14 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4ecaedfaa1e24bc31ceed85b565efa449fb3602e3862c790af737afbc920f7b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 10:10:14 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4ecaedfaa1e24bc31ceed85b565efa449fb3602e3862c790af737afbc920f7b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 10:10:14 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4ecaedfaa1e24bc31ceed85b565efa449fb3602e3862c790af737afbc920f7b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 10:10:14 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4ecaedfaa1e24bc31ceed85b565efa449fb3602e3862c790af737afbc920f7b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 10:10:14 np0005464214 podman[303775]: 2025-10-01 14:10:14.754210258 +0000 UTC m=+0.175694328 container init 2a5e3245504c547d3435e080b1d791fe86166a9f82b936cc2c70fd3b43dda310 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_williams, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 10:10:14 np0005464214 podman[303775]: 2025-10-01 14:10:14.768941266 +0000 UTC m=+0.190425276 container start 2a5e3245504c547d3435e080b1d791fe86166a9f82b936cc2c70fd3b43dda310 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_williams, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct  1 10:10:14 np0005464214 podman[303775]: 2025-10-01 14:10:14.772626033 +0000 UTC m=+0.194110073 container attach 2a5e3245504c547d3435e080b1d791fe86166a9f82b936cc2c70fd3b43dda310 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_williams, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct  1 10:10:15 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2022: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 85 B/s wr, 6 op/s
Oct  1 10:10:15 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e188 do_prune osdmap full prune enabled
Oct  1 10:10:15 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e189 e189: 3 total, 3 up, 3 in
Oct  1 10:10:15 np0005464214 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e189: 3 total, 3 up, 3 in
Oct  1 10:10:15 np0005464214 strange_williams[303792]: {
Oct  1 10:10:15 np0005464214 strange_williams[303792]:    "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982": {
Oct  1 10:10:15 np0005464214 strange_williams[303792]:        "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 10:10:15 np0005464214 strange_williams[303792]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  1 10:10:15 np0005464214 strange_williams[303792]:        "osd_id": 0,
Oct  1 10:10:15 np0005464214 strange_williams[303792]:        "osd_uuid": "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982",
Oct  1 10:10:15 np0005464214 strange_williams[303792]:        "type": "bluestore"
Oct  1 10:10:15 np0005464214 strange_williams[303792]:    },
Oct  1 10:10:15 np0005464214 strange_williams[303792]:    "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9": {
Oct  1 10:10:15 np0005464214 strange_williams[303792]:        "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 10:10:15 np0005464214 strange_williams[303792]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  1 10:10:15 np0005464214 strange_williams[303792]:        "osd_id": 2,
Oct  1 10:10:15 np0005464214 strange_williams[303792]:        "osd_uuid": "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9",
Oct  1 10:10:15 np0005464214 strange_williams[303792]:        "type": "bluestore"
Oct  1 10:10:15 np0005464214 strange_williams[303792]:    },
Oct  1 10:10:15 np0005464214 strange_williams[303792]:    "f5852bc7-e830-489a-b8a9-42dfbbe71426": {
Oct  1 10:10:15 np0005464214 strange_williams[303792]:        "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 10:10:15 np0005464214 strange_williams[303792]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  1 10:10:15 np0005464214 strange_williams[303792]:        "osd_id": 1,
Oct  1 10:10:15 np0005464214 strange_williams[303792]:        "osd_uuid": "f5852bc7-e830-489a-b8a9-42dfbbe71426",
Oct  1 10:10:15 np0005464214 strange_williams[303792]:        "type": "bluestore"
Oct  1 10:10:15 np0005464214 strange_williams[303792]:    }
Oct  1 10:10:15 np0005464214 strange_williams[303792]: }
Oct  1 10:10:15 np0005464214 systemd[1]: libpod-2a5e3245504c547d3435e080b1d791fe86166a9f82b936cc2c70fd3b43dda310.scope: Deactivated successfully.
Oct  1 10:10:15 np0005464214 podman[303775]: 2025-10-01 14:10:15.781838011 +0000 UTC m=+1.203322001 container died 2a5e3245504c547d3435e080b1d791fe86166a9f82b936cc2c70fd3b43dda310 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_williams, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 10:10:15 np0005464214 systemd[1]: libpod-2a5e3245504c547d3435e080b1d791fe86166a9f82b936cc2c70fd3b43dda310.scope: Consumed 1.013s CPU time.
Oct  1 10:10:15 np0005464214 systemd[1]: var-lib-containers-storage-overlay-b4ecaedfaa1e24bc31ceed85b565efa449fb3602e3862c790af737afbc920f7b-merged.mount: Deactivated successfully.
Oct  1 10:10:15 np0005464214 podman[303775]: 2025-10-01 14:10:15.85392257 +0000 UTC m=+1.275406590 container remove 2a5e3245504c547d3435e080b1d791fe86166a9f82b936cc2c70fd3b43dda310 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_williams, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 10:10:15 np0005464214 systemd[1]: libpod-conmon-2a5e3245504c547d3435e080b1d791fe86166a9f82b936cc2c70fd3b43dda310.scope: Deactivated successfully.
Oct  1 10:10:15 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  1 10:10:15 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 10:10:15 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  1 10:10:15 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 10:10:15 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev 11fc248e-18b8-4e5b-a0f4-e5d515385773 does not exist
Oct  1 10:10:15 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev 4abfbe65-b35d-44eb-90ab-1dc60847512a does not exist
Oct  1 10:10:16 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 10:10:16 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 10:10:17 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2024: 305 pgs: 305 active+clean; 25 MiB data, 241 MiB used, 60 GiB / 60 GiB avail; 461 KiB/s rd, 102 B/s wr, 8 op/s
Oct  1 10:10:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 10:10:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 10:10:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 10:10:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 10:10:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 10:10:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 10:10:17 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e189 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 10:10:18 np0005464214 nova_compute[260022]: 2025-10-01 14:10:18.364 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 10:10:18 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e189 do_prune osdmap full prune enabled
Oct  1 10:10:18 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e190 e190: 3 total, 3 up, 3 in
Oct  1 10:10:18 np0005464214 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e190: 3 total, 3 up, 3 in
Oct  1 10:10:19 np0005464214 nova_compute[260022]: 2025-10-01 14:10:19.345 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 10:10:19 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2026: 305 pgs: 305 active+clean; 21 MiB data, 236 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 1.7 KiB/s wr, 31 op/s
Oct  1 10:10:21 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2027: 305 pgs: 305 active+clean; 21 MiB data, 236 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 1.7 KiB/s wr, 31 op/s
Oct  1 10:10:22 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e190 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 10:10:22 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e190 do_prune osdmap full prune enabled
Oct  1 10:10:22 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e191 e191: 3 total, 3 up, 3 in
Oct  1 10:10:22 np0005464214 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e191: 3 total, 3 up, 3 in
Oct  1 10:10:23 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2029: 305 pgs: 305 active+clean; 456 KiB data, 216 MiB used, 60 GiB / 60 GiB avail; 46 KiB/s rd, 3.5 KiB/s wr, 63 op/s
Oct  1 10:10:25 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2030: 305 pgs: 305 active+clean; 456 KiB data, 216 MiB used, 60 GiB / 60 GiB avail; 45 KiB/s rd, 3.5 KiB/s wr, 61 op/s
Oct  1 10:10:27 np0005464214 nova_compute[260022]: 2025-10-01 14:10:27.358 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 10:10:27 np0005464214 nova_compute[260022]: 2025-10-01 14:10:27.384 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 10:10:27 np0005464214 nova_compute[260022]: 2025-10-01 14:10:27.385 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 10:10:27 np0005464214 nova_compute[260022]: 2025-10-01 14:10:27.385 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 10:10:27 np0005464214 nova_compute[260022]: 2025-10-01 14:10:27.386 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  1 10:10:27 np0005464214 nova_compute[260022]: 2025-10-01 14:10:27.386 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 10:10:27 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2031: 305 pgs: 305 active+clean; 456 KiB data, 216 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 1.6 KiB/s wr, 28 op/s
Oct  1 10:10:27 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  1 10:10:27 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/465569613' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  1 10:10:27 np0005464214 nova_compute[260022]: 2025-10-01 14:10:27.866 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.479s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 10:10:27 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e191 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 10:10:27 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e191 do_prune osdmap full prune enabled
Oct  1 10:10:27 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e192 e192: 3 total, 3 up, 3 in
Oct  1 10:10:27 np0005464214 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e192: 3 total, 3 up, 3 in
Oct  1 10:10:28 np0005464214 nova_compute[260022]: 2025-10-01 14:10:28.130 2 WARNING nova.virt.libvirt.driver [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  1 10:10:28 np0005464214 nova_compute[260022]: 2025-10-01 14:10:28.132 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4984MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  1 10:10:28 np0005464214 nova_compute[260022]: 2025-10-01 14:10:28.132 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 10:10:28 np0005464214 nova_compute[260022]: 2025-10-01 14:10:28.133 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 10:10:28 np0005464214 nova_compute[260022]: 2025-10-01 14:10:28.233 2 INFO nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Instance 3d91db2b-812b-47ea-a0f8-0384b4c68597 has allocations against this compute host but is not found in the database.#033[00m
Oct  1 10:10:28 np0005464214 nova_compute[260022]: 2025-10-01 14:10:28.266 2 INFO nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Instance 84424f30-80d3-425e-b60f-86809ad3c076 has allocations against this compute host but is not found in the database.#033[00m
Oct  1 10:10:28 np0005464214 nova_compute[260022]: 2025-10-01 14:10:28.267 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  1 10:10:28 np0005464214 nova_compute[260022]: 2025-10-01 14:10:28.267 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  1 10:10:28 np0005464214 nova_compute[260022]: 2025-10-01 14:10:28.511 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 10:10:28 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  1 10:10:28 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2401306405' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  1 10:10:28 np0005464214 nova_compute[260022]: 2025-10-01 14:10:28.961 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 10:10:28 np0005464214 nova_compute[260022]: 2025-10-01 14:10:28.969 2 DEBUG nova.compute.provider_tree [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Inventory has not changed in ProviderTree for provider: c1b9017d-7e6f-44ea-9ee2-bc19313d736f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  1 10:10:29 np0005464214 nova_compute[260022]: 2025-10-01 14:10:29.000 2 DEBUG nova.scheduler.client.report [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Inventory has not changed for provider c1b9017d-7e6f-44ea-9ee2-bc19313d736f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  1 10:10:29 np0005464214 nova_compute[260022]: 2025-10-01 14:10:29.003 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  1 10:10:29 np0005464214 nova_compute[260022]: 2025-10-01 14:10:29.003 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.870s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 10:10:29 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2033: 305 pgs: 305 active+clean; 456 KiB data, 216 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 1.7 KiB/s wr, 31 op/s
Oct  1 10:10:30 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e192 do_prune osdmap full prune enabled
Oct  1 10:10:30 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e193 e193: 3 total, 3 up, 3 in
Oct  1 10:10:30 np0005464214 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e193: 3 total, 3 up, 3 in
Oct  1 10:10:30 np0005464214 nova_compute[260022]: 2025-10-01 14:10:30.991 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 10:10:30 np0005464214 nova_compute[260022]: 2025-10-01 14:10:30.992 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 10:10:30 np0005464214 nova_compute[260022]: 2025-10-01 14:10:30.992 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  1 10:10:31 np0005464214 nova_compute[260022]: 2025-10-01 14:10:31.341 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 10:10:31 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2035: 305 pgs: 305 active+clean; 456 KiB data, 216 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:10:32 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e193 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 10:10:33 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2036: 305 pgs: 305 active+clean; 456 KiB data, 216 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s rd, 2.0 KiB/s wr, 18 op/s
Oct  1 10:10:34 np0005464214 nova_compute[260022]: 2025-10-01 14:10:34.344 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 10:10:35 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2037: 305 pgs: 305 active+clean; 456 KiB data, 216 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s rd, 2.0 KiB/s wr, 18 op/s
Oct  1 10:10:36 np0005464214 nova_compute[260022]: 2025-10-01 14:10:36.344 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 10:10:37 np0005464214 nova_compute[260022]: 2025-10-01 14:10:37.345 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 10:10:37 np0005464214 nova_compute[260022]: 2025-10-01 14:10:37.346 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  1 10:10:37 np0005464214 nova_compute[260022]: 2025-10-01 14:10:37.346 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  1 10:10:37 np0005464214 nova_compute[260022]: 2025-10-01 14:10:37.367 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  1 10:10:37 np0005464214 nova_compute[260022]: 2025-10-01 14:10:37.368 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 10:10:37 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2038: 305 pgs: 305 active+clean; 456 KiB data, 216 MiB used, 60 GiB / 60 GiB avail; 11 KiB/s rd, 1.7 KiB/s wr, 15 op/s
Oct  1 10:10:37 np0005464214 podman[303934]: 2025-10-01 14:10:37.553221305 +0000 UTC m=+0.089288166 container health_status dfa509645741d50db256c392f2c7e50ef8a1653f296eb853aefbe3d2ba4746c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, io.buildah.version=1.41.3, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250923)
Oct  1 10:10:37 np0005464214 podman[303932]: 2025-10-01 14:10:37.561321952 +0000 UTC m=+0.100605675 container health_status a1dc94033b3cdaf0c1939938a446bc6911aa0e2bf0fa0c22c3e618390e8d96e1 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20250923, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true, config_id=multipathd, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.license=GPLv2)
Oct  1 10:10:37 np0005464214 podman[303933]: 2025-10-01 14:10:37.571252987 +0000 UTC m=+0.116304353 container health_status c13c85560319b246b9406aed1b6b3015b2c424d3ffff37d0301b6634f5976b0d (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_id=iscsid, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20250923, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 10:10:37 np0005464214 podman[303931]: 2025-10-01 14:10:37.601421014 +0000 UTC m=+0.145724887 container health_status 583cde33fa111e3f81ff9a7adf51b7bee43cd123d54d60383b2d474b3b5066ad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20250923)
Oct  1 10:10:37 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e193 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 10:10:39 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2039: 305 pgs: 305 active+clean; 456 KiB data, 216 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 1.6 KiB/s wr, 14 op/s
Oct  1 10:10:41 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2040: 305 pgs: 305 active+clean; 456 KiB data, 216 MiB used, 60 GiB / 60 GiB avail; 9.5 KiB/s rd, 1.5 KiB/s wr, 13 op/s
Oct  1 10:10:42 np0005464214 nova_compute[260022]: 2025-10-01 14:10:42.344 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 10:10:42 np0005464214 nova_compute[260022]: 2025-10-01 14:10:42.345 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Oct  1 10:10:42 np0005464214 nova_compute[260022]: 2025-10-01 14:10:42.361 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Oct  1 10:10:42 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e193 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 10:10:43 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2041: 305 pgs: 305 active+clean; 456 KiB data, 216 MiB used, 60 GiB / 60 GiB avail; 8.7 KiB/s rd, 1.3 KiB/s wr, 12 op/s
Oct  1 10:10:45 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2042: 305 pgs: 305 active+clean; 456 KiB data, 216 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:10:47 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2043: 305 pgs: 305 active+clean; 456 KiB data, 216 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:10:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 10:10:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 10:10:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 10:10:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 10:10:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 10:10:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 10:10:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] Optimize plan auto_2025-10-01_14:10:47
Oct  1 10:10:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  1 10:10:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] do_upmap
Oct  1 10:10:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] pools ['cephfs.cephfs.data', 'vms', 'default.rgw.control', 'default.rgw.log', '.rgw.root', 'backups', '.mgr', 'default.rgw.meta', 'volumes', 'images', 'cephfs.cephfs.meta']
Oct  1 10:10:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] prepared 0/10 changes
Oct  1 10:10:47 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e193 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 10:10:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  1 10:10:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  1 10:10:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  1 10:10:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  1 10:10:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  1 10:10:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  1 10:10:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  1 10:10:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  1 10:10:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  1 10:10:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  1 10:10:49 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2044: 305 pgs: 305 active+clean; 456 KiB data, 216 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:10:51 np0005464214 nova_compute[260022]: 2025-10-01 14:10:51.345 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 10:10:51 np0005464214 nova_compute[260022]: 2025-10-01 14:10:51.346 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Oct  1 10:10:51 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2045: 305 pgs: 305 active+clean; 456 KiB data, 216 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:10:52 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e193 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 10:10:53 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2046: 305 pgs: 305 active+clean; 456 KiB data, 216 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:10:55 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  1 10:10:55 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2132725017' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  1 10:10:55 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  1 10:10:55 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2132725017' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  1 10:10:55 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2047: 305 pgs: 305 active+clean; 456 KiB data, 216 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:10:57 np0005464214 nova_compute[260022]: 2025-10-01 14:10:57.360 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 10:10:57 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2048: 305 pgs: 305 active+clean; 456 KiB data, 216 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:10:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] _maybe_adjust
Oct  1 10:10:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 10:10:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  1 10:10:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 10:10:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 10:10:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 10:10:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 10:10:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 10:10:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 10:10:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 10:10:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 3.1795353910268934e-07 of space, bias 1.0, pg target 9.53860617308068e-05 quantized to 32 (current 32)
Oct  1 10:10:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 10:10:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct  1 10:10:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 10:10:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 10:10:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 10:10:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  1 10:10:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 10:10:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  1 10:10:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 10:10:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 10:10:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 10:10:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  1 10:10:57 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e193 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 10:10:59 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2049: 305 pgs: 305 active+clean; 456 KiB data, 216 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:11:01 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2050: 305 pgs: 305 active+clean; 456 KiB data, 216 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:11:02 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e193 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 10:11:03 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2051: 305 pgs: 305 active+clean; 456 KiB data, 216 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:11:05 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2052: 305 pgs: 305 active+clean; 456 KiB data, 216 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:11:07 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2053: 305 pgs: 305 active+clean; 456 KiB data, 216 MiB used, 60 GiB / 60 GiB avail; 7.5 KiB/s rd, 0 B/s wr, 12 op/s
Oct  1 10:11:07 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e193 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 10:11:08 np0005464214 podman[304017]: 2025-10-01 14:11:08.561065387 +0000 UTC m=+0.100975477 container health_status a1dc94033b3cdaf0c1939938a446bc6911aa0e2bf0fa0c22c3e618390e8d96e1 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=36bccb96575468ec919301205d8daa2c, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20250923)
Oct  1 10:11:08 np0005464214 podman[304019]: 2025-10-01 14:11:08.567518831 +0000 UTC m=+0.092489967 container health_status dfa509645741d50db256c392f2c7e50ef8a1653f296eb853aefbe3d2ba4746c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20250923, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_id=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Oct  1 10:11:08 np0005464214 podman[304018]: 2025-10-01 14:11:08.575347881 +0000 UTC m=+0.110887452 container health_status c13c85560319b246b9406aed1b6b3015b2c424d3ffff37d0301b6634f5976b0d (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250923, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team)
Oct  1 10:11:08 np0005464214 podman[304016]: 2025-10-01 14:11:08.59235373 +0000 UTC m=+0.137745184 container health_status 583cde33fa111e3f81ff9a7adf51b7bee43cd123d54d60383b2d474b3b5066ad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20250923, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_controller)
Oct  1 10:11:09 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2054: 305 pgs: 305 active+clean; 456 KiB data, 216 MiB used, 60 GiB / 60 GiB avail; 34 KiB/s rd, 0 B/s wr, 56 op/s
Oct  1 10:11:11 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2055: 305 pgs: 305 active+clean; 456 KiB data, 216 MiB used, 60 GiB / 60 GiB avail; 34 KiB/s rd, 0 B/s wr, 56 op/s
Oct  1 10:11:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 14:11:12.337 161890 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 10:11:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 14:11:12.337 161890 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 10:11:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 14:11:12.337 161890 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 10:11:12 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e193 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 10:11:13 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2056: 305 pgs: 305 active+clean; 456 KiB data, 216 MiB used, 60 GiB / 60 GiB avail; 44 KiB/s rd, 0 B/s wr, 72 op/s
Oct  1 10:11:15 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2057: 305 pgs: 305 active+clean; 456 KiB data, 216 MiB used, 60 GiB / 60 GiB avail; 44 KiB/s rd, 0 B/s wr, 72 op/s
Oct  1 10:11:17 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2058: 305 pgs: 305 active+clean; 456 KiB data, 216 MiB used, 60 GiB / 60 GiB avail; 44 KiB/s rd, 0 B/s wr, 72 op/s
Oct  1 10:11:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 10:11:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 10:11:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 10:11:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 10:11:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 10:11:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 10:11:17 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e193 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 10:11:17 np0005464214 podman[304372]: 2025-10-01 14:11:17.978523122 +0000 UTC m=+0.067133373 container create 95619fed2f00ac0302b526b2e762e5c37d3ba46968514294d8e80166b74aa6bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_brown, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct  1 10:11:18 np0005464214 systemd[1]: Started libpod-conmon-95619fed2f00ac0302b526b2e762e5c37d3ba46968514294d8e80166b74aa6bb.scope.
Oct  1 10:11:18 np0005464214 podman[304372]: 2025-10-01 14:11:17.952193225 +0000 UTC m=+0.040803526 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 10:11:18 np0005464214 systemd[1]: Started libcrun container.
Oct  1 10:11:18 np0005464214 podman[304372]: 2025-10-01 14:11:18.079237259 +0000 UTC m=+0.167847540 container init 95619fed2f00ac0302b526b2e762e5c37d3ba46968514294d8e80166b74aa6bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_brown, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 10:11:18 np0005464214 podman[304372]: 2025-10-01 14:11:18.087504581 +0000 UTC m=+0.176114802 container start 95619fed2f00ac0302b526b2e762e5c37d3ba46968514294d8e80166b74aa6bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_brown, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct  1 10:11:18 np0005464214 podman[304372]: 2025-10-01 14:11:18.090722714 +0000 UTC m=+0.179332965 container attach 95619fed2f00ac0302b526b2e762e5c37d3ba46968514294d8e80166b74aa6bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_brown, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 10:11:18 np0005464214 nostalgic_brown[304388]: 167 167
Oct  1 10:11:18 np0005464214 systemd[1]: libpod-95619fed2f00ac0302b526b2e762e5c37d3ba46968514294d8e80166b74aa6bb.scope: Deactivated successfully.
Oct  1 10:11:18 np0005464214 podman[304372]: 2025-10-01 14:11:18.092717707 +0000 UTC m=+0.181327958 container died 95619fed2f00ac0302b526b2e762e5c37d3ba46968514294d8e80166b74aa6bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_brown, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True)
Oct  1 10:11:18 np0005464214 systemd[1]: var-lib-containers-storage-overlay-98f15441bdc1ae3e250b0ce5aa56404dfff5becac8024496d0dd08cda7605e95-merged.mount: Deactivated successfully.
Oct  1 10:11:18 np0005464214 podman[304372]: 2025-10-01 14:11:18.138962354 +0000 UTC m=+0.227572605 container remove 95619fed2f00ac0302b526b2e762e5c37d3ba46968514294d8e80166b74aa6bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_brown, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Oct  1 10:11:18 np0005464214 systemd[1]: libpod-conmon-95619fed2f00ac0302b526b2e762e5c37d3ba46968514294d8e80166b74aa6bb.scope: Deactivated successfully.
Oct  1 10:11:18 np0005464214 podman[304412]: 2025-10-01 14:11:18.343913291 +0000 UTC m=+0.057252658 container create 69ff6403601a70600eadd0a8a7834b3571faf0158af8db87e60a5f152bf03f20 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_benz, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Oct  1 10:11:18 np0005464214 systemd[1]: Started libpod-conmon-69ff6403601a70600eadd0a8a7834b3571faf0158af8db87e60a5f152bf03f20.scope.
Oct  1 10:11:18 np0005464214 podman[304412]: 2025-10-01 14:11:18.323025358 +0000 UTC m=+0.036364705 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 10:11:18 np0005464214 systemd[1]: Started libcrun container.
Oct  1 10:11:18 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/19d781eb3d03121d08d8be265f663442ef1eae2cb54423790e55f8ae5f2becbd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 10:11:18 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/19d781eb3d03121d08d8be265f663442ef1eae2cb54423790e55f8ae5f2becbd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 10:11:18 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/19d781eb3d03121d08d8be265f663442ef1eae2cb54423790e55f8ae5f2becbd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 10:11:18 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/19d781eb3d03121d08d8be265f663442ef1eae2cb54423790e55f8ae5f2becbd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 10:11:18 np0005464214 podman[304412]: 2025-10-01 14:11:18.438663098 +0000 UTC m=+0.152002505 container init 69ff6403601a70600eadd0a8a7834b3571faf0158af8db87e60a5f152bf03f20 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_benz, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 10:11:18 np0005464214 podman[304412]: 2025-10-01 14:11:18.454159111 +0000 UTC m=+0.167498468 container start 69ff6403601a70600eadd0a8a7834b3571faf0158af8db87e60a5f152bf03f20 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_benz, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 10:11:18 np0005464214 podman[304412]: 2025-10-01 14:11:18.458611152 +0000 UTC m=+0.171950469 container attach 69ff6403601a70600eadd0a8a7834b3571faf0158af8db87e60a5f152bf03f20 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_benz, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 10:11:19 np0005464214 nova_compute[260022]: 2025-10-01 14:11:19.344 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 10:11:19 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2059: 305 pgs: 305 active+clean; 456 KiB data, 216 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 60 op/s
Oct  1 10:11:20 np0005464214 vigilant_benz[304428]: [
Oct  1 10:11:20 np0005464214 vigilant_benz[304428]:    {
Oct  1 10:11:20 np0005464214 vigilant_benz[304428]:        "available": false,
Oct  1 10:11:20 np0005464214 vigilant_benz[304428]:        "ceph_device": false,
Oct  1 10:11:20 np0005464214 vigilant_benz[304428]:        "device_id": "QEMU_DVD-ROM_QM00001",
Oct  1 10:11:20 np0005464214 vigilant_benz[304428]:        "lsm_data": {},
Oct  1 10:11:20 np0005464214 vigilant_benz[304428]:        "lvs": [],
Oct  1 10:11:20 np0005464214 vigilant_benz[304428]:        "path": "/dev/sr0",
Oct  1 10:11:20 np0005464214 vigilant_benz[304428]:        "rejected_reasons": [
Oct  1 10:11:20 np0005464214 vigilant_benz[304428]:            "Has a FileSystem",
Oct  1 10:11:20 np0005464214 vigilant_benz[304428]:            "Insufficient space (<5GB)"
Oct  1 10:11:20 np0005464214 vigilant_benz[304428]:        ],
Oct  1 10:11:20 np0005464214 vigilant_benz[304428]:        "sys_api": {
Oct  1 10:11:20 np0005464214 vigilant_benz[304428]:            "actuators": null,
Oct  1 10:11:20 np0005464214 vigilant_benz[304428]:            "device_nodes": "sr0",
Oct  1 10:11:20 np0005464214 vigilant_benz[304428]:            "devname": "sr0",
Oct  1 10:11:20 np0005464214 vigilant_benz[304428]:            "human_readable_size": "482.00 KB",
Oct  1 10:11:20 np0005464214 vigilant_benz[304428]:            "id_bus": "ata",
Oct  1 10:11:20 np0005464214 vigilant_benz[304428]:            "model": "QEMU DVD-ROM",
Oct  1 10:11:20 np0005464214 vigilant_benz[304428]:            "nr_requests": "2",
Oct  1 10:11:20 np0005464214 vigilant_benz[304428]:            "parent": "/dev/sr0",
Oct  1 10:11:20 np0005464214 vigilant_benz[304428]:            "partitions": {},
Oct  1 10:11:20 np0005464214 vigilant_benz[304428]:            "path": "/dev/sr0",
Oct  1 10:11:20 np0005464214 vigilant_benz[304428]:            "removable": "1",
Oct  1 10:11:20 np0005464214 vigilant_benz[304428]:            "rev": "2.5+",
Oct  1 10:11:20 np0005464214 vigilant_benz[304428]:            "ro": "0",
Oct  1 10:11:20 np0005464214 vigilant_benz[304428]:            "rotational": "0",
Oct  1 10:11:20 np0005464214 vigilant_benz[304428]:            "sas_address": "",
Oct  1 10:11:20 np0005464214 vigilant_benz[304428]:            "sas_device_handle": "",
Oct  1 10:11:20 np0005464214 vigilant_benz[304428]:            "scheduler_mode": "mq-deadline",
Oct  1 10:11:20 np0005464214 vigilant_benz[304428]:            "sectors": 0,
Oct  1 10:11:20 np0005464214 vigilant_benz[304428]:            "sectorsize": "2048",
Oct  1 10:11:20 np0005464214 vigilant_benz[304428]:            "size": 493568.0,
Oct  1 10:11:20 np0005464214 vigilant_benz[304428]:            "support_discard": "2048",
Oct  1 10:11:20 np0005464214 vigilant_benz[304428]:            "type": "disk",
Oct  1 10:11:20 np0005464214 vigilant_benz[304428]:            "vendor": "QEMU"
Oct  1 10:11:20 np0005464214 vigilant_benz[304428]:        }
Oct  1 10:11:20 np0005464214 vigilant_benz[304428]:    }
Oct  1 10:11:20 np0005464214 vigilant_benz[304428]: ]
Oct  1 10:11:20 np0005464214 systemd[1]: libpod-69ff6403601a70600eadd0a8a7834b3571faf0158af8db87e60a5f152bf03f20.scope: Deactivated successfully.
Oct  1 10:11:20 np0005464214 systemd[1]: libpod-69ff6403601a70600eadd0a8a7834b3571faf0158af8db87e60a5f152bf03f20.scope: Consumed 1.746s CPU time.
Oct  1 10:11:20 np0005464214 podman[304412]: 2025-10-01 14:11:20.115880853 +0000 UTC m=+1.829220210 container died 69ff6403601a70600eadd0a8a7834b3571faf0158af8db87e60a5f152bf03f20 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_benz, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 10:11:20 np0005464214 systemd[1]: var-lib-containers-storage-overlay-19d781eb3d03121d08d8be265f663442ef1eae2cb54423790e55f8ae5f2becbd-merged.mount: Deactivated successfully.
Oct  1 10:11:20 np0005464214 podman[304412]: 2025-10-01 14:11:20.186682791 +0000 UTC m=+1.900022138 container remove 69ff6403601a70600eadd0a8a7834b3571faf0158af8db87e60a5f152bf03f20 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_benz, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True)
Oct  1 10:11:20 np0005464214 systemd[1]: libpod-conmon-69ff6403601a70600eadd0a8a7834b3571faf0158af8db87e60a5f152bf03f20.scope: Deactivated successfully.
Oct  1 10:11:20 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  1 10:11:20 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 10:11:20 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  1 10:11:20 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 10:11:20 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  1 10:11:20 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  1 10:11:20 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  1 10:11:20 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  1 10:11:20 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  1 10:11:20 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 10:11:20 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev 30316e91-8959-4118-bf12-031c2b442ee7 does not exist
Oct  1 10:11:20 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev d8f3a92a-b7df-4812-8066-bb97db634bbe does not exist
Oct  1 10:11:20 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev 74161024-75cb-45bd-b96a-be7e8afb5228 does not exist
Oct  1 10:11:20 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  1 10:11:20 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  1 10:11:20 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  1 10:11:20 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  1 10:11:20 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  1 10:11:20 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  1 10:11:21 np0005464214 podman[306792]: 2025-10-01 14:11:21.112883184 +0000 UTC m=+0.061535614 container create 8e335a495be0fd096977ca56b9bcca0c4ddbc88b9980e009da03a756ad1bccf6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_ptolemy, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct  1 10:11:21 np0005464214 systemd[1]: Started libpod-conmon-8e335a495be0fd096977ca56b9bcca0c4ddbc88b9980e009da03a756ad1bccf6.scope.
Oct  1 10:11:21 np0005464214 podman[306792]: 2025-10-01 14:11:21.089922785 +0000 UTC m=+0.038575255 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 10:11:21 np0005464214 systemd[1]: Started libcrun container.
Oct  1 10:11:21 np0005464214 podman[306792]: 2025-10-01 14:11:21.214339875 +0000 UTC m=+0.162992335 container init 8e335a495be0fd096977ca56b9bcca0c4ddbc88b9980e009da03a756ad1bccf6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_ptolemy, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 10:11:21 np0005464214 podman[306792]: 2025-10-01 14:11:21.229425334 +0000 UTC m=+0.178077724 container start 8e335a495be0fd096977ca56b9bcca0c4ddbc88b9980e009da03a756ad1bccf6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_ptolemy, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Oct  1 10:11:21 np0005464214 podman[306792]: 2025-10-01 14:11:21.234378861 +0000 UTC m=+0.183031301 container attach 8e335a495be0fd096977ca56b9bcca0c4ddbc88b9980e009da03a756ad1bccf6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_ptolemy, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 10:11:21 np0005464214 amazing_ptolemy[306808]: 167 167
Oct  1 10:11:21 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 10:11:21 np0005464214 systemd[1]: libpod-8e335a495be0fd096977ca56b9bcca0c4ddbc88b9980e009da03a756ad1bccf6.scope: Deactivated successfully.
Oct  1 10:11:21 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 10:11:21 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  1 10:11:21 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 10:11:21 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  1 10:11:21 np0005464214 podman[306792]: 2025-10-01 14:11:21.239569166 +0000 UTC m=+0.188221556 container died 8e335a495be0fd096977ca56b9bcca0c4ddbc88b9980e009da03a756ad1bccf6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_ptolemy, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 10:11:21 np0005464214 systemd[1]: var-lib-containers-storage-overlay-e902bce89963ff0edb57332d4a27675cb1cc564ae5a9aa9adcaf0e7eb29bdbf3-merged.mount: Deactivated successfully.
Oct  1 10:11:21 np0005464214 podman[306792]: 2025-10-01 14:11:21.290994459 +0000 UTC m=+0.239646849 container remove 8e335a495be0fd096977ca56b9bcca0c4ddbc88b9980e009da03a756ad1bccf6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_ptolemy, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct  1 10:11:21 np0005464214 systemd[1]: libpod-conmon-8e335a495be0fd096977ca56b9bcca0c4ddbc88b9980e009da03a756ad1bccf6.scope: Deactivated successfully.
Oct  1 10:11:21 np0005464214 podman[306831]: 2025-10-01 14:11:21.509688791 +0000 UTC m=+0.066329567 container create f02045788f4dc56aa4fb7b3ed2a0fc8f562f34bb844bc87c88758a17d694803a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_northcutt, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 10:11:21 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2060: 305 pgs: 305 active+clean; 456 KiB data, 216 MiB used, 60 GiB / 60 GiB avail; 9.6 KiB/s rd, 0 B/s wr, 15 op/s
Oct  1 10:11:21 np0005464214 podman[306831]: 2025-10-01 14:11:21.477537411 +0000 UTC m=+0.034178237 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 10:11:21 np0005464214 systemd[1]: Started libpod-conmon-f02045788f4dc56aa4fb7b3ed2a0fc8f562f34bb844bc87c88758a17d694803a.scope.
Oct  1 10:11:21 np0005464214 systemd[1]: Started libcrun container.
Oct  1 10:11:21 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf59354e2db5676b0c23c52bded9a74912e6fd6a294fce3d3cc2da530ce7ef87/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 10:11:21 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf59354e2db5676b0c23c52bded9a74912e6fd6a294fce3d3cc2da530ce7ef87/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 10:11:21 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf59354e2db5676b0c23c52bded9a74912e6fd6a294fce3d3cc2da530ce7ef87/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 10:11:21 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf59354e2db5676b0c23c52bded9a74912e6fd6a294fce3d3cc2da530ce7ef87/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 10:11:21 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf59354e2db5676b0c23c52bded9a74912e6fd6a294fce3d3cc2da530ce7ef87/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  1 10:11:21 np0005464214 podman[306831]: 2025-10-01 14:11:21.621931724 +0000 UTC m=+0.178572490 container init f02045788f4dc56aa4fb7b3ed2a0fc8f562f34bb844bc87c88758a17d694803a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_northcutt, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 10:11:21 np0005464214 podman[306831]: 2025-10-01 14:11:21.639498092 +0000 UTC m=+0.196138868 container start f02045788f4dc56aa4fb7b3ed2a0fc8f562f34bb844bc87c88758a17d694803a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_northcutt, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 10:11:21 np0005464214 podman[306831]: 2025-10-01 14:11:21.643886472 +0000 UTC m=+0.200527218 container attach f02045788f4dc56aa4fb7b3ed2a0fc8f562f34bb844bc87c88758a17d694803a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_northcutt, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Oct  1 10:11:22 np0005464214 jolly_northcutt[306847]: --> passed data devices: 0 physical, 3 LVM
Oct  1 10:11:22 np0005464214 jolly_northcutt[306847]: --> relative data size: 1.0
Oct  1 10:11:22 np0005464214 jolly_northcutt[306847]: --> All data devices are unavailable
Oct  1 10:11:22 np0005464214 systemd[1]: libpod-f02045788f4dc56aa4fb7b3ed2a0fc8f562f34bb844bc87c88758a17d694803a.scope: Deactivated successfully.
Oct  1 10:11:22 np0005464214 podman[306831]: 2025-10-01 14:11:22.803662969 +0000 UTC m=+1.360303735 container died f02045788f4dc56aa4fb7b3ed2a0fc8f562f34bb844bc87c88758a17d694803a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_northcutt, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 10:11:22 np0005464214 systemd[1]: libpod-f02045788f4dc56aa4fb7b3ed2a0fc8f562f34bb844bc87c88758a17d694803a.scope: Consumed 1.119s CPU time.
Oct  1 10:11:22 np0005464214 systemd[1]: var-lib-containers-storage-overlay-bf59354e2db5676b0c23c52bded9a74912e6fd6a294fce3d3cc2da530ce7ef87-merged.mount: Deactivated successfully.
Oct  1 10:11:22 np0005464214 podman[306831]: 2025-10-01 14:11:22.882471911 +0000 UTC m=+1.439112687 container remove f02045788f4dc56aa4fb7b3ed2a0fc8f562f34bb844bc87c88758a17d694803a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_northcutt, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 10:11:22 np0005464214 systemd[1]: libpod-conmon-f02045788f4dc56aa4fb7b3ed2a0fc8f562f34bb844bc87c88758a17d694803a.scope: Deactivated successfully.
Oct  1 10:11:22 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e193 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 10:11:23 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2061: 305 pgs: 305 active+clean; 456 KiB data, 216 MiB used, 60 GiB / 60 GiB avail; 9.6 KiB/s rd, 0 B/s wr, 15 op/s
Oct  1 10:11:23 np0005464214 podman[307029]: 2025-10-01 14:11:23.673968208 +0000 UTC m=+0.063968972 container create f25982592bac3794cf7c14583217bf6191baad340bd9539229ceae6ba64bfc10 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_mirzakhani, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Oct  1 10:11:23 np0005464214 systemd[1]: Started libpod-conmon-f25982592bac3794cf7c14583217bf6191baad340bd9539229ceae6ba64bfc10.scope.
Oct  1 10:11:23 np0005464214 podman[307029]: 2025-10-01 14:11:23.644434491 +0000 UTC m=+0.034435345 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 10:11:23 np0005464214 systemd[1]: Started libcrun container.
Oct  1 10:11:23 np0005464214 podman[307029]: 2025-10-01 14:11:23.772118314 +0000 UTC m=+0.162119168 container init f25982592bac3794cf7c14583217bf6191baad340bd9539229ceae6ba64bfc10 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_mirzakhani, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 10:11:23 np0005464214 podman[307029]: 2025-10-01 14:11:23.784196397 +0000 UTC m=+0.174197191 container start f25982592bac3794cf7c14583217bf6191baad340bd9539229ceae6ba64bfc10 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_mirzakhani, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Oct  1 10:11:23 np0005464214 podman[307029]: 2025-10-01 14:11:23.78837243 +0000 UTC m=+0.178373224 container attach f25982592bac3794cf7c14583217bf6191baad340bd9539229ceae6ba64bfc10 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_mirzakhani, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 10:11:23 np0005464214 awesome_mirzakhani[307045]: 167 167
Oct  1 10:11:23 np0005464214 systemd[1]: libpod-f25982592bac3794cf7c14583217bf6191baad340bd9539229ceae6ba64bfc10.scope: Deactivated successfully.
Oct  1 10:11:23 np0005464214 podman[307029]: 2025-10-01 14:11:23.792725348 +0000 UTC m=+0.182726172 container died f25982592bac3794cf7c14583217bf6191baad340bd9539229ceae6ba64bfc10 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_mirzakhani, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Oct  1 10:11:23 np0005464214 systemd[1]: var-lib-containers-storage-overlay-0ddb216a67587174f8d89092e4a6d51fff936d9ea1cb001ef8feadab0b6811ca-merged.mount: Deactivated successfully.
Oct  1 10:11:23 np0005464214 podman[307029]: 2025-10-01 14:11:23.842761436 +0000 UTC m=+0.232762230 container remove f25982592bac3794cf7c14583217bf6191baad340bd9539229ceae6ba64bfc10 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_mirzakhani, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 10:11:23 np0005464214 systemd[1]: libpod-conmon-f25982592bac3794cf7c14583217bf6191baad340bd9539229ceae6ba64bfc10.scope: Deactivated successfully.
Oct  1 10:11:24 np0005464214 podman[307069]: 2025-10-01 14:11:24.081313 +0000 UTC m=+0.065130269 container create 5ee1967806f779a68a5e6745b74e365b3d75d937a525d8cd6f8013c5308a5703 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_hofstadter, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 10:11:24 np0005464214 systemd[1]: Started libpod-conmon-5ee1967806f779a68a5e6745b74e365b3d75d937a525d8cd6f8013c5308a5703.scope.
Oct  1 10:11:24 np0005464214 podman[307069]: 2025-10-01 14:11:24.056085829 +0000 UTC m=+0.039903158 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 10:11:24 np0005464214 systemd[1]: Started libcrun container.
Oct  1 10:11:24 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f4c69477ef24ee5dd74512609e090652a8317dd32e3e5853cfe36fec34ece9ec/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 10:11:24 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f4c69477ef24ee5dd74512609e090652a8317dd32e3e5853cfe36fec34ece9ec/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 10:11:24 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f4c69477ef24ee5dd74512609e090652a8317dd32e3e5853cfe36fec34ece9ec/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 10:11:24 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f4c69477ef24ee5dd74512609e090652a8317dd32e3e5853cfe36fec34ece9ec/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 10:11:24 np0005464214 podman[307069]: 2025-10-01 14:11:24.183787372 +0000 UTC m=+0.167604721 container init 5ee1967806f779a68a5e6745b74e365b3d75d937a525d8cd6f8013c5308a5703 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_hofstadter, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 10:11:24 np0005464214 podman[307069]: 2025-10-01 14:11:24.197822338 +0000 UTC m=+0.181639617 container start 5ee1967806f779a68a5e6745b74e365b3d75d937a525d8cd6f8013c5308a5703 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_hofstadter, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 10:11:24 np0005464214 podman[307069]: 2025-10-01 14:11:24.201912308 +0000 UTC m=+0.185729647 container attach 5ee1967806f779a68a5e6745b74e365b3d75d937a525d8cd6f8013c5308a5703 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_hofstadter, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Oct  1 10:11:24 np0005464214 busy_hofstadter[307085]: {
Oct  1 10:11:24 np0005464214 busy_hofstadter[307085]:    "0": [
Oct  1 10:11:24 np0005464214 busy_hofstadter[307085]:        {
Oct  1 10:11:24 np0005464214 busy_hofstadter[307085]:            "devices": [
Oct  1 10:11:24 np0005464214 busy_hofstadter[307085]:                "/dev/loop3"
Oct  1 10:11:24 np0005464214 busy_hofstadter[307085]:            ],
Oct  1 10:11:24 np0005464214 busy_hofstadter[307085]:            "lv_name": "ceph_lv0",
Oct  1 10:11:24 np0005464214 busy_hofstadter[307085]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  1 10:11:24 np0005464214 busy_hofstadter[307085]:            "lv_size": "21470642176",
Oct  1 10:11:24 np0005464214 busy_hofstadter[307085]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 10:11:24 np0005464214 busy_hofstadter[307085]:            "lv_uuid": "wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS",
Oct  1 10:11:24 np0005464214 busy_hofstadter[307085]:            "name": "ceph_lv0",
Oct  1 10:11:24 np0005464214 busy_hofstadter[307085]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  1 10:11:24 np0005464214 busy_hofstadter[307085]:            "tags": {
Oct  1 10:11:24 np0005464214 busy_hofstadter[307085]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  1 10:11:24 np0005464214 busy_hofstadter[307085]:                "ceph.block_uuid": "wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS",
Oct  1 10:11:24 np0005464214 busy_hofstadter[307085]:                "ceph.cephx_lockbox_secret": "",
Oct  1 10:11:24 np0005464214 busy_hofstadter[307085]:                "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 10:11:24 np0005464214 busy_hofstadter[307085]:                "ceph.cluster_name": "ceph",
Oct  1 10:11:24 np0005464214 busy_hofstadter[307085]:                "ceph.crush_device_class": "",
Oct  1 10:11:24 np0005464214 busy_hofstadter[307085]:                "ceph.encrypted": "0",
Oct  1 10:11:24 np0005464214 busy_hofstadter[307085]:                "ceph.osd_fsid": "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982",
Oct  1 10:11:24 np0005464214 busy_hofstadter[307085]:                "ceph.osd_id": "0",
Oct  1 10:11:24 np0005464214 busy_hofstadter[307085]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 10:11:24 np0005464214 busy_hofstadter[307085]:                "ceph.type": "block",
Oct  1 10:11:24 np0005464214 busy_hofstadter[307085]:                "ceph.vdo": "0"
Oct  1 10:11:24 np0005464214 busy_hofstadter[307085]:            },
Oct  1 10:11:24 np0005464214 busy_hofstadter[307085]:            "type": "block",
Oct  1 10:11:24 np0005464214 busy_hofstadter[307085]:            "vg_name": "ceph_vg0"
Oct  1 10:11:24 np0005464214 busy_hofstadter[307085]:        }
Oct  1 10:11:24 np0005464214 busy_hofstadter[307085]:    ],
Oct  1 10:11:24 np0005464214 busy_hofstadter[307085]:    "1": [
Oct  1 10:11:24 np0005464214 busy_hofstadter[307085]:        {
Oct  1 10:11:24 np0005464214 busy_hofstadter[307085]:            "devices": [
Oct  1 10:11:24 np0005464214 busy_hofstadter[307085]:                "/dev/loop4"
Oct  1 10:11:24 np0005464214 busy_hofstadter[307085]:            ],
Oct  1 10:11:24 np0005464214 busy_hofstadter[307085]:            "lv_name": "ceph_lv1",
Oct  1 10:11:24 np0005464214 busy_hofstadter[307085]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  1 10:11:24 np0005464214 busy_hofstadter[307085]:            "lv_size": "21470642176",
Oct  1 10:11:24 np0005464214 busy_hofstadter[307085]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f5852bc7-e830-489a-b8a9-42dfbbe71426,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 10:11:24 np0005464214 busy_hofstadter[307085]:            "lv_uuid": "BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY",
Oct  1 10:11:24 np0005464214 busy_hofstadter[307085]:            "name": "ceph_lv1",
Oct  1 10:11:24 np0005464214 busy_hofstadter[307085]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  1 10:11:24 np0005464214 busy_hofstadter[307085]:            "tags": {
Oct  1 10:11:24 np0005464214 busy_hofstadter[307085]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  1 10:11:24 np0005464214 busy_hofstadter[307085]:                "ceph.block_uuid": "BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY",
Oct  1 10:11:24 np0005464214 busy_hofstadter[307085]:                "ceph.cephx_lockbox_secret": "",
Oct  1 10:11:24 np0005464214 busy_hofstadter[307085]:                "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 10:11:24 np0005464214 busy_hofstadter[307085]:                "ceph.cluster_name": "ceph",
Oct  1 10:11:24 np0005464214 busy_hofstadter[307085]:                "ceph.crush_device_class": "",
Oct  1 10:11:24 np0005464214 busy_hofstadter[307085]:                "ceph.encrypted": "0",
Oct  1 10:11:24 np0005464214 busy_hofstadter[307085]:                "ceph.osd_fsid": "f5852bc7-e830-489a-b8a9-42dfbbe71426",
Oct  1 10:11:24 np0005464214 busy_hofstadter[307085]:                "ceph.osd_id": "1",
Oct  1 10:11:24 np0005464214 busy_hofstadter[307085]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 10:11:24 np0005464214 busy_hofstadter[307085]:                "ceph.type": "block",
Oct  1 10:11:24 np0005464214 busy_hofstadter[307085]:                "ceph.vdo": "0"
Oct  1 10:11:24 np0005464214 busy_hofstadter[307085]:            },
Oct  1 10:11:24 np0005464214 busy_hofstadter[307085]:            "type": "block",
Oct  1 10:11:24 np0005464214 busy_hofstadter[307085]:            "vg_name": "ceph_vg1"
Oct  1 10:11:24 np0005464214 busy_hofstadter[307085]:        }
Oct  1 10:11:24 np0005464214 busy_hofstadter[307085]:    ],
Oct  1 10:11:24 np0005464214 busy_hofstadter[307085]:    "2": [
Oct  1 10:11:24 np0005464214 busy_hofstadter[307085]:        {
Oct  1 10:11:24 np0005464214 busy_hofstadter[307085]:            "devices": [
Oct  1 10:11:24 np0005464214 busy_hofstadter[307085]:                "/dev/loop5"
Oct  1 10:11:24 np0005464214 busy_hofstadter[307085]:            ],
Oct  1 10:11:24 np0005464214 busy_hofstadter[307085]:            "lv_name": "ceph_lv2",
Oct  1 10:11:24 np0005464214 busy_hofstadter[307085]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  1 10:11:24 np0005464214 busy_hofstadter[307085]:            "lv_size": "21470642176",
Oct  1 10:11:24 np0005464214 busy_hofstadter[307085]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c4c937e2-a8a8-47c3-af37-fdedb6fff1f9,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 10:11:24 np0005464214 busy_hofstadter[307085]:            "lv_uuid": "mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK",
Oct  1 10:11:24 np0005464214 busy_hofstadter[307085]:            "name": "ceph_lv2",
Oct  1 10:11:24 np0005464214 busy_hofstadter[307085]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  1 10:11:24 np0005464214 busy_hofstadter[307085]:            "tags": {
Oct  1 10:11:24 np0005464214 busy_hofstadter[307085]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  1 10:11:24 np0005464214 busy_hofstadter[307085]:                "ceph.block_uuid": "mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK",
Oct  1 10:11:24 np0005464214 busy_hofstadter[307085]:                "ceph.cephx_lockbox_secret": "",
Oct  1 10:11:24 np0005464214 busy_hofstadter[307085]:                "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 10:11:24 np0005464214 busy_hofstadter[307085]:                "ceph.cluster_name": "ceph",
Oct  1 10:11:24 np0005464214 busy_hofstadter[307085]:                "ceph.crush_device_class": "",
Oct  1 10:11:24 np0005464214 busy_hofstadter[307085]:                "ceph.encrypted": "0",
Oct  1 10:11:24 np0005464214 busy_hofstadter[307085]:                "ceph.osd_fsid": "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9",
Oct  1 10:11:24 np0005464214 busy_hofstadter[307085]:                "ceph.osd_id": "2",
Oct  1 10:11:24 np0005464214 busy_hofstadter[307085]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 10:11:24 np0005464214 busy_hofstadter[307085]:                "ceph.type": "block",
Oct  1 10:11:24 np0005464214 busy_hofstadter[307085]:                "ceph.vdo": "0"
Oct  1 10:11:24 np0005464214 busy_hofstadter[307085]:            },
Oct  1 10:11:24 np0005464214 busy_hofstadter[307085]:            "type": "block",
Oct  1 10:11:24 np0005464214 busy_hofstadter[307085]:            "vg_name": "ceph_vg2"
Oct  1 10:11:24 np0005464214 busy_hofstadter[307085]:        }
Oct  1 10:11:24 np0005464214 busy_hofstadter[307085]:    ]
Oct  1 10:11:24 np0005464214 busy_hofstadter[307085]: }
Oct  1 10:11:24 np0005464214 systemd[1]: libpod-5ee1967806f779a68a5e6745b74e365b3d75d937a525d8cd6f8013c5308a5703.scope: Deactivated successfully.
Oct  1 10:11:24 np0005464214 podman[307069]: 2025-10-01 14:11:24.977847301 +0000 UTC m=+0.961664580 container died 5ee1967806f779a68a5e6745b74e365b3d75d937a525d8cd6f8013c5308a5703 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_hofstadter, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 10:11:25 np0005464214 systemd[1]: var-lib-containers-storage-overlay-f4c69477ef24ee5dd74512609e090652a8317dd32e3e5853cfe36fec34ece9ec-merged.mount: Deactivated successfully.
Oct  1 10:11:25 np0005464214 podman[307069]: 2025-10-01 14:11:25.046441618 +0000 UTC m=+1.030258877 container remove 5ee1967806f779a68a5e6745b74e365b3d75d937a525d8cd6f8013c5308a5703 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_hofstadter, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  1 10:11:25 np0005464214 systemd[1]: libpod-conmon-5ee1967806f779a68a5e6745b74e365b3d75d937a525d8cd6f8013c5308a5703.scope: Deactivated successfully.
Oct  1 10:11:25 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2062: 305 pgs: 305 active+clean; 456 KiB data, 216 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:11:25 np0005464214 podman[307250]: 2025-10-01 14:11:25.902152965 +0000 UTC m=+0.064570691 container create 899affd05e93fa01518927a1203fb25c548bbad7508598033dd051374ec9f217 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_cerf, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 10:11:25 np0005464214 systemd[1]: Started libpod-conmon-899affd05e93fa01518927a1203fb25c548bbad7508598033dd051374ec9f217.scope.
Oct  1 10:11:25 np0005464214 podman[307250]: 2025-10-01 14:11:25.876339136 +0000 UTC m=+0.038756912 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 10:11:25 np0005464214 systemd[1]: Started libcrun container.
Oct  1 10:11:26 np0005464214 podman[307250]: 2025-10-01 14:11:26.004178304 +0000 UTC m=+0.166596080 container init 899affd05e93fa01518927a1203fb25c548bbad7508598033dd051374ec9f217 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_cerf, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default)
Oct  1 10:11:26 np0005464214 podman[307250]: 2025-10-01 14:11:26.016240327 +0000 UTC m=+0.178658043 container start 899affd05e93fa01518927a1203fb25c548bbad7508598033dd051374ec9f217 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_cerf, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 10:11:26 np0005464214 podman[307250]: 2025-10-01 14:11:26.022136314 +0000 UTC m=+0.184554100 container attach 899affd05e93fa01518927a1203fb25c548bbad7508598033dd051374ec9f217 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_cerf, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct  1 10:11:26 np0005464214 nice_cerf[307266]: 167 167
Oct  1 10:11:26 np0005464214 systemd[1]: libpod-899affd05e93fa01518927a1203fb25c548bbad7508598033dd051374ec9f217.scope: Deactivated successfully.
Oct  1 10:11:26 np0005464214 podman[307250]: 2025-10-01 14:11:26.024382775 +0000 UTC m=+0.186800541 container died 899affd05e93fa01518927a1203fb25c548bbad7508598033dd051374ec9f217 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_cerf, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2)
Oct  1 10:11:26 np0005464214 systemd[1]: var-lib-containers-storage-overlay-6eab6b679010a3f03567aa91e30e52a1b817a401add7b37220868c7ecd416e4a-merged.mount: Deactivated successfully.
Oct  1 10:11:26 np0005464214 podman[307250]: 2025-10-01 14:11:26.076256052 +0000 UTC m=+0.238673748 container remove 899affd05e93fa01518927a1203fb25c548bbad7508598033dd051374ec9f217 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_cerf, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef)
Oct  1 10:11:26 np0005464214 systemd[1]: libpod-conmon-899affd05e93fa01518927a1203fb25c548bbad7508598033dd051374ec9f217.scope: Deactivated successfully.
Oct  1 10:11:26 np0005464214 podman[307288]: 2025-10-01 14:11:26.334990366 +0000 UTC m=+0.067865986 container create aa1df56710f02d2985310fa4a1142b543d3b995474eff3060beb5438207b2953 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_liskov, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 10:11:26 np0005464214 systemd[1]: Started libpod-conmon-aa1df56710f02d2985310fa4a1142b543d3b995474eff3060beb5438207b2953.scope.
Oct  1 10:11:26 np0005464214 podman[307288]: 2025-10-01 14:11:26.306799321 +0000 UTC m=+0.039675001 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 10:11:26 np0005464214 systemd[1]: Started libcrun container.
Oct  1 10:11:26 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0ebe1e967ebcd2cf3c519c7da486bbb23eca9783c8bc9168fb9cea761b76e4a0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 10:11:26 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0ebe1e967ebcd2cf3c519c7da486bbb23eca9783c8bc9168fb9cea761b76e4a0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 10:11:26 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0ebe1e967ebcd2cf3c519c7da486bbb23eca9783c8bc9168fb9cea761b76e4a0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 10:11:26 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0ebe1e967ebcd2cf3c519c7da486bbb23eca9783c8bc9168fb9cea761b76e4a0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 10:11:26 np0005464214 podman[307288]: 2025-10-01 14:11:26.452879249 +0000 UTC m=+0.185754869 container init aa1df56710f02d2985310fa4a1142b543d3b995474eff3060beb5438207b2953 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_liskov, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 10:11:26 np0005464214 podman[307288]: 2025-10-01 14:11:26.464409865 +0000 UTC m=+0.197285495 container start aa1df56710f02d2985310fa4a1142b543d3b995474eff3060beb5438207b2953 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_liskov, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Oct  1 10:11:26 np0005464214 podman[307288]: 2025-10-01 14:11:26.468546206 +0000 UTC m=+0.201421836 container attach aa1df56710f02d2985310fa4a1142b543d3b995474eff3060beb5438207b2953 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_liskov, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 10:11:27 np0005464214 vigorous_liskov[307305]: {
Oct  1 10:11:27 np0005464214 vigorous_liskov[307305]:    "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982": {
Oct  1 10:11:27 np0005464214 vigorous_liskov[307305]:        "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 10:11:27 np0005464214 vigorous_liskov[307305]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  1 10:11:27 np0005464214 vigorous_liskov[307305]:        "osd_id": 0,
Oct  1 10:11:27 np0005464214 vigorous_liskov[307305]:        "osd_uuid": "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982",
Oct  1 10:11:27 np0005464214 vigorous_liskov[307305]:        "type": "bluestore"
Oct  1 10:11:27 np0005464214 vigorous_liskov[307305]:    },
Oct  1 10:11:27 np0005464214 vigorous_liskov[307305]:    "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9": {
Oct  1 10:11:27 np0005464214 vigorous_liskov[307305]:        "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 10:11:27 np0005464214 vigorous_liskov[307305]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  1 10:11:27 np0005464214 vigorous_liskov[307305]:        "osd_id": 2,
Oct  1 10:11:27 np0005464214 vigorous_liskov[307305]:        "osd_uuid": "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9",
Oct  1 10:11:27 np0005464214 vigorous_liskov[307305]:        "type": "bluestore"
Oct  1 10:11:27 np0005464214 vigorous_liskov[307305]:    },
Oct  1 10:11:27 np0005464214 vigorous_liskov[307305]:    "f5852bc7-e830-489a-b8a9-42dfbbe71426": {
Oct  1 10:11:27 np0005464214 vigorous_liskov[307305]:        "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 10:11:27 np0005464214 vigorous_liskov[307305]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  1 10:11:27 np0005464214 vigorous_liskov[307305]:        "osd_id": 1,
Oct  1 10:11:27 np0005464214 vigorous_liskov[307305]:        "osd_uuid": "f5852bc7-e830-489a-b8a9-42dfbbe71426",
Oct  1 10:11:27 np0005464214 vigorous_liskov[307305]:        "type": "bluestore"
Oct  1 10:11:27 np0005464214 vigorous_liskov[307305]:    }
Oct  1 10:11:27 np0005464214 vigorous_liskov[307305]: }
Oct  1 10:11:27 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2063: 305 pgs: 305 active+clean; 456 KiB data, 216 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:11:27 np0005464214 systemd[1]: libpod-aa1df56710f02d2985310fa4a1142b543d3b995474eff3060beb5438207b2953.scope: Deactivated successfully.
Oct  1 10:11:27 np0005464214 podman[307288]: 2025-10-01 14:11:27.5283037 +0000 UTC m=+1.261179330 container died aa1df56710f02d2985310fa4a1142b543d3b995474eff3060beb5438207b2953 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_liskov, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct  1 10:11:27 np0005464214 systemd[1]: libpod-aa1df56710f02d2985310fa4a1142b543d3b995474eff3060beb5438207b2953.scope: Consumed 1.075s CPU time.
Oct  1 10:11:27 np0005464214 systemd[1]: var-lib-containers-storage-overlay-0ebe1e967ebcd2cf3c519c7da486bbb23eca9783c8bc9168fb9cea761b76e4a0-merged.mount: Deactivated successfully.
Oct  1 10:11:27 np0005464214 podman[307288]: 2025-10-01 14:11:27.58721971 +0000 UTC m=+1.320095300 container remove aa1df56710f02d2985310fa4a1142b543d3b995474eff3060beb5438207b2953 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_liskov, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3)
Oct  1 10:11:27 np0005464214 systemd[1]: libpod-conmon-aa1df56710f02d2985310fa4a1142b543d3b995474eff3060beb5438207b2953.scope: Deactivated successfully.
Oct  1 10:11:27 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  1 10:11:27 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 10:11:27 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  1 10:11:27 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 10:11:27 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev 47512944-7147-40bd-b3a9-0145af19326e does not exist
Oct  1 10:11:27 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev a65d0a9c-bc0b-41ce-a888-395a980a1b96 does not exist
Oct  1 10:11:27 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e193 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 10:11:28 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 10:11:28 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 10:11:29 np0005464214 nova_compute[260022]: 2025-10-01 14:11:29.344 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 10:11:29 np0005464214 nova_compute[260022]: 2025-10-01 14:11:29.346 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  1 10:11:29 np0005464214 nova_compute[260022]: 2025-10-01 14:11:29.346 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 10:11:29 np0005464214 nova_compute[260022]: 2025-10-01 14:11:29.386 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 10:11:29 np0005464214 nova_compute[260022]: 2025-10-01 14:11:29.386 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 10:11:29 np0005464214 nova_compute[260022]: 2025-10-01 14:11:29.387 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 10:11:29 np0005464214 nova_compute[260022]: 2025-10-01 14:11:29.387 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  1 10:11:29 np0005464214 nova_compute[260022]: 2025-10-01 14:11:29.387 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 10:11:29 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2064: 305 pgs: 305 active+clean; 456 KiB data, 216 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:11:29 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  1 10:11:29 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2150587183' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  1 10:11:29 np0005464214 nova_compute[260022]: 2025-10-01 14:11:29.830 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.443s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 10:11:30 np0005464214 nova_compute[260022]: 2025-10-01 14:11:30.036 2 WARNING nova.virt.libvirt.driver [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  1 10:11:30 np0005464214 nova_compute[260022]: 2025-10-01 14:11:30.038 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4959MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  1 10:11:30 np0005464214 nova_compute[260022]: 2025-10-01 14:11:30.038 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 10:11:30 np0005464214 nova_compute[260022]: 2025-10-01 14:11:30.038 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 10:11:30 np0005464214 nova_compute[260022]: 2025-10-01 14:11:30.126 2 INFO nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Instance 3d91db2b-812b-47ea-a0f8-0384b4c68597 has allocations against this compute host but is not found in the database.#033[00m
Oct  1 10:11:30 np0005464214 nova_compute[260022]: 2025-10-01 14:11:30.141 2 INFO nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Instance 84424f30-80d3-425e-b60f-86809ad3c076 has allocations against this compute host but is not found in the database.#033[00m
Oct  1 10:11:30 np0005464214 nova_compute[260022]: 2025-10-01 14:11:30.141 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  1 10:11:30 np0005464214 nova_compute[260022]: 2025-10-01 14:11:30.142 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  1 10:11:30 np0005464214 nova_compute[260022]: 2025-10-01 14:11:30.315 2 DEBUG nova.scheduler.client.report [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Refreshing inventories for resource provider c1b9017d-7e6f-44ea-9ee2-bc19313d736f _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Oct  1 10:11:30 np0005464214 nova_compute[260022]: 2025-10-01 14:11:30.418 2 DEBUG nova.scheduler.client.report [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Updating ProviderTree inventory for provider c1b9017d-7e6f-44ea-9ee2-bc19313d736f from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Oct  1 10:11:30 np0005464214 nova_compute[260022]: 2025-10-01 14:11:30.419 2 DEBUG nova.compute.provider_tree [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Updating inventory in ProviderTree for provider c1b9017d-7e6f-44ea-9ee2-bc19313d736f with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Oct  1 10:11:30 np0005464214 nova_compute[260022]: 2025-10-01 14:11:30.431 2 DEBUG nova.scheduler.client.report [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Refreshing aggregate associations for resource provider c1b9017d-7e6f-44ea-9ee2-bc19313d736f, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Oct  1 10:11:30 np0005464214 nova_compute[260022]: 2025-10-01 14:11:30.460 2 DEBUG nova.scheduler.client.report [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Refreshing trait associations for resource provider c1b9017d-7e6f-44ea-9ee2-bc19313d736f, traits: HW_CPU_X86_SSE42,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_BMI,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_SSE2,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_BMI2,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_RESCUE_BFV,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NODE,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_F16C,HW_CPU_X86_MMX,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_SSE41,COMPUTE_DEVICE_TAGGING,COMPUTE_VOLUME_EXTEND,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_AVX,HW_CPU_X86_ABM,HW_CPU_X86_CLMUL,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_SVM,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_SECURITY_TPM_2_0,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_STORAGE_BUS_FDC,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_AESNI,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_ACCELERATORS,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_AMD_SVM,HW_CPU_X86_AVX2,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_FMA3,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_SSE,HW_CPU_X86_SHA,HW_CPU_X86_SSSE3,HW_CPU_X86_SSE4A _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Oct  1 10:11:30 np0005464214 nova_compute[260022]: 2025-10-01 14:11:30.504 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 10:11:30 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  1 10:11:30 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3747894677' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  1 10:11:30 np0005464214 nova_compute[260022]: 2025-10-01 14:11:30.930 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.426s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 10:11:30 np0005464214 nova_compute[260022]: 2025-10-01 14:11:30.936 2 DEBUG nova.compute.provider_tree [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Inventory has not changed in ProviderTree for provider: c1b9017d-7e6f-44ea-9ee2-bc19313d736f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  1 10:11:30 np0005464214 nova_compute[260022]: 2025-10-01 14:11:30.953 2 DEBUG nova.scheduler.client.report [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Inventory has not changed for provider c1b9017d-7e6f-44ea-9ee2-bc19313d736f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  1 10:11:30 np0005464214 nova_compute[260022]: 2025-10-01 14:11:30.957 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  1 10:11:30 np0005464214 nova_compute[260022]: 2025-10-01 14:11:30.957 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.919s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 10:11:31 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2065: 305 pgs: 305 active+clean; 456 KiB data, 216 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:11:32 np0005464214 nova_compute[260022]: 2025-10-01 14:11:32.954 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 10:11:32 np0005464214 nova_compute[260022]: 2025-10-01 14:11:32.955 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 10:11:32 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e193 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 10:11:33 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2066: 305 pgs: 305 active+clean; 456 KiB data, 216 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:11:34 np0005464214 nova_compute[260022]: 2025-10-01 14:11:34.345 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 10:11:35 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2067: 305 pgs: 305 active+clean; 456 KiB data, 216 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:11:36 np0005464214 nova_compute[260022]: 2025-10-01 14:11:36.345 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 10:11:36 np0005464214 nova_compute[260022]: 2025-10-01 14:11:36.732 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 10:11:37 np0005464214 nova_compute[260022]: 2025-10-01 14:11:37.361 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 10:11:37 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2068: 305 pgs: 305 active+clean; 456 KiB data, 216 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:11:37 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e193 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 10:11:38 np0005464214 nova_compute[260022]: 2025-10-01 14:11:38.345 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 10:11:38 np0005464214 nova_compute[260022]: 2025-10-01 14:11:38.346 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  1 10:11:38 np0005464214 nova_compute[260022]: 2025-10-01 14:11:38.346 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  1 10:11:38 np0005464214 nova_compute[260022]: 2025-10-01 14:11:38.360 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  1 10:11:39 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2069: 305 pgs: 305 active+clean; 456 KiB data, 216 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:11:39 np0005464214 podman[307445]: 2025-10-01 14:11:39.536382188 +0000 UTC m=+0.065663765 container health_status dfa509645741d50db256c392f2c7e50ef8a1653f296eb853aefbe3d2ba4746c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20250923, tcib_build_tag=36bccb96575468ec919301205d8daa2c, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct  1 10:11:39 np0005464214 podman[307443]: 2025-10-01 14:11:39.537011068 +0000 UTC m=+0.076578422 container health_status a1dc94033b3cdaf0c1939938a446bc6911aa0e2bf0fa0c22c3e618390e8d96e1 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20250923, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 10:11:39 np0005464214 podman[307444]: 2025-10-01 14:11:39.574807458 +0000 UTC m=+0.106365137 container health_status c13c85560319b246b9406aed1b6b3015b2c424d3ffff37d0301b6634f5976b0d (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=36bccb96575468ec919301205d8daa2c, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=iscsid, org.label-schema.build-date=20250923, tcib_managed=true, config_id=iscsid, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 10:11:39 np0005464214 podman[307442]: 2025-10-01 14:11:39.581661605 +0000 UTC m=+0.121005792 container health_status 583cde33fa111e3f81ff9a7adf51b7bee43cd123d54d60383b2d474b3b5066ad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20250923, config_id=ovn_controller, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct  1 10:11:41 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2070: 305 pgs: 305 active+clean; 456 KiB data, 216 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:11:42 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e193 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 10:11:43 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2071: 305 pgs: 305 active+clean; 456 KiB data, 216 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:11:45 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2072: 305 pgs: 305 active+clean; 456 KiB data, 216 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:11:47 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2073: 305 pgs: 305 active+clean; 456 KiB data, 216 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:11:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 10:11:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 10:11:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 10:11:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 10:11:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 10:11:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 10:11:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] Optimize plan auto_2025-10-01_14:11:47
Oct  1 10:11:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  1 10:11:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] do_upmap
Oct  1 10:11:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] pools ['images', 'default.rgw.log', 'default.rgw.meta', 'cephfs.cephfs.data', 'backups', 'default.rgw.control', '.rgw.root', 'vms', 'cephfs.cephfs.meta', 'volumes', '.mgr']
Oct  1 10:11:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] prepared 0/10 changes
Oct  1 10:11:47 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e193 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 10:11:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  1 10:11:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  1 10:11:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  1 10:11:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  1 10:11:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  1 10:11:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  1 10:11:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  1 10:11:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  1 10:11:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  1 10:11:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  1 10:11:49 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2074: 305 pgs: 305 active+clean; 456 KiB data, 216 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:11:51 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2075: 305 pgs: 305 active+clean; 456 KiB data, 216 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:11:52 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e193 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 10:11:53 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2076: 305 pgs: 305 active+clean; 456 KiB data, 216 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:11:55 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  1 10:11:55 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3282206593' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  1 10:11:55 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  1 10:11:55 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3282206593' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  1 10:11:55 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2077: 305 pgs: 305 active+clean; 456 KiB data, 216 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:11:56 np0005464214 ceph-mon[74802]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #96. Immutable memtables: 0.
Oct  1 10:11:56 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:11:56.321674) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  1 10:11:56 np0005464214 ceph-mon[74802]: rocksdb: [db/flush_job.cc:856] [default] [JOB 55] Flushing memtable with next log file: 96
Oct  1 10:11:56 np0005464214 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759327916321699, "job": 55, "event": "flush_started", "num_memtables": 1, "num_entries": 2091, "num_deletes": 257, "total_data_size": 3489892, "memory_usage": 3551808, "flush_reason": "Manual Compaction"}
Oct  1 10:11:56 np0005464214 ceph-mon[74802]: rocksdb: [db/flush_job.cc:885] [default] [JOB 55] Level-0 flush table #97: started
Oct  1 10:11:56 np0005464214 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759327916339798, "cf_name": "default", "job": 55, "event": "table_file_creation", "file_number": 97, "file_size": 3411670, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 39968, "largest_seqno": 42058, "table_properties": {"data_size": 3402017, "index_size": 6147, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2437, "raw_key_size": 19408, "raw_average_key_size": 20, "raw_value_size": 3382848, "raw_average_value_size": 3557, "num_data_blocks": 272, "num_entries": 951, "num_filter_entries": 951, "num_deletions": 257, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759327694, "oldest_key_time": 1759327694, "file_creation_time": 1759327916, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e1fb0bd7-c6bf-40f2-8bfb-ffd039289f42", "db_session_id": "NJZTWL88H5HSB4Q4NEC9", "orig_file_number": 97, "seqno_to_time_mapping": "N/A"}}
Oct  1 10:11:56 np0005464214 ceph-mon[74802]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 55] Flush lasted 18356 microseconds, and 9308 cpu microseconds.
Oct  1 10:11:56 np0005464214 ceph-mon[74802]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  1 10:11:56 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:11:56.340018) [db/flush_job.cc:967] [default] [JOB 55] Level-0 flush table #97: 3411670 bytes OK
Oct  1 10:11:56 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:11:56.340086) [db/memtable_list.cc:519] [default] Level-0 commit table #97 started
Oct  1 10:11:56 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:11:56.341564) [db/memtable_list.cc:722] [default] Level-0 commit table #97: memtable #1 done
Oct  1 10:11:56 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:11:56.341581) EVENT_LOG_v1 {"time_micros": 1759327916341576, "job": 55, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  1 10:11:56 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:11:56.341609) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  1 10:11:56 np0005464214 ceph-mon[74802]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 55] Try to delete WAL files size 3481106, prev total WAL file size 3481106, number of live WAL files 2.
Oct  1 10:11:56 np0005464214 ceph-mon[74802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000093.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  1 10:11:56 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:11:56.343224) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730033373635' seq:72057594037927935, type:22 .. '7061786F730034303137' seq:0, type:0; will stop at (end)
Oct  1 10:11:56 np0005464214 ceph-mon[74802]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 56] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  1 10:11:56 np0005464214 ceph-mon[74802]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 55 Base level 0, inputs: [97(3331KB)], [95(6557KB)]
Oct  1 10:11:56 np0005464214 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759327916343302, "job": 56, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [97], "files_L6": [95], "score": -1, "input_data_size": 10126749, "oldest_snapshot_seqno": -1}
Oct  1 10:11:56 np0005464214 ceph-mon[74802]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 56] Generated table #98: 5882 keys, 8376710 bytes, temperature: kUnknown
Oct  1 10:11:56 np0005464214 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759327916381421, "cf_name": "default", "job": 56, "event": "table_file_creation", "file_number": 98, "file_size": 8376710, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8337984, "index_size": 22936, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 14725, "raw_key_size": 152279, "raw_average_key_size": 25, "raw_value_size": 8231972, "raw_average_value_size": 1399, "num_data_blocks": 910, "num_entries": 5882, "num_filter_entries": 5882, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759324077, "oldest_key_time": 0, "file_creation_time": 1759327916, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e1fb0bd7-c6bf-40f2-8bfb-ffd039289f42", "db_session_id": "NJZTWL88H5HSB4Q4NEC9", "orig_file_number": 98, "seqno_to_time_mapping": "N/A"}}
Oct  1 10:11:56 np0005464214 ceph-mon[74802]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  1 10:11:56 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:11:56.381805) [db/compaction/compaction_job.cc:1663] [default] [JOB 56] Compacted 1@0 + 1@6 files to L6 => 8376710 bytes
Oct  1 10:11:56 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:11:56.383276) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 264.8 rd, 219.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.3, 6.4 +0.0 blob) out(8.0 +0.0 blob), read-write-amplify(5.4) write-amplify(2.5) OK, records in: 6408, records dropped: 526 output_compression: NoCompression
Oct  1 10:11:56 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:11:56.383303) EVENT_LOG_v1 {"time_micros": 1759327916383286, "job": 56, "event": "compaction_finished", "compaction_time_micros": 38249, "compaction_time_cpu_micros": 19097, "output_level": 6, "num_output_files": 1, "total_output_size": 8376710, "num_input_records": 6408, "num_output_records": 5882, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  1 10:11:56 np0005464214 ceph-mon[74802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000097.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  1 10:11:56 np0005464214 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759327916384582, "job": 56, "event": "table_file_deletion", "file_number": 97}
Oct  1 10:11:56 np0005464214 ceph-mon[74802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000095.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  1 10:11:56 np0005464214 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759327916386415, "job": 56, "event": "table_file_deletion", "file_number": 95}
Oct  1 10:11:56 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:11:56.343088) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 10:11:56 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:11:56.386586) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 10:11:56 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:11:56.386593) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 10:11:56 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:11:56.386595) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 10:11:56 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:11:56.386599) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 10:11:56 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:11:56.386602) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 10:11:57 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2078: 305 pgs: 305 active+clean; 456 KiB data, 216 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:11:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] _maybe_adjust
Oct  1 10:11:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 10:11:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  1 10:11:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 10:11:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 10:11:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 10:11:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 10:11:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 10:11:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 10:11:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 10:11:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 3.1795353910268934e-07 of space, bias 1.0, pg target 9.53860617308068e-05 quantized to 32 (current 32)
Oct  1 10:11:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 10:11:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct  1 10:11:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 10:11:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 10:11:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 10:11:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  1 10:11:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 10:11:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  1 10:11:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 10:11:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 10:11:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 10:11:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  1 10:11:57 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e193 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 10:11:59 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2079: 305 pgs: 305 active+clean; 456 KiB data, 216 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:12:01 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2080: 305 pgs: 305 active+clean; 456 KiB data, 216 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:12:02 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e193 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 10:12:03 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2081: 305 pgs: 305 active+clean; 456 KiB data, 216 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:12:05 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2082: 305 pgs: 305 active+clean; 456 KiB data, 216 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:12:07 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2083: 305 pgs: 305 active+clean; 456 KiB data, 216 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:12:07 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e193 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 10:12:09 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2084: 305 pgs: 305 active+clean; 456 KiB data, 216 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:12:10 np0005464214 podman[307526]: 2025-10-01 14:12:10.507877348 +0000 UTC m=+0.056933297 container health_status a1dc94033b3cdaf0c1939938a446bc6911aa0e2bf0fa0c22c3e618390e8d96e1 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20250923, org.label-schema.schema-version=1.0, tcib_build_tag=36bccb96575468ec919301205d8daa2c, container_name=multipathd, org.label-schema.license=GPLv2)
Oct  1 10:12:10 np0005464214 podman[307527]: 2025-10-01 14:12:10.533455391 +0000 UTC m=+0.076373345 container health_status c13c85560319b246b9406aed1b6b3015b2c424d3ffff37d0301b6634f5976b0d (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, config_id=iscsid, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20250923)
Oct  1 10:12:10 np0005464214 podman[307533]: 2025-10-01 14:12:10.550058448 +0000 UTC m=+0.088010085 container health_status dfa509645741d50db256c392f2c7e50ef8a1653f296eb853aefbe3d2ba4746c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20250923, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct  1 10:12:10 np0005464214 podman[307525]: 2025-10-01 14:12:10.579440831 +0000 UTC m=+0.124387900 container health_status 583cde33fa111e3f81ff9a7adf51b7bee43cd123d54d60383b2d474b3b5066ad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20250923, org.label-schema.license=GPLv2, config_id=ovn_controller, container_name=ovn_controller)
Oct  1 10:12:11 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2085: 305 pgs: 305 active+clean; 456 KiB data, 216 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:12:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 14:12:12.337 161890 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 10:12:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 14:12:12.338 161890 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 10:12:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 14:12:12.338 161890 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 10:12:12 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e193 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 10:12:13 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2086: 305 pgs: 305 active+clean; 456 KiB data, 216 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:12:14 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e193 do_prune osdmap full prune enabled
Oct  1 10:12:14 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e194 e194: 3 total, 3 up, 3 in
Oct  1 10:12:14 np0005464214 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e194: 3 total, 3 up, 3 in
Oct  1 10:12:15 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2088: 305 pgs: 305 active+clean; 456 KiB data, 216 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:12:15 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e194 do_prune osdmap full prune enabled
Oct  1 10:12:15 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e195 e195: 3 total, 3 up, 3 in
Oct  1 10:12:15 np0005464214 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e195: 3 total, 3 up, 3 in
Oct  1 10:12:17 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2090: 305 pgs: 305 active+clean; 8.4 MiB data, 224 MiB used, 60 GiB / 60 GiB avail; 29 KiB/s rd, 1.0 MiB/s wr, 39 op/s
Oct  1 10:12:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 10:12:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 10:12:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 10:12:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 10:12:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 10:12:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 10:12:17 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e195 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 10:12:19 np0005464214 nova_compute[260022]: 2025-10-01 14:12:19.344 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 10:12:19 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2091: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail; 33 KiB/s rd, 5.1 MiB/s wr, 47 op/s
Oct  1 10:12:21 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2092: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail; 33 KiB/s rd, 5.1 MiB/s wr, 47 op/s
Oct  1 10:12:22 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e195 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 10:12:23 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2093: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail; 30 KiB/s rd, 4.6 MiB/s wr, 43 op/s
Oct  1 10:12:25 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2094: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 4.1 MiB/s wr, 38 op/s
Oct  1 10:12:27 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2095: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 3.5 MiB/s wr, 32 op/s
Oct  1 10:12:27 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e195 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 10:12:28 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  1 10:12:28 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  1 10:12:28 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  1 10:12:28 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  1 10:12:28 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  1 10:12:28 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 10:12:28 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev 4b661462-b6c3-481a-81f9-80ef15731ef0 does not exist
Oct  1 10:12:28 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev 8325ef5d-0bbb-44cb-9fe6-79799f818864 does not exist
Oct  1 10:12:28 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev 60d36cd9-7da2-457d-a750-2da46646cc1a does not exist
Oct  1 10:12:28 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  1 10:12:28 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  1 10:12:28 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  1 10:12:28 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  1 10:12:28 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  1 10:12:28 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  1 10:12:28 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  1 10:12:28 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 10:12:28 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  1 10:12:29 np0005464214 podman[307878]: 2025-10-01 14:12:29.473319552 +0000 UTC m=+0.083304516 container create 7c539686d3fe395db8aeb6efc3a868ed5bd7aeb7e0acdafb6e65fcd9e26580a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_shannon, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Oct  1 10:12:29 np0005464214 systemd[1]: Started libpod-conmon-7c539686d3fe395db8aeb6efc3a868ed5bd7aeb7e0acdafb6e65fcd9e26580a2.scope.
Oct  1 10:12:29 np0005464214 podman[307878]: 2025-10-01 14:12:29.436524393 +0000 UTC m=+0.046509427 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 10:12:29 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2096: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail; 2.6 KiB/s rd, 2.7 MiB/s wr, 5 op/s
Oct  1 10:12:29 np0005464214 systemd[1]: Started libcrun container.
Oct  1 10:12:29 np0005464214 podman[307878]: 2025-10-01 14:12:29.572084367 +0000 UTC m=+0.182069401 container init 7c539686d3fe395db8aeb6efc3a868ed5bd7aeb7e0acdafb6e65fcd9e26580a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_shannon, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct  1 10:12:29 np0005464214 podman[307878]: 2025-10-01 14:12:29.582755986 +0000 UTC m=+0.192740960 container start 7c539686d3fe395db8aeb6efc3a868ed5bd7aeb7e0acdafb6e65fcd9e26580a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_shannon, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 10:12:29 np0005464214 podman[307878]: 2025-10-01 14:12:29.586602488 +0000 UTC m=+0.196587472 container attach 7c539686d3fe395db8aeb6efc3a868ed5bd7aeb7e0acdafb6e65fcd9e26580a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_shannon, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 10:12:29 np0005464214 mystifying_shannon[307894]: 167 167
Oct  1 10:12:29 np0005464214 systemd[1]: libpod-7c539686d3fe395db8aeb6efc3a868ed5bd7aeb7e0acdafb6e65fcd9e26580a2.scope: Deactivated successfully.
Oct  1 10:12:29 np0005464214 conmon[307894]: conmon 7c539686d3fe395db8ae <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-7c539686d3fe395db8aeb6efc3a868ed5bd7aeb7e0acdafb6e65fcd9e26580a2.scope/container/memory.events
Oct  1 10:12:29 np0005464214 podman[307878]: 2025-10-01 14:12:29.593220678 +0000 UTC m=+0.203205662 container died 7c539686d3fe395db8aeb6efc3a868ed5bd7aeb7e0acdafb6e65fcd9e26580a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_shannon, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct  1 10:12:29 np0005464214 systemd[1]: var-lib-containers-storage-overlay-fc4e9b63f5609a0efc54d6d82b33998e9bc2020f0098239692bd8738e0cce8b2-merged.mount: Deactivated successfully.
Oct  1 10:12:29 np0005464214 podman[307878]: 2025-10-01 14:12:29.646716486 +0000 UTC m=+0.256701420 container remove 7c539686d3fe395db8aeb6efc3a868ed5bd7aeb7e0acdafb6e65fcd9e26580a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_shannon, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct  1 10:12:29 np0005464214 systemd[1]: libpod-conmon-7c539686d3fe395db8aeb6efc3a868ed5bd7aeb7e0acdafb6e65fcd9e26580a2.scope: Deactivated successfully.
Oct  1 10:12:29 np0005464214 podman[307918]: 2025-10-01 14:12:29.852405276 +0000 UTC m=+0.043777131 container create f7e42c157c726a57459552cc0cbca68a5e1466c949847357ffe8b0e26b373df5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_wright, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 10:12:29 np0005464214 systemd[1]: Started libpod-conmon-f7e42c157c726a57459552cc0cbca68a5e1466c949847357ffe8b0e26b373df5.scope.
Oct  1 10:12:29 np0005464214 systemd[1]: Started libcrun container.
Oct  1 10:12:29 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/37aacb5c7bc37db9d4429d351982db2d147863741a00cee11a279c70b2f69700/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 10:12:29 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/37aacb5c7bc37db9d4429d351982db2d147863741a00cee11a279c70b2f69700/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 10:12:29 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/37aacb5c7bc37db9d4429d351982db2d147863741a00cee11a279c70b2f69700/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 10:12:29 np0005464214 podman[307918]: 2025-10-01 14:12:29.83711318 +0000 UTC m=+0.028485065 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 10:12:29 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/37aacb5c7bc37db9d4429d351982db2d147863741a00cee11a279c70b2f69700/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 10:12:29 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/37aacb5c7bc37db9d4429d351982db2d147863741a00cee11a279c70b2f69700/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  1 10:12:29 np0005464214 podman[307918]: 2025-10-01 14:12:29.944228231 +0000 UTC m=+0.135600186 container init f7e42c157c726a57459552cc0cbca68a5e1466c949847357ffe8b0e26b373df5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_wright, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 10:12:29 np0005464214 podman[307918]: 2025-10-01 14:12:29.964183745 +0000 UTC m=+0.155555640 container start f7e42c157c726a57459552cc0cbca68a5e1466c949847357ffe8b0e26b373df5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_wright, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  1 10:12:29 np0005464214 podman[307918]: 2025-10-01 14:12:29.969014728 +0000 UTC m=+0.160386673 container attach f7e42c157c726a57459552cc0cbca68a5e1466c949847357ffe8b0e26b373df5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_wright, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 10:12:30 np0005464214 nova_compute[260022]: 2025-10-01 14:12:30.344 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 10:12:30 np0005464214 nova_compute[260022]: 2025-10-01 14:12:30.375 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 10:12:30 np0005464214 nova_compute[260022]: 2025-10-01 14:12:30.375 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 10:12:30 np0005464214 nova_compute[260022]: 2025-10-01 14:12:30.376 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 10:12:30 np0005464214 nova_compute[260022]: 2025-10-01 14:12:30.376 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  1 10:12:30 np0005464214 nova_compute[260022]: 2025-10-01 14:12:30.377 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 10:12:30 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  1 10:12:30 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2203281331' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  1 10:12:30 np0005464214 nova_compute[260022]: 2025-10-01 14:12:30.832 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.455s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 10:12:31 np0005464214 interesting_wright[307934]: --> passed data devices: 0 physical, 3 LVM
Oct  1 10:12:31 np0005464214 interesting_wright[307934]: --> relative data size: 1.0
Oct  1 10:12:31 np0005464214 interesting_wright[307934]: --> All data devices are unavailable
Oct  1 10:12:31 np0005464214 nova_compute[260022]: 2025-10-01 14:12:31.027 2 WARNING nova.virt.libvirt.driver [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  1 10:12:31 np0005464214 nova_compute[260022]: 2025-10-01 14:12:31.028 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4962MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  1 10:12:31 np0005464214 nova_compute[260022]: 2025-10-01 14:12:31.028 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 10:12:31 np0005464214 nova_compute[260022]: 2025-10-01 14:12:31.029 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 10:12:31 np0005464214 systemd[1]: libpod-f7e42c157c726a57459552cc0cbca68a5e1466c949847357ffe8b0e26b373df5.scope: Deactivated successfully.
Oct  1 10:12:31 np0005464214 systemd[1]: libpod-f7e42c157c726a57459552cc0cbca68a5e1466c949847357ffe8b0e26b373df5.scope: Consumed 1.026s CPU time.
Oct  1 10:12:31 np0005464214 podman[307918]: 2025-10-01 14:12:31.056068388 +0000 UTC m=+1.247440243 container died f7e42c157c726a57459552cc0cbca68a5e1466c949847357ffe8b0e26b373df5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_wright, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 10:12:31 np0005464214 systemd[1]: var-lib-containers-storage-overlay-37aacb5c7bc37db9d4429d351982db2d147863741a00cee11a279c70b2f69700-merged.mount: Deactivated successfully.
Oct  1 10:12:31 np0005464214 podman[307918]: 2025-10-01 14:12:31.110068921 +0000 UTC m=+1.301440786 container remove f7e42c157c726a57459552cc0cbca68a5e1466c949847357ffe8b0e26b373df5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_wright, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct  1 10:12:31 np0005464214 nova_compute[260022]: 2025-10-01 14:12:31.117 2 INFO nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Instance 3d91db2b-812b-47ea-a0f8-0384b4c68597 has allocations against this compute host but is not found in the database.#033[00m
Oct  1 10:12:31 np0005464214 systemd[1]: libpod-conmon-f7e42c157c726a57459552cc0cbca68a5e1466c949847357ffe8b0e26b373df5.scope: Deactivated successfully.
Oct  1 10:12:31 np0005464214 nova_compute[260022]: 2025-10-01 14:12:31.137 2 INFO nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Instance 84424f30-80d3-425e-b60f-86809ad3c076 has allocations against this compute host but is not found in the database.#033[00m
Oct  1 10:12:31 np0005464214 nova_compute[260022]: 2025-10-01 14:12:31.137 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  1 10:12:31 np0005464214 nova_compute[260022]: 2025-10-01 14:12:31.137 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  1 10:12:31 np0005464214 nova_compute[260022]: 2025-10-01 14:12:31.182 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 10:12:31 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2097: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:12:31 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  1 10:12:31 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/986016335' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  1 10:12:31 np0005464214 nova_compute[260022]: 2025-10-01 14:12:31.615 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.433s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  1 10:12:31 np0005464214 nova_compute[260022]: 2025-10-01 14:12:31.624 2 DEBUG nova.compute.provider_tree [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Inventory has not changed in ProviderTree for provider: c1b9017d-7e6f-44ea-9ee2-bc19313d736f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct  1 10:12:31 np0005464214 nova_compute[260022]: 2025-10-01 14:12:31.643 2 DEBUG nova.scheduler.client.report [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Inventory has not changed for provider c1b9017d-7e6f-44ea-9ee2-bc19313d736f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct  1 10:12:31 np0005464214 nova_compute[260022]: 2025-10-01 14:12:31.645 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct  1 10:12:31 np0005464214 nova_compute[260022]: 2025-10-01 14:12:31.646 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.617s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  1 10:12:31 np0005464214 podman[308161]: 2025-10-01 14:12:31.8055503 +0000 UTC m=+0.062396972 container create 251118b23a16f2c8de0ca8d57349f193e5d3cca94a3516e2503385790ccbf6a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_wing, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct  1 10:12:31 np0005464214 systemd[1]: Started libpod-conmon-251118b23a16f2c8de0ca8d57349f193e5d3cca94a3516e2503385790ccbf6a2.scope.
Oct  1 10:12:31 np0005464214 systemd[1]: Started libcrun container.
Oct  1 10:12:31 np0005464214 podman[308161]: 2025-10-01 14:12:31.781152596 +0000 UTC m=+0.037999318 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 10:12:31 np0005464214 podman[308161]: 2025-10-01 14:12:31.88367985 +0000 UTC m=+0.140526542 container init 251118b23a16f2c8de0ca8d57349f193e5d3cca94a3516e2503385790ccbf6a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_wing, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct  1 10:12:31 np0005464214 podman[308161]: 2025-10-01 14:12:31.890206298 +0000 UTC m=+0.147052970 container start 251118b23a16f2c8de0ca8d57349f193e5d3cca94a3516e2503385790ccbf6a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_wing, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 10:12:31 np0005464214 podman[308161]: 2025-10-01 14:12:31.894493904 +0000 UTC m=+0.151340636 container attach 251118b23a16f2c8de0ca8d57349f193e5d3cca94a3516e2503385790ccbf6a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_wing, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct  1 10:12:31 np0005464214 wonderful_wing[308177]: 167 167
Oct  1 10:12:31 np0005464214 systemd[1]: libpod-251118b23a16f2c8de0ca8d57349f193e5d3cca94a3516e2503385790ccbf6a2.scope: Deactivated successfully.
Oct  1 10:12:31 np0005464214 podman[308161]: 2025-10-01 14:12:31.897409356 +0000 UTC m=+0.154256038 container died 251118b23a16f2c8de0ca8d57349f193e5d3cca94a3516e2503385790ccbf6a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_wing, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True)
Oct  1 10:12:31 np0005464214 systemd[1]: var-lib-containers-storage-overlay-1ba4a732b8be5bf5d292dd78d042413295666695a23026614d5b3fe0e4e2a759-merged.mount: Deactivated successfully.
Oct  1 10:12:31 np0005464214 podman[308161]: 2025-10-01 14:12:31.94164055 +0000 UTC m=+0.198487192 container remove 251118b23a16f2c8de0ca8d57349f193e5d3cca94a3516e2503385790ccbf6a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_wing, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 10:12:31 np0005464214 systemd[1]: libpod-conmon-251118b23a16f2c8de0ca8d57349f193e5d3cca94a3516e2503385790ccbf6a2.scope: Deactivated successfully.
Oct  1 10:12:32 np0005464214 podman[308201]: 2025-10-01 14:12:32.141200466 +0000 UTC m=+0.053538261 container create dd232aa7dca824d0b6f04f415fb143131f4c85766100453949cee030a0384f68 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_dhawan, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct  1 10:12:32 np0005464214 systemd[1]: Started libpod-conmon-dd232aa7dca824d0b6f04f415fb143131f4c85766100453949cee030a0384f68.scope.
Oct  1 10:12:32 np0005464214 podman[308201]: 2025-10-01 14:12:32.113772035 +0000 UTC m=+0.026109860 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 10:12:32 np0005464214 systemd[1]: Started libcrun container.
Oct  1 10:12:32 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d77a3730ff868898ad2f3ea5ec5d9d76384310843071ae0c310c0dc196f63207/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 10:12:32 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d77a3730ff868898ad2f3ea5ec5d9d76384310843071ae0c310c0dc196f63207/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 10:12:32 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d77a3730ff868898ad2f3ea5ec5d9d76384310843071ae0c310c0dc196f63207/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 10:12:32 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d77a3730ff868898ad2f3ea5ec5d9d76384310843071ae0c310c0dc196f63207/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 10:12:32 np0005464214 podman[308201]: 2025-10-01 14:12:32.257781016 +0000 UTC m=+0.170118821 container init dd232aa7dca824d0b6f04f415fb143131f4c85766100453949cee030a0384f68 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_dhawan, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Oct  1 10:12:32 np0005464214 podman[308201]: 2025-10-01 14:12:32.26984705 +0000 UTC m=+0.182184875 container start dd232aa7dca824d0b6f04f415fb143131f4c85766100453949cee030a0384f68 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_dhawan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct  1 10:12:32 np0005464214 podman[308201]: 2025-10-01 14:12:32.274461566 +0000 UTC m=+0.186799371 container attach dd232aa7dca824d0b6f04f415fb143131f4c85766100453949cee030a0384f68 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_dhawan, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 10:12:32 np0005464214 nova_compute[260022]: 2025-10-01 14:12:32.647 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  1 10:12:32 np0005464214 nova_compute[260022]: 2025-10-01 14:12:32.648 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  1 10:12:32 np0005464214 nova_compute[260022]: 2025-10-01 14:12:32.649 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct  1 10:12:32 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e195 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 10:12:33 np0005464214 hopeful_dhawan[308217]: {
Oct  1 10:12:33 np0005464214 hopeful_dhawan[308217]:    "0": [
Oct  1 10:12:33 np0005464214 hopeful_dhawan[308217]:        {
Oct  1 10:12:33 np0005464214 hopeful_dhawan[308217]:            "devices": [
Oct  1 10:12:33 np0005464214 hopeful_dhawan[308217]:                "/dev/loop3"
Oct  1 10:12:33 np0005464214 hopeful_dhawan[308217]:            ],
Oct  1 10:12:33 np0005464214 hopeful_dhawan[308217]:            "lv_name": "ceph_lv0",
Oct  1 10:12:33 np0005464214 hopeful_dhawan[308217]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  1 10:12:33 np0005464214 hopeful_dhawan[308217]:            "lv_size": "21470642176",
Oct  1 10:12:33 np0005464214 hopeful_dhawan[308217]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 10:12:33 np0005464214 hopeful_dhawan[308217]:            "lv_uuid": "wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS",
Oct  1 10:12:33 np0005464214 hopeful_dhawan[308217]:            "name": "ceph_lv0",
Oct  1 10:12:33 np0005464214 hopeful_dhawan[308217]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  1 10:12:33 np0005464214 hopeful_dhawan[308217]:            "tags": {
Oct  1 10:12:33 np0005464214 hopeful_dhawan[308217]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  1 10:12:33 np0005464214 hopeful_dhawan[308217]:                "ceph.block_uuid": "wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS",
Oct  1 10:12:33 np0005464214 hopeful_dhawan[308217]:                "ceph.cephx_lockbox_secret": "",
Oct  1 10:12:33 np0005464214 hopeful_dhawan[308217]:                "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 10:12:33 np0005464214 hopeful_dhawan[308217]:                "ceph.cluster_name": "ceph",
Oct  1 10:12:33 np0005464214 hopeful_dhawan[308217]:                "ceph.crush_device_class": "",
Oct  1 10:12:33 np0005464214 hopeful_dhawan[308217]:                "ceph.encrypted": "0",
Oct  1 10:12:33 np0005464214 hopeful_dhawan[308217]:                "ceph.osd_fsid": "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982",
Oct  1 10:12:33 np0005464214 hopeful_dhawan[308217]:                "ceph.osd_id": "0",
Oct  1 10:12:33 np0005464214 hopeful_dhawan[308217]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 10:12:33 np0005464214 hopeful_dhawan[308217]:                "ceph.type": "block",
Oct  1 10:12:33 np0005464214 hopeful_dhawan[308217]:                "ceph.vdo": "0"
Oct  1 10:12:33 np0005464214 hopeful_dhawan[308217]:            },
Oct  1 10:12:33 np0005464214 hopeful_dhawan[308217]:            "type": "block",
Oct  1 10:12:33 np0005464214 hopeful_dhawan[308217]:            "vg_name": "ceph_vg0"
Oct  1 10:12:33 np0005464214 hopeful_dhawan[308217]:        }
Oct  1 10:12:33 np0005464214 hopeful_dhawan[308217]:    ],
Oct  1 10:12:33 np0005464214 hopeful_dhawan[308217]:    "1": [
Oct  1 10:12:33 np0005464214 hopeful_dhawan[308217]:        {
Oct  1 10:12:33 np0005464214 hopeful_dhawan[308217]:            "devices": [
Oct  1 10:12:33 np0005464214 hopeful_dhawan[308217]:                "/dev/loop4"
Oct  1 10:12:33 np0005464214 hopeful_dhawan[308217]:            ],
Oct  1 10:12:33 np0005464214 hopeful_dhawan[308217]:            "lv_name": "ceph_lv1",
Oct  1 10:12:33 np0005464214 hopeful_dhawan[308217]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  1 10:12:33 np0005464214 hopeful_dhawan[308217]:            "lv_size": "21470642176",
Oct  1 10:12:33 np0005464214 hopeful_dhawan[308217]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f5852bc7-e830-489a-b8a9-42dfbbe71426,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 10:12:33 np0005464214 hopeful_dhawan[308217]:            "lv_uuid": "BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY",
Oct  1 10:12:33 np0005464214 hopeful_dhawan[308217]:            "name": "ceph_lv1",
Oct  1 10:12:33 np0005464214 hopeful_dhawan[308217]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  1 10:12:33 np0005464214 hopeful_dhawan[308217]:            "tags": {
Oct  1 10:12:33 np0005464214 hopeful_dhawan[308217]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  1 10:12:33 np0005464214 hopeful_dhawan[308217]:                "ceph.block_uuid": "BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY",
Oct  1 10:12:33 np0005464214 hopeful_dhawan[308217]:                "ceph.cephx_lockbox_secret": "",
Oct  1 10:12:33 np0005464214 hopeful_dhawan[308217]:                "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 10:12:33 np0005464214 hopeful_dhawan[308217]:                "ceph.cluster_name": "ceph",
Oct  1 10:12:33 np0005464214 hopeful_dhawan[308217]:                "ceph.crush_device_class": "",
Oct  1 10:12:33 np0005464214 hopeful_dhawan[308217]:                "ceph.encrypted": "0",
Oct  1 10:12:33 np0005464214 hopeful_dhawan[308217]:                "ceph.osd_fsid": "f5852bc7-e830-489a-b8a9-42dfbbe71426",
Oct  1 10:12:33 np0005464214 hopeful_dhawan[308217]:                "ceph.osd_id": "1",
Oct  1 10:12:33 np0005464214 hopeful_dhawan[308217]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 10:12:33 np0005464214 hopeful_dhawan[308217]:                "ceph.type": "block",
Oct  1 10:12:33 np0005464214 hopeful_dhawan[308217]:                "ceph.vdo": "0"
Oct  1 10:12:33 np0005464214 hopeful_dhawan[308217]:            },
Oct  1 10:12:33 np0005464214 hopeful_dhawan[308217]:            "type": "block",
Oct  1 10:12:33 np0005464214 hopeful_dhawan[308217]:            "vg_name": "ceph_vg1"
Oct  1 10:12:33 np0005464214 hopeful_dhawan[308217]:        }
Oct  1 10:12:33 np0005464214 hopeful_dhawan[308217]:    ],
Oct  1 10:12:33 np0005464214 hopeful_dhawan[308217]:    "2": [
Oct  1 10:12:33 np0005464214 hopeful_dhawan[308217]:        {
Oct  1 10:12:33 np0005464214 hopeful_dhawan[308217]:            "devices": [
Oct  1 10:12:33 np0005464214 hopeful_dhawan[308217]:                "/dev/loop5"
Oct  1 10:12:33 np0005464214 hopeful_dhawan[308217]:            ],
Oct  1 10:12:33 np0005464214 hopeful_dhawan[308217]:            "lv_name": "ceph_lv2",
Oct  1 10:12:33 np0005464214 hopeful_dhawan[308217]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  1 10:12:33 np0005464214 hopeful_dhawan[308217]:            "lv_size": "21470642176",
Oct  1 10:12:33 np0005464214 hopeful_dhawan[308217]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c4c937e2-a8a8-47c3-af37-fdedb6fff1f9,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 10:12:33 np0005464214 hopeful_dhawan[308217]:            "lv_uuid": "mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK",
Oct  1 10:12:33 np0005464214 hopeful_dhawan[308217]:            "name": "ceph_lv2",
Oct  1 10:12:33 np0005464214 hopeful_dhawan[308217]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  1 10:12:33 np0005464214 hopeful_dhawan[308217]:            "tags": {
Oct  1 10:12:33 np0005464214 hopeful_dhawan[308217]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  1 10:12:33 np0005464214 hopeful_dhawan[308217]:                "ceph.block_uuid": "mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK",
Oct  1 10:12:33 np0005464214 hopeful_dhawan[308217]:                "ceph.cephx_lockbox_secret": "",
Oct  1 10:12:33 np0005464214 hopeful_dhawan[308217]:                "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 10:12:33 np0005464214 hopeful_dhawan[308217]:                "ceph.cluster_name": "ceph",
Oct  1 10:12:33 np0005464214 hopeful_dhawan[308217]:                "ceph.crush_device_class": "",
Oct  1 10:12:33 np0005464214 hopeful_dhawan[308217]:                "ceph.encrypted": "0",
Oct  1 10:12:33 np0005464214 hopeful_dhawan[308217]:                "ceph.osd_fsid": "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9",
Oct  1 10:12:33 np0005464214 hopeful_dhawan[308217]:                "ceph.osd_id": "2",
Oct  1 10:12:33 np0005464214 hopeful_dhawan[308217]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 10:12:33 np0005464214 hopeful_dhawan[308217]:                "ceph.type": "block",
Oct  1 10:12:33 np0005464214 hopeful_dhawan[308217]:                "ceph.vdo": "0"
Oct  1 10:12:33 np0005464214 hopeful_dhawan[308217]:            },
Oct  1 10:12:33 np0005464214 hopeful_dhawan[308217]:            "type": "block",
Oct  1 10:12:33 np0005464214 hopeful_dhawan[308217]:            "vg_name": "ceph_vg2"
Oct  1 10:12:33 np0005464214 hopeful_dhawan[308217]:        }
Oct  1 10:12:33 np0005464214 hopeful_dhawan[308217]:    ]
Oct  1 10:12:33 np0005464214 hopeful_dhawan[308217]: }
Oct  1 10:12:33 np0005464214 systemd[1]: libpod-dd232aa7dca824d0b6f04f415fb143131f4c85766100453949cee030a0384f68.scope: Deactivated successfully.
Oct  1 10:12:33 np0005464214 podman[308201]: 2025-10-01 14:12:33.067189982 +0000 UTC m=+0.979527777 container died dd232aa7dca824d0b6f04f415fb143131f4c85766100453949cee030a0384f68 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_dhawan, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct  1 10:12:33 np0005464214 systemd[1]: var-lib-containers-storage-overlay-d77a3730ff868898ad2f3ea5ec5d9d76384310843071ae0c310c0dc196f63207-merged.mount: Deactivated successfully.
Oct  1 10:12:33 np0005464214 podman[308201]: 2025-10-01 14:12:33.125184424 +0000 UTC m=+1.037522219 container remove dd232aa7dca824d0b6f04f415fb143131f4c85766100453949cee030a0384f68 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_dhawan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct  1 10:12:33 np0005464214 systemd[1]: libpod-conmon-dd232aa7dca824d0b6f04f415fb143131f4c85766100453949cee030a0384f68.scope: Deactivated successfully.
Oct  1 10:12:33 np0005464214 nova_compute[260022]: 2025-10-01 14:12:33.345 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  1 10:12:33 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2098: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:12:33 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 14:12:33.710 161890 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=30, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'd2:60:33', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '86:05:a5:2a:6f:f1'}, ipsec=False) old=SB_Global(nb_cfg=29) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct  1 10:12:33 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 14:12:33.712 161890 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct  1 10:12:33 np0005464214 podman[308379]: 2025-10-01 14:12:33.963767245 +0000 UTC m=+0.073177074 container create 707c7f39ed03e3906acf17449b327fc526a2933224a4aed592e52999125c331b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_elbakyan, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 10:12:34 np0005464214 systemd[1]: Started libpod-conmon-707c7f39ed03e3906acf17449b327fc526a2933224a4aed592e52999125c331b.scope.
Oct  1 10:12:34 np0005464214 podman[308379]: 2025-10-01 14:12:33.936029454 +0000 UTC m=+0.045439343 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 10:12:34 np0005464214 systemd[1]: Started libcrun container.
Oct  1 10:12:34 np0005464214 podman[308379]: 2025-10-01 14:12:34.077759803 +0000 UTC m=+0.187169732 container init 707c7f39ed03e3906acf17449b327fc526a2933224a4aed592e52999125c331b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_elbakyan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 10:12:34 np0005464214 podman[308379]: 2025-10-01 14:12:34.088309129 +0000 UTC m=+0.197718958 container start 707c7f39ed03e3906acf17449b327fc526a2933224a4aed592e52999125c331b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_elbakyan, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Oct  1 10:12:34 np0005464214 podman[308379]: 2025-10-01 14:12:34.092659687 +0000 UTC m=+0.202069616 container attach 707c7f39ed03e3906acf17449b327fc526a2933224a4aed592e52999125c331b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_elbakyan, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct  1 10:12:34 np0005464214 mystifying_elbakyan[308395]: 167 167
Oct  1 10:12:34 np0005464214 systemd[1]: libpod-707c7f39ed03e3906acf17449b327fc526a2933224a4aed592e52999125c331b.scope: Deactivated successfully.
Oct  1 10:12:34 np0005464214 podman[308379]: 2025-10-01 14:12:34.097218921 +0000 UTC m=+0.206628760 container died 707c7f39ed03e3906acf17449b327fc526a2933224a4aed592e52999125c331b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_elbakyan, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 10:12:34 np0005464214 systemd[1]: var-lib-containers-storage-overlay-b94264f0cbc97a1b45247f2f97b63eaa5bf4803e54012846e92225b9a1ad8b6f-merged.mount: Deactivated successfully.
Oct  1 10:12:34 np0005464214 podman[308379]: 2025-10-01 14:12:34.149814231 +0000 UTC m=+0.259224020 container remove 707c7f39ed03e3906acf17449b327fc526a2933224a4aed592e52999125c331b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_elbakyan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True)
Oct  1 10:12:34 np0005464214 systemd[1]: libpod-conmon-707c7f39ed03e3906acf17449b327fc526a2933224a4aed592e52999125c331b.scope: Deactivated successfully.
Oct  1 10:12:34 np0005464214 nova_compute[260022]: 2025-10-01 14:12:34.345 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 10:12:34 np0005464214 podman[308419]: 2025-10-01 14:12:34.328162723 +0000 UTC m=+0.031326056 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 10:12:34 np0005464214 podman[308419]: 2025-10-01 14:12:34.445398135 +0000 UTC m=+0.148561388 container create 3780d73c681e7862b665a5a7eeeedc82a73b052b9bebb3652c4a8690c3f577fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_lederberg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 10:12:34 np0005464214 systemd[1]: Started libpod-conmon-3780d73c681e7862b665a5a7eeeedc82a73b052b9bebb3652c4a8690c3f577fc.scope.
Oct  1 10:12:34 np0005464214 systemd[1]: Started libcrun container.
Oct  1 10:12:34 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e2e7be7f68b5d26b6aa1aa44c0cc2b86737e791943b294abc25375bb72783044/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 10:12:34 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e2e7be7f68b5d26b6aa1aa44c0cc2b86737e791943b294abc25375bb72783044/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 10:12:34 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e2e7be7f68b5d26b6aa1aa44c0cc2b86737e791943b294abc25375bb72783044/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 10:12:34 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e2e7be7f68b5d26b6aa1aa44c0cc2b86737e791943b294abc25375bb72783044/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 10:12:34 np0005464214 podman[308419]: 2025-10-01 14:12:34.704331555 +0000 UTC m=+0.407494838 container init 3780d73c681e7862b665a5a7eeeedc82a73b052b9bebb3652c4a8690c3f577fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_lederberg, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Oct  1 10:12:34 np0005464214 podman[308419]: 2025-10-01 14:12:34.716098758 +0000 UTC m=+0.419262021 container start 3780d73c681e7862b665a5a7eeeedc82a73b052b9bebb3652c4a8690c3f577fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_lederberg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 10:12:34 np0005464214 podman[308419]: 2025-10-01 14:12:34.723850655 +0000 UTC m=+0.427013978 container attach 3780d73c681e7862b665a5a7eeeedc82a73b052b9bebb3652c4a8690c3f577fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_lederberg, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3)
Oct  1 10:12:35 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2099: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:12:35 np0005464214 eager_lederberg[308436]: {
Oct  1 10:12:35 np0005464214 eager_lederberg[308436]:    "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982": {
Oct  1 10:12:35 np0005464214 eager_lederberg[308436]:        "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 10:12:35 np0005464214 eager_lederberg[308436]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  1 10:12:35 np0005464214 eager_lederberg[308436]:        "osd_id": 0,
Oct  1 10:12:35 np0005464214 eager_lederberg[308436]:        "osd_uuid": "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982",
Oct  1 10:12:35 np0005464214 eager_lederberg[308436]:        "type": "bluestore"
Oct  1 10:12:35 np0005464214 eager_lederberg[308436]:    },
Oct  1 10:12:35 np0005464214 eager_lederberg[308436]:    "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9": {
Oct  1 10:12:35 np0005464214 eager_lederberg[308436]:        "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 10:12:35 np0005464214 eager_lederberg[308436]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  1 10:12:35 np0005464214 eager_lederberg[308436]:        "osd_id": 2,
Oct  1 10:12:35 np0005464214 eager_lederberg[308436]:        "osd_uuid": "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9",
Oct  1 10:12:35 np0005464214 eager_lederberg[308436]:        "type": "bluestore"
Oct  1 10:12:35 np0005464214 eager_lederberg[308436]:    },
Oct  1 10:12:35 np0005464214 eager_lederberg[308436]:    "f5852bc7-e830-489a-b8a9-42dfbbe71426": {
Oct  1 10:12:35 np0005464214 eager_lederberg[308436]:        "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 10:12:35 np0005464214 eager_lederberg[308436]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  1 10:12:35 np0005464214 eager_lederberg[308436]:        "osd_id": 1,
Oct  1 10:12:35 np0005464214 eager_lederberg[308436]:        "osd_uuid": "f5852bc7-e830-489a-b8a9-42dfbbe71426",
Oct  1 10:12:35 np0005464214 eager_lederberg[308436]:        "type": "bluestore"
Oct  1 10:12:35 np0005464214 eager_lederberg[308436]:    }
Oct  1 10:12:35 np0005464214 eager_lederberg[308436]: }
Oct  1 10:12:35 np0005464214 systemd[1]: libpod-3780d73c681e7862b665a5a7eeeedc82a73b052b9bebb3652c4a8690c3f577fc.scope: Deactivated successfully.
Oct  1 10:12:35 np0005464214 podman[308419]: 2025-10-01 14:12:35.742643197 +0000 UTC m=+1.445806420 container died 3780d73c681e7862b665a5a7eeeedc82a73b052b9bebb3652c4a8690c3f577fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_lederberg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 10:12:35 np0005464214 systemd[1]: libpod-3780d73c681e7862b665a5a7eeeedc82a73b052b9bebb3652c4a8690c3f577fc.scope: Consumed 1.034s CPU time.
Oct  1 10:12:35 np0005464214 systemd[1]: var-lib-containers-storage-overlay-e2e7be7f68b5d26b6aa1aa44c0cc2b86737e791943b294abc25375bb72783044-merged.mount: Deactivated successfully.
Oct  1 10:12:36 np0005464214 podman[308419]: 2025-10-01 14:12:36.040985818 +0000 UTC m=+1.744149091 container remove 3780d73c681e7862b665a5a7eeeedc82a73b052b9bebb3652c4a8690c3f577fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_lederberg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 10:12:36 np0005464214 systemd[1]: libpod-conmon-3780d73c681e7862b665a5a7eeeedc82a73b052b9bebb3652c4a8690c3f577fc.scope: Deactivated successfully.
Oct  1 10:12:36 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  1 10:12:36 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 10:12:36 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  1 10:12:36 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 10:12:36 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev 444afb38-9def-4805-b08e-2caa0cf06be1 does not exist
Oct  1 10:12:36 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev 4b2a4260-7777-4c9e-8298-254742c51710 does not exist
Oct  1 10:12:37 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 10:12:37 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 10:12:37 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2100: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:12:37 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e195 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 10:12:38 np0005464214 nova_compute[260022]: 2025-10-01 14:12:38.345 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 10:12:38 np0005464214 nova_compute[260022]: 2025-10-01 14:12:38.345 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  1 10:12:38 np0005464214 nova_compute[260022]: 2025-10-01 14:12:38.345 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  1 10:12:38 np0005464214 nova_compute[260022]: 2025-10-01 14:12:38.360 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  1 10:12:38 np0005464214 nova_compute[260022]: 2025-10-01 14:12:38.360 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 10:12:38 np0005464214 nova_compute[260022]: 2025-10-01 14:12:38.361 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 10:12:39 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2101: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:12:41 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e195 do_prune osdmap full prune enabled
Oct  1 10:12:41 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e196 e196: 3 total, 3 up, 3 in
Oct  1 10:12:41 np0005464214 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e196: 3 total, 3 up, 3 in
Oct  1 10:12:41 np0005464214 podman[308536]: 2025-10-01 14:12:41.51765614 +0000 UTC m=+0.057480116 container health_status dfa509645741d50db256c392f2c7e50ef8a1653f296eb853aefbe3d2ba4746c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.build-date=20250923, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible)
Oct  1 10:12:41 np0005464214 podman[308535]: 2025-10-01 14:12:41.519021483 +0000 UTC m=+0.062417732 container health_status c13c85560319b246b9406aed1b6b3015b2c424d3ffff37d0301b6634f5976b0d (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20250923, org.label-schema.license=GPLv2, tcib_managed=true, container_name=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, org.label-schema.vendor=CentOS)
Oct  1 10:12:41 np0005464214 podman[308534]: 2025-10-01 14:12:41.524331902 +0000 UTC m=+0.071229192 container health_status a1dc94033b3cdaf0c1939938a446bc6911aa0e2bf0fa0c22c3e618390e8d96e1 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20250923, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, config_id=multipathd, managed_by=edpm_ansible, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct  1 10:12:41 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2103: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:12:41 np0005464214 podman[308533]: 2025-10-01 14:12:41.570938142 +0000 UTC m=+0.118643878 container health_status 583cde33fa111e3f81ff9a7adf51b7bee43cd123d54d60383b2d474b3b5066ad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, tcib_managed=true, org.label-schema.build-date=20250923, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, io.buildah.version=1.41.3)
Oct  1 10:12:41 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 14:12:41.714 161890 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=7280030e-2ba6-406c-9fae-f8284a927c47, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '30'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  1 10:12:42 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e196 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 10:12:43 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2104: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 24 op/s
Oct  1 10:12:45 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2105: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 24 op/s
Oct  1 10:12:47 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2106: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 24 op/s
Oct  1 10:12:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 10:12:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 10:12:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 10:12:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 10:12:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 10:12:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 10:12:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] Optimize plan auto_2025-10-01_14:12:47
Oct  1 10:12:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  1 10:12:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] do_upmap
Oct  1 10:12:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] pools ['.rgw.root', 'default.rgw.control', '.mgr', 'backups', 'vms', 'volumes', 'default.rgw.log', 'cephfs.cephfs.data', 'images', 'cephfs.cephfs.meta', 'default.rgw.meta']
Oct  1 10:12:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] prepared 0/10 changes
Oct  1 10:12:47 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e196 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 10:12:47 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e196 do_prune osdmap full prune enabled
Oct  1 10:12:48 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e197 e197: 3 total, 3 up, 3 in
Oct  1 10:12:48 np0005464214 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e197: 3 total, 3 up, 3 in
Oct  1 10:12:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  1 10:12:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  1 10:12:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  1 10:12:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  1 10:12:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  1 10:12:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  1 10:12:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  1 10:12:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  1 10:12:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  1 10:12:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  1 10:12:49 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2108: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 1.7 KiB/s wr, 30 op/s
Oct  1 10:12:51 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2109: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 24 op/s
Oct  1 10:12:52 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e197 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 10:12:53 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2110: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail; 102 B/s rd, 0 op/s
Oct  1 10:12:55 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  1 10:12:55 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3931144448' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  1 10:12:55 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  1 10:12:55 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3931144448' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  1 10:12:55 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2111: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail; 102 B/s rd, 0 op/s
Oct  1 10:12:57 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2112: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:12:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] _maybe_adjust
Oct  1 10:12:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 10:12:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  1 10:12:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 10:12:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 10:12:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 10:12:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 10:12:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 10:12:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 10:12:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 10:12:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Oct  1 10:12:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 10:12:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct  1 10:12:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 10:12:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 10:12:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 10:12:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  1 10:12:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 10:12:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  1 10:12:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 10:12:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 10:12:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 10:12:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
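The pg_autoscaler lines above all follow one arithmetic pattern: the printed "pg target" equals the pool's usage fraction times its bias times a cluster-wide PG budget. A minimal sketch reproducing that arithmetic, assuming the budget constant is 300 (it matches every pool printed here, e.g. a 3-OSD cluster with the default 100 target PGs per OSD); the subsequent quantization to a power of two (subject to pool minimums and a change threshold) is not modeled:

```python
def pg_target(usage_fraction: float, bias: float, capacity_pgs: float = 300.0) -> float:
    """Raw PG target as logged by pg_autoscaler: usage * bias * PG budget.

    capacity_pgs=300 is an assumption inferred from the log lines above;
    the real module derives it from OSD count and mon_target_pg_per_osd.
    """
    return usage_fraction * bias * capacity_pgs

# Values copied from the log lines above:
images_target = pg_target(0.000665858301588852, 1.0)      # pool 'images'
meta_target = pg_target(5.087256625643029e-07, 4.0)       # 'cephfs.cephfs.meta', bias 4.0
```

Plugging in the logged usage fractions reproduces the logged targets (0.19975749… for 'images', 0.0006104707… for 'cephfs.cephfs.meta'), which is what pins down the 300 assumption.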
Oct  1 10:12:57 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e197 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 10:12:59 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2113: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:13:01 np0005464214 nova_compute[260022]: 2025-10-01 14:13:01.357 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 10:13:01 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2114: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:13:02 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e197 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 10:13:03 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2115: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:13:05 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2116: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:13:07 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2117: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:13:07 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e197 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 10:13:09 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2118: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:13:11 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2119: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:13:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 14:13:12.338 161890 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 10:13:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 14:13:12.339 161890 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 10:13:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 14:13:12.339 161890 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 10:13:12 np0005464214 podman[308618]: 2025-10-01 14:13:12.515508514 +0000 UTC m=+0.065738077 container health_status a1dc94033b3cdaf0c1939938a446bc6911aa0e2bf0fa0c22c3e618390e8d96e1 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20250923, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 10:13:12 np0005464214 podman[308625]: 2025-10-01 14:13:12.53081059 +0000 UTC m=+0.073453623 container health_status dfa509645741d50db256c392f2c7e50ef8a1653f296eb853aefbe3d2ba4746c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20250923, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 10:13:12 np0005464214 podman[308617]: 2025-10-01 14:13:12.546568 +0000 UTC m=+0.102894557 container health_status 583cde33fa111e3f81ff9a7adf51b7bee43cd123d54d60383b2d474b3b5066ad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250923, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Oct  1 10:13:12 np0005464214 podman[308619]: 2025-10-01 14:13:12.54656771 +0000 UTC m=+0.093698965 container health_status c13c85560319b246b9406aed1b6b3015b2c424d3ffff37d0301b6634f5976b0d (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_id=iscsid, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.build-date=20250923, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3)
Oct  1 10:13:12 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e197 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 10:13:13 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2120: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:13:15 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2121: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:13:17 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2122: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:13:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 10:13:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 10:13:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 10:13:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 10:13:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 10:13:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 10:13:17 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e197 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 10:13:19 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2123: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:13:21 np0005464214 nova_compute[260022]: 2025-10-01 14:13:21.344 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 10:13:21 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2124: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:13:23 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e197 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 10:13:23 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2125: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:13:25 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2126: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:13:27 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2127: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:13:28 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e197 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 10:13:29 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2128: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:13:30 np0005464214 nova_compute[260022]: 2025-10-01 14:13:30.344 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 10:13:30 np0005464214 nova_compute[260022]: 2025-10-01 14:13:30.373 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 10:13:30 np0005464214 nova_compute[260022]: 2025-10-01 14:13:30.374 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 10:13:30 np0005464214 nova_compute[260022]: 2025-10-01 14:13:30.374 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 10:13:30 np0005464214 nova_compute[260022]: 2025-10-01 14:13:30.375 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  1 10:13:30 np0005464214 nova_compute[260022]: 2025-10-01 14:13:30.375 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 10:13:30 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  1 10:13:30 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1123375352' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  1 10:13:30 np0005464214 nova_compute[260022]: 2025-10-01 14:13:30.882 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.507s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 10:13:31 np0005464214 nova_compute[260022]: 2025-10-01 14:13:31.073 2 WARNING nova.virt.libvirt.driver [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  1 10:13:31 np0005464214 nova_compute[260022]: 2025-10-01 14:13:31.074 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5026MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  1 10:13:31 np0005464214 nova_compute[260022]: 2025-10-01 14:13:31.074 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 10:13:31 np0005464214 nova_compute[260022]: 2025-10-01 14:13:31.075 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 10:13:31 np0005464214 nova_compute[260022]: 2025-10-01 14:13:31.173 2 INFO nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Instance 3d91db2b-812b-47ea-a0f8-0384b4c68597 has allocations against this compute host but is not found in the database.#033[00m
Oct  1 10:13:31 np0005464214 nova_compute[260022]: 2025-10-01 14:13:31.187 2 INFO nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Instance 84424f30-80d3-425e-b60f-86809ad3c076 has allocations against this compute host but is not found in the database.#033[00m
Oct  1 10:13:31 np0005464214 nova_compute[260022]: 2025-10-01 14:13:31.187 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  1 10:13:31 np0005464214 nova_compute[260022]: 2025-10-01 14:13:31.187 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  1 10:13:31 np0005464214 nova_compute[260022]: 2025-10-01 14:13:31.232 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 10:13:31 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2129: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:13:31 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  1 10:13:31 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/644878861' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  1 10:13:31 np0005464214 nova_compute[260022]: 2025-10-01 14:13:31.665 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.433s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 10:13:31 np0005464214 nova_compute[260022]: 2025-10-01 14:13:31.672 2 DEBUG nova.compute.provider_tree [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Inventory has not changed in ProviderTree for provider: c1b9017d-7e6f-44ea-9ee2-bc19313d736f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  1 10:13:31 np0005464214 nova_compute[260022]: 2025-10-01 14:13:31.693 2 DEBUG nova.scheduler.client.report [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Inventory has not changed for provider c1b9017d-7e6f-44ea-9ee2-bc19313d736f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  1 10:13:31 np0005464214 nova_compute[260022]: 2025-10-01 14:13:31.695 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  1 10:13:31 np0005464214 nova_compute[260022]: 2025-10-01 14:13:31.695 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.621s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 10:13:33 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e197 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 10:13:33 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2130: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:13:33 np0005464214 nova_compute[260022]: 2025-10-01 14:13:33.692 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 10:13:33 np0005464214 nova_compute[260022]: 2025-10-01 14:13:33.693 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 10:13:33 np0005464214 nova_compute[260022]: 2025-10-01 14:13:33.693 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  1 10:13:34 np0005464214 nova_compute[260022]: 2025-10-01 14:13:34.345 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 10:13:35 np0005464214 nova_compute[260022]: 2025-10-01 14:13:35.346 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 10:13:35 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2131: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:13:37 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  1 10:13:37 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 10:13:37 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  1 10:13:37 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 10:13:37 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2132: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:13:38 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e197 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 10:13:38 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Oct  1 10:13:38 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct  1 10:13:38 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  1 10:13:38 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  1 10:13:38 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  1 10:13:38 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  1 10:13:38 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  1 10:13:38 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 10:13:38 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev 1bb09119-f937-4a0d-be16-2a75d1999a88 does not exist
Oct  1 10:13:38 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev 286ed1f2-f6b6-4b5e-ae4f-9bd6a4f14a19 does not exist
Oct  1 10:13:38 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev e799a6af-8ab2-4d12-96ed-e567873bcdc4 does not exist
Oct  1 10:13:38 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  1 10:13:38 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  1 10:13:38 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  1 10:13:38 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  1 10:13:38 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  1 10:13:38 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  1 10:13:38 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 10:13:38 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 10:13:38 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct  1 10:13:38 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  1 10:13:38 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 10:13:38 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  1 10:13:38 np0005464214 nova_compute[260022]: 2025-10-01 14:13:38.344 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 10:13:38 np0005464214 podman[309136]: 2025-10-01 14:13:38.769895316 +0000 UTC m=+0.071738099 container create b7954a4594e9520d7ab4caa3a33a17d81db8ff4f4f0ea72c54464ea4fc179348 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_dirac, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Oct  1 10:13:38 np0005464214 systemd[1]: Started libpod-conmon-b7954a4594e9520d7ab4caa3a33a17d81db8ff4f4f0ea72c54464ea4fc179348.scope.
Oct  1 10:13:38 np0005464214 podman[309136]: 2025-10-01 14:13:38.730279689 +0000 UTC m=+0.032122472 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 10:13:38 np0005464214 systemd[1]: Started libcrun container.
Oct  1 10:13:38 np0005464214 podman[309136]: 2025-10-01 14:13:38.867081021 +0000 UTC m=+0.168923864 container init b7954a4594e9520d7ab4caa3a33a17d81db8ff4f4f0ea72c54464ea4fc179348 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_dirac, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3)
Oct  1 10:13:38 np0005464214 podman[309136]: 2025-10-01 14:13:38.875308142 +0000 UTC m=+0.177150895 container start b7954a4594e9520d7ab4caa3a33a17d81db8ff4f4f0ea72c54464ea4fc179348 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_dirac, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 10:13:38 np0005464214 podman[309136]: 2025-10-01 14:13:38.879858256 +0000 UTC m=+0.181701079 container attach b7954a4594e9520d7ab4caa3a33a17d81db8ff4f4f0ea72c54464ea4fc179348 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_dirac, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct  1 10:13:38 np0005464214 adoring_dirac[309153]: 167 167
Oct  1 10:13:38 np0005464214 systemd[1]: libpod-b7954a4594e9520d7ab4caa3a33a17d81db8ff4f4f0ea72c54464ea4fc179348.scope: Deactivated successfully.
Oct  1 10:13:38 np0005464214 conmon[309153]: conmon b7954a4594e9520d7ab4 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b7954a4594e9520d7ab4caa3a33a17d81db8ff4f4f0ea72c54464ea4fc179348.scope/container/memory.events
Oct  1 10:13:38 np0005464214 podman[309136]: 2025-10-01 14:13:38.884064 +0000 UTC m=+0.185906753 container died b7954a4594e9520d7ab4caa3a33a17d81db8ff4f4f0ea72c54464ea4fc179348 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_dirac, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct  1 10:13:38 np0005464214 systemd[1]: var-lib-containers-storage-overlay-9d4de5148aa11e4e0c8199f8719592c0896606e4352930e2c555b24b6db758d9-merged.mount: Deactivated successfully.
Oct  1 10:13:38 np0005464214 podman[309136]: 2025-10-01 14:13:38.927282852 +0000 UTC m=+0.229125625 container remove b7954a4594e9520d7ab4caa3a33a17d81db8ff4f4f0ea72c54464ea4fc179348 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_dirac, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct  1 10:13:38 np0005464214 systemd[1]: libpod-conmon-b7954a4594e9520d7ab4caa3a33a17d81db8ff4f4f0ea72c54464ea4fc179348.scope: Deactivated successfully.
Oct  1 10:13:39 np0005464214 podman[309176]: 2025-10-01 14:13:39.146998437 +0000 UTC m=+0.058696614 container create 5254c823e72213985c39c94289c7460c873c7eb3843f3b3609f18541d2192bb0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_euclid, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 10:13:39 np0005464214 systemd[1]: Started libpod-conmon-5254c823e72213985c39c94289c7460c873c7eb3843f3b3609f18541d2192bb0.scope.
Oct  1 10:13:39 np0005464214 podman[309176]: 2025-10-01 14:13:39.116913373 +0000 UTC m=+0.028611590 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 10:13:39 np0005464214 systemd[1]: Started libcrun container.
Oct  1 10:13:39 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6eb481ef768718f08ee600c19d34df83755a24cef91b5ec811bfc429071856a8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 10:13:39 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6eb481ef768718f08ee600c19d34df83755a24cef91b5ec811bfc429071856a8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 10:13:39 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6eb481ef768718f08ee600c19d34df83755a24cef91b5ec811bfc429071856a8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 10:13:39 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6eb481ef768718f08ee600c19d34df83755a24cef91b5ec811bfc429071856a8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 10:13:39 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6eb481ef768718f08ee600c19d34df83755a24cef91b5ec811bfc429071856a8/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  1 10:13:39 np0005464214 podman[309176]: 2025-10-01 14:13:39.260626764 +0000 UTC m=+0.172324931 container init 5254c823e72213985c39c94289c7460c873c7eb3843f3b3609f18541d2192bb0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_euclid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct  1 10:13:39 np0005464214 podman[309176]: 2025-10-01 14:13:39.272699088 +0000 UTC m=+0.184397235 container start 5254c823e72213985c39c94289c7460c873c7eb3843f3b3609f18541d2192bb0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_euclid, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True)
Oct  1 10:13:39 np0005464214 podman[309176]: 2025-10-01 14:13:39.277073296 +0000 UTC m=+0.188771443 container attach 5254c823e72213985c39c94289c7460c873c7eb3843f3b3609f18541d2192bb0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_euclid, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 10:13:39 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2133: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:13:40 np0005464214 friendly_euclid[309193]: --> passed data devices: 0 physical, 3 LVM
Oct  1 10:13:40 np0005464214 friendly_euclid[309193]: --> relative data size: 1.0
Oct  1 10:13:40 np0005464214 friendly_euclid[309193]: --> All data devices are unavailable
Oct  1 10:13:40 np0005464214 nova_compute[260022]: 2025-10-01 14:13:40.345 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 10:13:40 np0005464214 nova_compute[260022]: 2025-10-01 14:13:40.346 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  1 10:13:40 np0005464214 nova_compute[260022]: 2025-10-01 14:13:40.347 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  1 10:13:40 np0005464214 nova_compute[260022]: 2025-10-01 14:13:40.361 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  1 10:13:40 np0005464214 nova_compute[260022]: 2025-10-01 14:13:40.361 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 10:13:40 np0005464214 systemd[1]: libpod-5254c823e72213985c39c94289c7460c873c7eb3843f3b3609f18541d2192bb0.scope: Deactivated successfully.
Oct  1 10:13:40 np0005464214 systemd[1]: libpod-5254c823e72213985c39c94289c7460c873c7eb3843f3b3609f18541d2192bb0.scope: Consumed 1.054s CPU time.
Oct  1 10:13:40 np0005464214 podman[309176]: 2025-10-01 14:13:40.365645884 +0000 UTC m=+1.277344031 container died 5254c823e72213985c39c94289c7460c873c7eb3843f3b3609f18541d2192bb0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_euclid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 10:13:40 np0005464214 systemd[1]: var-lib-containers-storage-overlay-6eb481ef768718f08ee600c19d34df83755a24cef91b5ec811bfc429071856a8-merged.mount: Deactivated successfully.
Oct  1 10:13:40 np0005464214 podman[309176]: 2025-10-01 14:13:40.426245378 +0000 UTC m=+1.337943535 container remove 5254c823e72213985c39c94289c7460c873c7eb3843f3b3609f18541d2192bb0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_euclid, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 10:13:40 np0005464214 systemd[1]: libpod-conmon-5254c823e72213985c39c94289c7460c873c7eb3843f3b3609f18541d2192bb0.scope: Deactivated successfully.
Oct  1 10:13:41 np0005464214 podman[309378]: 2025-10-01 14:13:41.129723021 +0000 UTC m=+0.051364022 container create db6efbea5d3c9999bba7ecd2663c7cf963d2ca3a0fad2788dc66eace01da7c99 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_kirch, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct  1 10:13:41 np0005464214 systemd[1]: Started libpod-conmon-db6efbea5d3c9999bba7ecd2663c7cf963d2ca3a0fad2788dc66eace01da7c99.scope.
Oct  1 10:13:41 np0005464214 podman[309378]: 2025-10-01 14:13:41.111794411 +0000 UTC m=+0.033435432 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 10:13:41 np0005464214 systemd[1]: Started libcrun container.
Oct  1 10:13:41 np0005464214 podman[309378]: 2025-10-01 14:13:41.236640714 +0000 UTC m=+0.158281755 container init db6efbea5d3c9999bba7ecd2663c7cf963d2ca3a0fad2788dc66eace01da7c99 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_kirch, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 10:13:41 np0005464214 podman[309378]: 2025-10-01 14:13:41.245783475 +0000 UTC m=+0.167424486 container start db6efbea5d3c9999bba7ecd2663c7cf963d2ca3a0fad2788dc66eace01da7c99 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_kirch, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 10:13:41 np0005464214 podman[309378]: 2025-10-01 14:13:41.248872393 +0000 UTC m=+0.170513434 container attach db6efbea5d3c9999bba7ecd2663c7cf963d2ca3a0fad2788dc66eace01da7c99 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_kirch, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct  1 10:13:41 np0005464214 musing_kirch[309395]: 167 167
Oct  1 10:13:41 np0005464214 systemd[1]: libpod-db6efbea5d3c9999bba7ecd2663c7cf963d2ca3a0fad2788dc66eace01da7c99.scope: Deactivated successfully.
Oct  1 10:13:41 np0005464214 podman[309378]: 2025-10-01 14:13:41.250935728 +0000 UTC m=+0.172576769 container died db6efbea5d3c9999bba7ecd2663c7cf963d2ca3a0fad2788dc66eace01da7c99 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_kirch, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct  1 10:13:41 np0005464214 systemd[1]: var-lib-containers-storage-overlay-09d43343032ceb89a3e24d211dd4aaa15d8d5f4c0e14e9a2a8feec812f93950b-merged.mount: Deactivated successfully.
Oct  1 10:13:41 np0005464214 podman[309378]: 2025-10-01 14:13:41.302808736 +0000 UTC m=+0.224449777 container remove db6efbea5d3c9999bba7ecd2663c7cf963d2ca3a0fad2788dc66eace01da7c99 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_kirch, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS)
Oct  1 10:13:41 np0005464214 systemd[1]: libpod-conmon-db6efbea5d3c9999bba7ecd2663c7cf963d2ca3a0fad2788dc66eace01da7c99.scope: Deactivated successfully.
Oct  1 10:13:41 np0005464214 podman[309419]: 2025-10-01 14:13:41.556204179 +0000 UTC m=+0.047093746 container create 8d99fbc577eaa1154bc0b24b10203ff6fdb5d266ca9edb3a0cc1a85d3c81422e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_goldstine, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 10:13:41 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2134: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:13:41 np0005464214 systemd[1]: Started libpod-conmon-8d99fbc577eaa1154bc0b24b10203ff6fdb5d266ca9edb3a0cc1a85d3c81422e.scope.
Oct  1 10:13:41 np0005464214 systemd[1]: Started libcrun container.
Oct  1 10:13:41 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d01020d600f066bd97405ae0db684b777afc00aed91f33f28cc2745fae74fe9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 10:13:41 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d01020d600f066bd97405ae0db684b777afc00aed91f33f28cc2745fae74fe9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 10:13:41 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d01020d600f066bd97405ae0db684b777afc00aed91f33f28cc2745fae74fe9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 10:13:41 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d01020d600f066bd97405ae0db684b777afc00aed91f33f28cc2745fae74fe9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 10:13:41 np0005464214 podman[309419]: 2025-10-01 14:13:41.537895679 +0000 UTC m=+0.028785296 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 10:13:41 np0005464214 podman[309419]: 2025-10-01 14:13:41.726439984 +0000 UTC m=+0.217329591 container init 8d99fbc577eaa1154bc0b24b10203ff6fdb5d266ca9edb3a0cc1a85d3c81422e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_goldstine, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 10:13:41 np0005464214 podman[309419]: 2025-10-01 14:13:41.732572948 +0000 UTC m=+0.223462525 container start 8d99fbc577eaa1154bc0b24b10203ff6fdb5d266ca9edb3a0cc1a85d3c81422e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_goldstine, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 10:13:41 np0005464214 podman[309419]: 2025-10-01 14:13:41.753341818 +0000 UTC m=+0.244231415 container attach 8d99fbc577eaa1154bc0b24b10203ff6fdb5d266ca9edb3a0cc1a85d3c81422e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_goldstine, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Oct  1 10:13:42 np0005464214 elated_goldstine[309436]: {
Oct  1 10:13:42 np0005464214 elated_goldstine[309436]:    "0": [
Oct  1 10:13:42 np0005464214 elated_goldstine[309436]:        {
Oct  1 10:13:42 np0005464214 elated_goldstine[309436]:            "devices": [
Oct  1 10:13:42 np0005464214 elated_goldstine[309436]:                "/dev/loop3"
Oct  1 10:13:42 np0005464214 elated_goldstine[309436]:            ],
Oct  1 10:13:42 np0005464214 elated_goldstine[309436]:            "lv_name": "ceph_lv0",
Oct  1 10:13:42 np0005464214 elated_goldstine[309436]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  1 10:13:42 np0005464214 elated_goldstine[309436]:            "lv_size": "21470642176",
Oct  1 10:13:42 np0005464214 elated_goldstine[309436]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 10:13:42 np0005464214 elated_goldstine[309436]:            "lv_uuid": "wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS",
Oct  1 10:13:42 np0005464214 elated_goldstine[309436]:            "name": "ceph_lv0",
Oct  1 10:13:42 np0005464214 elated_goldstine[309436]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  1 10:13:42 np0005464214 elated_goldstine[309436]:            "tags": {
Oct  1 10:13:42 np0005464214 elated_goldstine[309436]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  1 10:13:42 np0005464214 elated_goldstine[309436]:                "ceph.block_uuid": "wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS",
Oct  1 10:13:42 np0005464214 elated_goldstine[309436]:                "ceph.cephx_lockbox_secret": "",
Oct  1 10:13:42 np0005464214 elated_goldstine[309436]:                "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 10:13:42 np0005464214 elated_goldstine[309436]:                "ceph.cluster_name": "ceph",
Oct  1 10:13:42 np0005464214 elated_goldstine[309436]:                "ceph.crush_device_class": "",
Oct  1 10:13:42 np0005464214 elated_goldstine[309436]:                "ceph.encrypted": "0",
Oct  1 10:13:42 np0005464214 elated_goldstine[309436]:                "ceph.osd_fsid": "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982",
Oct  1 10:13:42 np0005464214 elated_goldstine[309436]:                "ceph.osd_id": "0",
Oct  1 10:13:42 np0005464214 elated_goldstine[309436]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 10:13:42 np0005464214 elated_goldstine[309436]:                "ceph.type": "block",
Oct  1 10:13:42 np0005464214 elated_goldstine[309436]:                "ceph.vdo": "0"
Oct  1 10:13:42 np0005464214 elated_goldstine[309436]:            },
Oct  1 10:13:42 np0005464214 elated_goldstine[309436]:            "type": "block",
Oct  1 10:13:42 np0005464214 elated_goldstine[309436]:            "vg_name": "ceph_vg0"
Oct  1 10:13:42 np0005464214 elated_goldstine[309436]:        }
Oct  1 10:13:42 np0005464214 elated_goldstine[309436]:    ],
Oct  1 10:13:42 np0005464214 elated_goldstine[309436]:    "1": [
Oct  1 10:13:42 np0005464214 elated_goldstine[309436]:        {
Oct  1 10:13:42 np0005464214 elated_goldstine[309436]:            "devices": [
Oct  1 10:13:42 np0005464214 elated_goldstine[309436]:                "/dev/loop4"
Oct  1 10:13:42 np0005464214 elated_goldstine[309436]:            ],
Oct  1 10:13:42 np0005464214 elated_goldstine[309436]:            "lv_name": "ceph_lv1",
Oct  1 10:13:42 np0005464214 elated_goldstine[309436]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  1 10:13:42 np0005464214 elated_goldstine[309436]:            "lv_size": "21470642176",
Oct  1 10:13:42 np0005464214 elated_goldstine[309436]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f5852bc7-e830-489a-b8a9-42dfbbe71426,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 10:13:42 np0005464214 elated_goldstine[309436]:            "lv_uuid": "BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY",
Oct  1 10:13:42 np0005464214 elated_goldstine[309436]:            "name": "ceph_lv1",
Oct  1 10:13:42 np0005464214 elated_goldstine[309436]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  1 10:13:42 np0005464214 elated_goldstine[309436]:            "tags": {
Oct  1 10:13:42 np0005464214 elated_goldstine[309436]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  1 10:13:42 np0005464214 elated_goldstine[309436]:                "ceph.block_uuid": "BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY",
Oct  1 10:13:42 np0005464214 elated_goldstine[309436]:                "ceph.cephx_lockbox_secret": "",
Oct  1 10:13:42 np0005464214 elated_goldstine[309436]:                "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 10:13:42 np0005464214 elated_goldstine[309436]:                "ceph.cluster_name": "ceph",
Oct  1 10:13:42 np0005464214 elated_goldstine[309436]:                "ceph.crush_device_class": "",
Oct  1 10:13:42 np0005464214 elated_goldstine[309436]:                "ceph.encrypted": "0",
Oct  1 10:13:42 np0005464214 elated_goldstine[309436]:                "ceph.osd_fsid": "f5852bc7-e830-489a-b8a9-42dfbbe71426",
Oct  1 10:13:42 np0005464214 elated_goldstine[309436]:                "ceph.osd_id": "1",
Oct  1 10:13:42 np0005464214 elated_goldstine[309436]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 10:13:42 np0005464214 elated_goldstine[309436]:                "ceph.type": "block",
Oct  1 10:13:42 np0005464214 elated_goldstine[309436]:                "ceph.vdo": "0"
Oct  1 10:13:42 np0005464214 elated_goldstine[309436]:            },
Oct  1 10:13:42 np0005464214 elated_goldstine[309436]:            "type": "block",
Oct  1 10:13:42 np0005464214 elated_goldstine[309436]:            "vg_name": "ceph_vg1"
Oct  1 10:13:42 np0005464214 elated_goldstine[309436]:        }
Oct  1 10:13:42 np0005464214 elated_goldstine[309436]:    ],
Oct  1 10:13:42 np0005464214 elated_goldstine[309436]:    "2": [
Oct  1 10:13:42 np0005464214 elated_goldstine[309436]:        {
Oct  1 10:13:42 np0005464214 elated_goldstine[309436]:            "devices": [
Oct  1 10:13:42 np0005464214 elated_goldstine[309436]:                "/dev/loop5"
Oct  1 10:13:42 np0005464214 elated_goldstine[309436]:            ],
Oct  1 10:13:42 np0005464214 elated_goldstine[309436]:            "lv_name": "ceph_lv2",
Oct  1 10:13:42 np0005464214 elated_goldstine[309436]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  1 10:13:42 np0005464214 elated_goldstine[309436]:            "lv_size": "21470642176",
Oct  1 10:13:42 np0005464214 elated_goldstine[309436]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c4c937e2-a8a8-47c3-af37-fdedb6fff1f9,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 10:13:42 np0005464214 elated_goldstine[309436]:            "lv_uuid": "mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK",
Oct  1 10:13:42 np0005464214 elated_goldstine[309436]:            "name": "ceph_lv2",
Oct  1 10:13:42 np0005464214 elated_goldstine[309436]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  1 10:13:42 np0005464214 elated_goldstine[309436]:            "tags": {
Oct  1 10:13:42 np0005464214 elated_goldstine[309436]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  1 10:13:42 np0005464214 elated_goldstine[309436]:                "ceph.block_uuid": "mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK",
Oct  1 10:13:42 np0005464214 elated_goldstine[309436]:                "ceph.cephx_lockbox_secret": "",
Oct  1 10:13:42 np0005464214 elated_goldstine[309436]:                "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 10:13:42 np0005464214 elated_goldstine[309436]:                "ceph.cluster_name": "ceph",
Oct  1 10:13:42 np0005464214 elated_goldstine[309436]:                "ceph.crush_device_class": "",
Oct  1 10:13:42 np0005464214 elated_goldstine[309436]:                "ceph.encrypted": "0",
Oct  1 10:13:42 np0005464214 elated_goldstine[309436]:                "ceph.osd_fsid": "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9",
Oct  1 10:13:42 np0005464214 elated_goldstine[309436]:                "ceph.osd_id": "2",
Oct  1 10:13:42 np0005464214 elated_goldstine[309436]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 10:13:42 np0005464214 elated_goldstine[309436]:                "ceph.type": "block",
Oct  1 10:13:42 np0005464214 elated_goldstine[309436]:                "ceph.vdo": "0"
Oct  1 10:13:42 np0005464214 elated_goldstine[309436]:            },
Oct  1 10:13:42 np0005464214 elated_goldstine[309436]:            "type": "block",
Oct  1 10:13:42 np0005464214 elated_goldstine[309436]:            "vg_name": "ceph_vg2"
Oct  1 10:13:42 np0005464214 elated_goldstine[309436]:        }
Oct  1 10:13:42 np0005464214 elated_goldstine[309436]:    ]
Oct  1 10:13:42 np0005464214 elated_goldstine[309436]: }
Oct  1 10:13:42 np0005464214 systemd[1]: libpod-8d99fbc577eaa1154bc0b24b10203ff6fdb5d266ca9edb3a0cc1a85d3c81422e.scope: Deactivated successfully.
Oct  1 10:13:42 np0005464214 podman[309419]: 2025-10-01 14:13:42.505874498 +0000 UTC m=+0.996764125 container died 8d99fbc577eaa1154bc0b24b10203ff6fdb5d266ca9edb3a0cc1a85d3c81422e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_goldstine, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct  1 10:13:42 np0005464214 systemd[1]: var-lib-containers-storage-overlay-5d01020d600f066bd97405ae0db684b777afc00aed91f33f28cc2745fae74fe9-merged.mount: Deactivated successfully.
Oct  1 10:13:42 np0005464214 podman[309419]: 2025-10-01 14:13:42.574445885 +0000 UTC m=+1.065335462 container remove 8d99fbc577eaa1154bc0b24b10203ff6fdb5d266ca9edb3a0cc1a85d3c81422e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_goldstine, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 10:13:42 np0005464214 systemd[1]: libpod-conmon-8d99fbc577eaa1154bc0b24b10203ff6fdb5d266ca9edb3a0cc1a85d3c81422e.scope: Deactivated successfully.
Oct  1 10:13:42 np0005464214 podman[309452]: 2025-10-01 14:13:42.644529189 +0000 UTC m=+0.080272119 container health_status a1dc94033b3cdaf0c1939938a446bc6911aa0e2bf0fa0c22c3e618390e8d96e1 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, io.buildah.version=1.41.3, tcib_build_tag=36bccb96575468ec919301205d8daa2c, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20250923, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Oct  1 10:13:42 np0005464214 podman[309456]: 2025-10-01 14:13:42.644529159 +0000 UTC m=+0.066001816 container health_status c13c85560319b246b9406aed1b6b3015b2c424d3ffff37d0301b6634f5976b0d (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, container_name=iscsid, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20250923, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=36bccb96575468ec919301205d8daa2c)
Oct  1 10:13:42 np0005464214 podman[309462]: 2025-10-01 14:13:42.665317499 +0000 UTC m=+0.082706206 container health_status dfa509645741d50db256c392f2c7e50ef8a1653f296eb853aefbe3d2ba4746c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20250923, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Oct  1 10:13:42 np0005464214 podman[309454]: 2025-10-01 14:13:42.665521356 +0000 UTC m=+0.095772532 container health_status 583cde33fa111e3f81ff9a7adf51b7bee43cd123d54d60383b2d474b3b5066ad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250923, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct  1 10:13:43 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e197 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 10:13:43 np0005464214 podman[309673]: 2025-10-01 14:13:43.208851324 +0000 UTC m=+0.060316166 container create 43cf6031a261c067d61b21660f8bd463455c01ff1c1fd9a78d05ecc10cb1ac59 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_bardeen, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 10:13:43 np0005464214 systemd[1]: Started libpod-conmon-43cf6031a261c067d61b21660f8bd463455c01ff1c1fd9a78d05ecc10cb1ac59.scope.
Oct  1 10:13:43 np0005464214 podman[309673]: 2025-10-01 14:13:43.182080024 +0000 UTC m=+0.033544926 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 10:13:43 np0005464214 systemd[1]: Started libcrun container.
Oct  1 10:13:43 np0005464214 podman[309673]: 2025-10-01 14:13:43.312616079 +0000 UTC m=+0.164080981 container init 43cf6031a261c067d61b21660f8bd463455c01ff1c1fd9a78d05ecc10cb1ac59 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_bardeen, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 10:13:43 np0005464214 podman[309673]: 2025-10-01 14:13:43.31991222 +0000 UTC m=+0.171377062 container start 43cf6031a261c067d61b21660f8bd463455c01ff1c1fd9a78d05ecc10cb1ac59 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_bardeen, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default)
Oct  1 10:13:43 np0005464214 podman[309673]: 2025-10-01 14:13:43.324043951 +0000 UTC m=+0.175508833 container attach 43cf6031a261c067d61b21660f8bd463455c01ff1c1fd9a78d05ecc10cb1ac59 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_bardeen, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct  1 10:13:43 np0005464214 naughty_bardeen[309690]: 167 167
Oct  1 10:13:43 np0005464214 systemd[1]: libpod-43cf6031a261c067d61b21660f8bd463455c01ff1c1fd9a78d05ecc10cb1ac59.scope: Deactivated successfully.
Oct  1 10:13:43 np0005464214 podman[309673]: 2025-10-01 14:13:43.328275525 +0000 UTC m=+0.179740347 container died 43cf6031a261c067d61b21660f8bd463455c01ff1c1fd9a78d05ecc10cb1ac59 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_bardeen, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 10:13:43 np0005464214 systemd[1]: var-lib-containers-storage-overlay-1fca902e5fb984f8d03fcefbbb705e7010e5dc90948dd7c9aa6697c41b5ea42c-merged.mount: Deactivated successfully.
Oct  1 10:13:43 np0005464214 podman[309673]: 2025-10-01 14:13:43.374795633 +0000 UTC m=+0.226260445 container remove 43cf6031a261c067d61b21660f8bd463455c01ff1c1fd9a78d05ecc10cb1ac59 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_bardeen, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 10:13:43 np0005464214 systemd[1]: libpod-conmon-43cf6031a261c067d61b21660f8bd463455c01ff1c1fd9a78d05ecc10cb1ac59.scope: Deactivated successfully.
Oct  1 10:13:43 np0005464214 podman[309714]: 2025-10-01 14:13:43.551221154 +0000 UTC m=+0.053811150 container create 6ad9b4815926c55770966cb2841bf22d60686a861253d4315300c1e3254f5132 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_dewdney, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct  1 10:13:43 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2135: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:13:43 np0005464214 systemd[1]: Started libpod-conmon-6ad9b4815926c55770966cb2841bf22d60686a861253d4315300c1e3254f5132.scope.
Oct  1 10:13:43 np0005464214 podman[309714]: 2025-10-01 14:13:43.52340017 +0000 UTC m=+0.025990226 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 10:13:43 np0005464214 systemd[1]: Started libcrun container.
Oct  1 10:13:43 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a053d621c435c9a9ec923e2c642fdfaa95fcaee5cf452fc5a74db567d6fd301b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 10:13:43 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a053d621c435c9a9ec923e2c642fdfaa95fcaee5cf452fc5a74db567d6fd301b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 10:13:43 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a053d621c435c9a9ec923e2c642fdfaa95fcaee5cf452fc5a74db567d6fd301b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 10:13:43 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a053d621c435c9a9ec923e2c642fdfaa95fcaee5cf452fc5a74db567d6fd301b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 10:13:43 np0005464214 podman[309714]: 2025-10-01 14:13:43.65601367 +0000 UTC m=+0.158603716 container init 6ad9b4815926c55770966cb2841bf22d60686a861253d4315300c1e3254f5132 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_dewdney, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 10:13:43 np0005464214 podman[309714]: 2025-10-01 14:13:43.662522197 +0000 UTC m=+0.165112153 container start 6ad9b4815926c55770966cb2841bf22d60686a861253d4315300c1e3254f5132 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_dewdney, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 10:13:43 np0005464214 podman[309714]: 2025-10-01 14:13:43.665953376 +0000 UTC m=+0.168543372 container attach 6ad9b4815926c55770966cb2841bf22d60686a861253d4315300c1e3254f5132 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_dewdney, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 10:13:44 np0005464214 hungry_dewdney[309731]: {
Oct  1 10:13:44 np0005464214 hungry_dewdney[309731]:    "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982": {
Oct  1 10:13:44 np0005464214 hungry_dewdney[309731]:        "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 10:13:44 np0005464214 hungry_dewdney[309731]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  1 10:13:44 np0005464214 hungry_dewdney[309731]:        "osd_id": 0,
Oct  1 10:13:44 np0005464214 hungry_dewdney[309731]:        "osd_uuid": "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982",
Oct  1 10:13:44 np0005464214 hungry_dewdney[309731]:        "type": "bluestore"
Oct  1 10:13:44 np0005464214 hungry_dewdney[309731]:    },
Oct  1 10:13:44 np0005464214 hungry_dewdney[309731]:    "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9": {
Oct  1 10:13:44 np0005464214 hungry_dewdney[309731]:        "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 10:13:44 np0005464214 hungry_dewdney[309731]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  1 10:13:44 np0005464214 hungry_dewdney[309731]:        "osd_id": 2,
Oct  1 10:13:44 np0005464214 hungry_dewdney[309731]:        "osd_uuid": "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9",
Oct  1 10:13:44 np0005464214 hungry_dewdney[309731]:        "type": "bluestore"
Oct  1 10:13:44 np0005464214 hungry_dewdney[309731]:    },
Oct  1 10:13:44 np0005464214 hungry_dewdney[309731]:    "f5852bc7-e830-489a-b8a9-42dfbbe71426": {
Oct  1 10:13:44 np0005464214 hungry_dewdney[309731]:        "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 10:13:44 np0005464214 hungry_dewdney[309731]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  1 10:13:44 np0005464214 hungry_dewdney[309731]:        "osd_id": 1,
Oct  1 10:13:44 np0005464214 hungry_dewdney[309731]:        "osd_uuid": "f5852bc7-e830-489a-b8a9-42dfbbe71426",
Oct  1 10:13:44 np0005464214 hungry_dewdney[309731]:        "type": "bluestore"
Oct  1 10:13:44 np0005464214 hungry_dewdney[309731]:    }
Oct  1 10:13:44 np0005464214 hungry_dewdney[309731]: }
Oct  1 10:13:44 np0005464214 systemd[1]: libpod-6ad9b4815926c55770966cb2841bf22d60686a861253d4315300c1e3254f5132.scope: Deactivated successfully.
Oct  1 10:13:44 np0005464214 systemd[1]: libpod-6ad9b4815926c55770966cb2841bf22d60686a861253d4315300c1e3254f5132.scope: Consumed 1.076s CPU time.
Oct  1 10:13:44 np0005464214 podman[309714]: 2025-10-01 14:13:44.727812025 +0000 UTC m=+1.230402081 container died 6ad9b4815926c55770966cb2841bf22d60686a861253d4315300c1e3254f5132 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_dewdney, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 10:13:44 np0005464214 systemd[1]: var-lib-containers-storage-overlay-a053d621c435c9a9ec923e2c642fdfaa95fcaee5cf452fc5a74db567d6fd301b-merged.mount: Deactivated successfully.
Oct  1 10:13:44 np0005464214 podman[309714]: 2025-10-01 14:13:44.797707574 +0000 UTC m=+1.300297570 container remove 6ad9b4815926c55770966cb2841bf22d60686a861253d4315300c1e3254f5132 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_dewdney, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Oct  1 10:13:44 np0005464214 systemd[1]: libpod-conmon-6ad9b4815926c55770966cb2841bf22d60686a861253d4315300c1e3254f5132.scope: Deactivated successfully.
Oct  1 10:13:44 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  1 10:13:44 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 10:13:44 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  1 10:13:44 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 10:13:44 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev 889363bc-5063-438f-bd19-3c8aa4857f15 does not exist
Oct  1 10:13:44 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev b0e0612a-7def-494a-80b7-e0942d2e4bb4 does not exist
Oct  1 10:13:45 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2136: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:13:45 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 10:13:45 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 10:13:47 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2137: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:13:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 10:13:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 10:13:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 10:13:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 10:13:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 10:13:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 10:13:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] Optimize plan auto_2025-10-01_14:13:47
Oct  1 10:13:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  1 10:13:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] do_upmap
Oct  1 10:13:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] pools ['vms', 'default.rgw.meta', 'images', 'backups', 'volumes', 'default.rgw.log', 'cephfs.cephfs.meta', '.mgr', '.rgw.root', 'cephfs.cephfs.data', 'default.rgw.control']
Oct  1 10:13:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] prepared 0/10 changes
Oct  1 10:13:48 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e197 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 10:13:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  1 10:13:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  1 10:13:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  1 10:13:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  1 10:13:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  1 10:13:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  1 10:13:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  1 10:13:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  1 10:13:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  1 10:13:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  1 10:13:49 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2138: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:13:51 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2139: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:13:53 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e197 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 10:13:53 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2140: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:13:55 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  1 10:13:55 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3477478826' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  1 10:13:55 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  1 10:13:55 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3477478826' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  1 10:13:55 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2141: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:13:57 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2142: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:13:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] _maybe_adjust
Oct  1 10:13:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 10:13:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  1 10:13:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 10:13:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 10:13:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 10:13:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 10:13:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 10:13:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 10:13:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 10:13:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Oct  1 10:13:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 10:13:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct  1 10:13:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 10:13:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 10:13:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 10:13:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  1 10:13:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 10:13:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  1 10:13:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 10:13:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 10:13:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 10:13:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  1 10:13:58 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e197 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 10:13:59 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2143: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:14:01 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2144: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:14:03 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e197 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 10:14:03 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2145: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:14:05 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2146: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:14:07 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2147: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:14:08 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e197 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 10:14:09 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2148: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:14:11 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2149: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:14:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 14:14:12.339 161890 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 10:14:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 14:14:12.340 161890 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 10:14:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 14:14:12.340 161890 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 10:14:13 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e197 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 10:14:13 np0005464214 podman[309829]: 2025-10-01 14:14:13.512463459 +0000 UTC m=+0.059512570 container health_status dfa509645741d50db256c392f2c7e50ef8a1653f296eb853aefbe3d2ba4746c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20250923, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct  1 10:14:13 np0005464214 podman[309827]: 2025-10-01 14:14:13.530445189 +0000 UTC m=+0.081453507 container health_status a1dc94033b3cdaf0c1939938a446bc6911aa0e2bf0fa0c22c3e618390e8d96e1 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.build-date=20250923, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c)
Oct  1 10:14:13 np0005464214 podman[309828]: 2025-10-01 14:14:13.530547633 +0000 UTC m=+0.077539823 container health_status c13c85560319b246b9406aed1b6b3015b2c424d3ffff37d0301b6634f5976b0d (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250923)
Oct  1 10:14:13 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2150: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:14:13 np0005464214 podman[309826]: 2025-10-01 14:14:13.632094067 +0000 UTC m=+0.177977842 container health_status 583cde33fa111e3f81ff9a7adf51b7bee43cd123d54d60383b2d474b3b5066ad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20250923, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct  1 10:14:15 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2151: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:14:17 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2152: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:14:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 10:14:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 10:14:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 10:14:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 10:14:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 10:14:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 10:14:18 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e197 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 10:14:19 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2153: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:14:21 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2154: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:14:22 np0005464214 nova_compute[260022]: 2025-10-01 14:14:22.345 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 10:14:23 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e197 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 10:14:23 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2155: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:14:25 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2156: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:14:27 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2157: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:14:28 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e197 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 10:14:29 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2158: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:14:31 np0005464214 nova_compute[260022]: 2025-10-01 14:14:31.344 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 10:14:31 np0005464214 nova_compute[260022]: 2025-10-01 14:14:31.380 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 10:14:31 np0005464214 nova_compute[260022]: 2025-10-01 14:14:31.381 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 10:14:31 np0005464214 nova_compute[260022]: 2025-10-01 14:14:31.381 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 10:14:31 np0005464214 nova_compute[260022]: 2025-10-01 14:14:31.381 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  1 10:14:31 np0005464214 nova_compute[260022]: 2025-10-01 14:14:31.382 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 10:14:31 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2159: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:14:31 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  1 10:14:31 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2812019893' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  1 10:14:31 np0005464214 nova_compute[260022]: 2025-10-01 14:14:31.902 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.520s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 10:14:32 np0005464214 nova_compute[260022]: 2025-10-01 14:14:32.096 2 WARNING nova.virt.libvirt.driver [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  1 10:14:32 np0005464214 nova_compute[260022]: 2025-10-01 14:14:32.098 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5028MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct  1 10:14:32 np0005464214 nova_compute[260022]: 2025-10-01 14:14:32.098 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  1 10:14:32 np0005464214 nova_compute[260022]: 2025-10-01 14:14:32.099 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  1 10:14:32 np0005464214 nova_compute[260022]: 2025-10-01 14:14:32.187 2 INFO nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Instance 3d91db2b-812b-47ea-a0f8-0384b4c68597 has allocations against this compute host but is not found in the database.
Oct  1 10:14:32 np0005464214 nova_compute[260022]: 2025-10-01 14:14:32.202 2 INFO nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Instance 84424f30-80d3-425e-b60f-86809ad3c076 has allocations against this compute host but is not found in the database.
Oct  1 10:14:32 np0005464214 nova_compute[260022]: 2025-10-01 14:14:32.203 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct  1 10:14:32 np0005464214 nova_compute[260022]: 2025-10-01 14:14:32.203 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct  1 10:14:32 np0005464214 nova_compute[260022]: 2025-10-01 14:14:32.260 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  1 10:14:32 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  1 10:14:32 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2851974264' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  1 10:14:32 np0005464214 nova_compute[260022]: 2025-10-01 14:14:32.673 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.413s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  1 10:14:32 np0005464214 nova_compute[260022]: 2025-10-01 14:14:32.680 2 DEBUG nova.compute.provider_tree [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Inventory has not changed in ProviderTree for provider: c1b9017d-7e6f-44ea-9ee2-bc19313d736f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct  1 10:14:32 np0005464214 nova_compute[260022]: 2025-10-01 14:14:32.695 2 DEBUG nova.scheduler.client.report [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Inventory has not changed for provider c1b9017d-7e6f-44ea-9ee2-bc19313d736f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct  1 10:14:32 np0005464214 nova_compute[260022]: 2025-10-01 14:14:32.696 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct  1 10:14:32 np0005464214 nova_compute[260022]: 2025-10-01 14:14:32.696 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.598s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  1 10:14:33 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e197 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 10:14:33 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2160: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:14:34 np0005464214 nova_compute[260022]: 2025-10-01 14:14:34.698 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  1 10:14:34 np0005464214 nova_compute[260022]: 2025-10-01 14:14:34.699 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  1 10:14:34 np0005464214 nova_compute[260022]: 2025-10-01 14:14:34.699 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct  1 10:14:35 np0005464214 nova_compute[260022]: 2025-10-01 14:14:35.344 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  1 10:14:35 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2161: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:14:36 np0005464214 nova_compute[260022]: 2025-10-01 14:14:36.344 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  1 10:14:37 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2162: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:14:38 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e197 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 10:14:39 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2163: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:14:40 np0005464214 nova_compute[260022]: 2025-10-01 14:14:40.345 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  1 10:14:40 np0005464214 nova_compute[260022]: 2025-10-01 14:14:40.346 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct  1 10:14:40 np0005464214 nova_compute[260022]: 2025-10-01 14:14:40.346 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct  1 10:14:40 np0005464214 nova_compute[260022]: 2025-10-01 14:14:40.361 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct  1 10:14:40 np0005464214 nova_compute[260022]: 2025-10-01 14:14:40.362 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  1 10:14:41 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2164: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:14:42 np0005464214 nova_compute[260022]: 2025-10-01 14:14:42.345 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  1 10:14:43 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e197 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 10:14:43 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2165: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:14:44 np0005464214 podman[309951]: 2025-10-01 14:14:44.544257321 +0000 UTC m=+0.098626702 container health_status 583cde33fa111e3f81ff9a7adf51b7bee43cd123d54d60383b2d474b3b5066ad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20250923, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3)
Oct  1 10:14:44 np0005464214 podman[309952]: 2025-10-01 14:14:44.550774418 +0000 UTC m=+0.089788591 container health_status a1dc94033b3cdaf0c1939938a446bc6911aa0e2bf0fa0c22c3e618390e8d96e1 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20250923, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct  1 10:14:44 np0005464214 podman[309957]: 2025-10-01 14:14:44.550741507 +0000 UTC m=+0.079666730 container health_status c13c85560319b246b9406aed1b6b3015b2c424d3ffff37d0301b6634f5976b0d (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=36bccb96575468ec919301205d8daa2c, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=iscsid, org.label-schema.build-date=20250923, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 10:14:44 np0005464214 podman[309959]: 2025-10-01 14:14:44.581596507 +0000 UTC m=+0.104473858 container health_status dfa509645741d50db256c392f2c7e50ef8a1653f296eb853aefbe3d2ba4746c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20250923, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Oct  1 10:14:45 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2166: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:14:45 np0005464214 podman[310207]: 2025-10-01 14:14:45.977682597 +0000 UTC m=+0.060110259 container exec dfadbb96d7d51fa7c2ed721145616ced603621ae12bae5125feaf16532ef7320 (image=quay.io/ceph/ceph:v18, name=ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-mon-compute-0, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 10:14:46 np0005464214 podman[310207]: 2025-10-01 14:14:46.089553718 +0000 UTC m=+0.171981390 container exec_died dfadbb96d7d51fa7c2ed721145616ced603621ae12bae5125feaf16532ef7320 (image=quay.io/ceph/ceph:v18, name=ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-mon-compute-0, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 10:14:46 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  1 10:14:46 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 10:14:46 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  1 10:14:46 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 10:14:47 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2167: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:14:47 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  1 10:14:47 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  1 10:14:47 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  1 10:14:47 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  1 10:14:47 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  1 10:14:47 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 10:14:47 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev 52c76f9a-3871-40d9-a7f8-1bdd974c7932 does not exist
Oct  1 10:14:47 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev 0fe235ba-dd94-4041-872f-97c1d70d21e2 does not exist
Oct  1 10:14:47 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev 71e949d7-ecf6-452d-8e46-e069eb593ad2 does not exist
Oct  1 10:14:47 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  1 10:14:47 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  1 10:14:47 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  1 10:14:47 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  1 10:14:47 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  1 10:14:47 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  1 10:14:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 10:14:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 10:14:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 10:14:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 10:14:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 10:14:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 10:14:47 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 10:14:47 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 10:14:47 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  1 10:14:47 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 10:14:47 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  1 10:14:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] Optimize plan auto_2025-10-01_14:14:47
Oct  1 10:14:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  1 10:14:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] do_upmap
Oct  1 10:14:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] pools ['default.rgw.log', '.rgw.root', 'backups', 'default.rgw.control', 'default.rgw.meta', 'volumes', 'images', 'vms', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', '.mgr']
Oct  1 10:14:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] prepared 0/10 changes
Oct  1 10:14:48 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e197 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 10:14:48 np0005464214 ceph-mon[74802]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #99. Immutable memtables: 0.
Oct  1 10:14:48 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:14:48.021356) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  1 10:14:48 np0005464214 ceph-mon[74802]: rocksdb: [db/flush_job.cc:856] [default] [JOB 57] Flushing memtable with next log file: 99
Oct  1 10:14:48 np0005464214 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759328088021398, "job": 57, "event": "flush_started", "num_memtables": 1, "num_entries": 1680, "num_deletes": 256, "total_data_size": 2688392, "memory_usage": 2735312, "flush_reason": "Manual Compaction"}
Oct  1 10:14:48 np0005464214 ceph-mon[74802]: rocksdb: [db/flush_job.cc:885] [default] [JOB 57] Level-0 flush table #100: started
Oct  1 10:14:48 np0005464214 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759328088039421, "cf_name": "default", "job": 57, "event": "table_file_creation", "file_number": 100, "file_size": 2629505, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 42059, "largest_seqno": 43738, "table_properties": {"data_size": 2621673, "index_size": 4711, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2053, "raw_key_size": 15871, "raw_average_key_size": 19, "raw_value_size": 2605966, "raw_average_value_size": 3257, "num_data_blocks": 210, "num_entries": 800, "num_filter_entries": 800, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759327917, "oldest_key_time": 1759327917, "file_creation_time": 1759328088, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e1fb0bd7-c6bf-40f2-8bfb-ffd039289f42", "db_session_id": "NJZTWL88H5HSB4Q4NEC9", "orig_file_number": 100, "seqno_to_time_mapping": "N/A"}}
Oct  1 10:14:48 np0005464214 ceph-mon[74802]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 57] Flush lasted 18132 microseconds, and 7125 cpu microseconds.
Oct  1 10:14:48 np0005464214 ceph-mon[74802]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  1 10:14:48 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:14:48.039485) [db/flush_job.cc:967] [default] [JOB 57] Level-0 flush table #100: 2629505 bytes OK
Oct  1 10:14:48 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:14:48.039509) [db/memtable_list.cc:519] [default] Level-0 commit table #100 started
Oct  1 10:14:48 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:14:48.041270) [db/memtable_list.cc:722] [default] Level-0 commit table #100: memtable #1 done
Oct  1 10:14:48 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:14:48.041289) EVENT_LOG_v1 {"time_micros": 1759328088041282, "job": 57, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  1 10:14:48 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:14:48.041310) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  1 10:14:48 np0005464214 ceph-mon[74802]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 57] Try to delete WAL files size 2681128, prev total WAL file size 2681128, number of live WAL files 2.
Oct  1 10:14:48 np0005464214 ceph-mon[74802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000096.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  1 10:14:48 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:14:48.042665) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0031353131' seq:72057594037927935, type:22 .. '6C6F676D0031373632' seq:0, type:0; will stop at (end)
Oct  1 10:14:48 np0005464214 ceph-mon[74802]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 58] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  1 10:14:48 np0005464214 ceph-mon[74802]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 57 Base level 0, inputs: [100(2567KB)], [98(8180KB)]
Oct  1 10:14:48 np0005464214 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759328088042768, "job": 58, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [100], "files_L6": [98], "score": -1, "input_data_size": 11006215, "oldest_snapshot_seqno": -1}
Oct  1 10:14:48 np0005464214 ceph-mon[74802]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 58] Generated table #101: 6154 keys, 10902608 bytes, temperature: kUnknown
Oct  1 10:14:48 np0005464214 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759328088096917, "cf_name": "default", "job": 58, "event": "table_file_creation", "file_number": 101, "file_size": 10902608, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10858792, "index_size": 27322, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 15429, "raw_key_size": 158838, "raw_average_key_size": 25, "raw_value_size": 10744680, "raw_average_value_size": 1745, "num_data_blocks": 1097, "num_entries": 6154, "num_filter_entries": 6154, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759324077, "oldest_key_time": 0, "file_creation_time": 1759328088, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e1fb0bd7-c6bf-40f2-8bfb-ffd039289f42", "db_session_id": "NJZTWL88H5HSB4Q4NEC9", "orig_file_number": 101, "seqno_to_time_mapping": "N/A"}}
Oct  1 10:14:48 np0005464214 ceph-mon[74802]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  1 10:14:48 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:14:48.097183) [db/compaction/compaction_job.cc:1663] [default] [JOB 58] Compacted 1@0 + 1@6 files to L6 => 10902608 bytes
Oct  1 10:14:48 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:14:48.098467) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 202.9 rd, 201.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.5, 8.0 +0.0 blob) out(10.4 +0.0 blob), read-write-amplify(8.3) write-amplify(4.1) OK, records in: 6682, records dropped: 528 output_compression: NoCompression
Oct  1 10:14:48 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:14:48.098486) EVENT_LOG_v1 {"time_micros": 1759328088098477, "job": 58, "event": "compaction_finished", "compaction_time_micros": 54247, "compaction_time_cpu_micros": 28357, "output_level": 6, "num_output_files": 1, "total_output_size": 10902608, "num_input_records": 6682, "num_output_records": 6154, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  1 10:14:48 np0005464214 ceph-mon[74802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000100.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  1 10:14:48 np0005464214 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759328088099284, "job": 58, "event": "table_file_deletion", "file_number": 100}
Oct  1 10:14:48 np0005464214 ceph-mon[74802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000098.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  1 10:14:48 np0005464214 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759328088101037, "job": 58, "event": "table_file_deletion", "file_number": 98}
Oct  1 10:14:48 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:14:48.042476) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 10:14:48 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:14:48.101113) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 10:14:48 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:14:48.101118) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 10:14:48 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:14:48.101119) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 10:14:48 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:14:48.101121) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 10:14:48 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:14:48.101123) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 10:14:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  1 10:14:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  1 10:14:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  1 10:14:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  1 10:14:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  1 10:14:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  1 10:14:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  1 10:14:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  1 10:14:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  1 10:14:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  1 10:14:48 np0005464214 podman[310636]: 2025-10-01 14:14:48.495372943 +0000 UTC m=+0.058088816 container create 407e8ffc50cbb689e47a1800d2d1d62bb510b796bbd167a2ef193e78c36cac26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_khayyam, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 10:14:48 np0005464214 systemd[1]: Started libpod-conmon-407e8ffc50cbb689e47a1800d2d1d62bb510b796bbd167a2ef193e78c36cac26.scope.
Oct  1 10:14:48 np0005464214 podman[310636]: 2025-10-01 14:14:48.463555383 +0000 UTC m=+0.026271336 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 10:14:48 np0005464214 systemd[1]: Started libcrun container.
Oct  1 10:14:48 np0005464214 podman[310636]: 2025-10-01 14:14:48.603905898 +0000 UTC m=+0.166621811 container init 407e8ffc50cbb689e47a1800d2d1d62bb510b796bbd167a2ef193e78c36cac26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_khayyam, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 10:14:48 np0005464214 podman[310636]: 2025-10-01 14:14:48.61307169 +0000 UTC m=+0.175787573 container start 407e8ffc50cbb689e47a1800d2d1d62bb510b796bbd167a2ef193e78c36cac26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_khayyam, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3)
Oct  1 10:14:48 np0005464214 podman[310636]: 2025-10-01 14:14:48.616021053 +0000 UTC m=+0.178736936 container attach 407e8ffc50cbb689e47a1800d2d1d62bb510b796bbd167a2ef193e78c36cac26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_khayyam, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Oct  1 10:14:48 np0005464214 agitated_khayyam[310652]: 167 167
Oct  1 10:14:48 np0005464214 systemd[1]: libpod-407e8ffc50cbb689e47a1800d2d1d62bb510b796bbd167a2ef193e78c36cac26.scope: Deactivated successfully.
Oct  1 10:14:48 np0005464214 conmon[310652]: conmon 407e8ffc50cbb689e47a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-407e8ffc50cbb689e47a1800d2d1d62bb510b796bbd167a2ef193e78c36cac26.scope/container/memory.events
Oct  1 10:14:48 np0005464214 podman[310636]: 2025-10-01 14:14:48.621383193 +0000 UTC m=+0.184099056 container died 407e8ffc50cbb689e47a1800d2d1d62bb510b796bbd167a2ef193e78c36cac26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_khayyam, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct  1 10:14:48 np0005464214 systemd[1]: var-lib-containers-storage-overlay-a4e3eaad66725632a2821b857d6f3b48177ccf703a1495feef549d171f27e68a-merged.mount: Deactivated successfully.
Oct  1 10:14:48 np0005464214 podman[310636]: 2025-10-01 14:14:48.677191945 +0000 UTC m=+0.239907858 container remove 407e8ffc50cbb689e47a1800d2d1d62bb510b796bbd167a2ef193e78c36cac26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_khayyam, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Oct  1 10:14:48 np0005464214 systemd[1]: libpod-conmon-407e8ffc50cbb689e47a1800d2d1d62bb510b796bbd167a2ef193e78c36cac26.scope: Deactivated successfully.
Oct  1 10:14:48 np0005464214 podman[310677]: 2025-10-01 14:14:48.852443248 +0000 UTC m=+0.046043872 container create b2cb602cab9dd053cfe8a98adc70a5deada6011da336ad610775c891be6cce30 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_faraday, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct  1 10:14:48 np0005464214 systemd[1]: Started libpod-conmon-b2cb602cab9dd053cfe8a98adc70a5deada6011da336ad610775c891be6cce30.scope.
Oct  1 10:14:48 np0005464214 systemd[1]: Started libcrun container.
Oct  1 10:14:48 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0583e970e9fd6e310194834ea8048d626759ef9d42924acedfd3a3c5b8d2cebb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 10:14:48 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0583e970e9fd6e310194834ea8048d626759ef9d42924acedfd3a3c5b8d2cebb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 10:14:48 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0583e970e9fd6e310194834ea8048d626759ef9d42924acedfd3a3c5b8d2cebb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 10:14:48 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0583e970e9fd6e310194834ea8048d626759ef9d42924acedfd3a3c5b8d2cebb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 10:14:48 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0583e970e9fd6e310194834ea8048d626759ef9d42924acedfd3a3c5b8d2cebb/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  1 10:14:48 np0005464214 podman[310677]: 2025-10-01 14:14:48.834084196 +0000 UTC m=+0.027684840 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 10:14:48 np0005464214 podman[310677]: 2025-10-01 14:14:48.940586377 +0000 UTC m=+0.134187061 container init b2cb602cab9dd053cfe8a98adc70a5deada6011da336ad610775c891be6cce30 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_faraday, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct  1 10:14:48 np0005464214 podman[310677]: 2025-10-01 14:14:48.949943993 +0000 UTC m=+0.143544637 container start b2cb602cab9dd053cfe8a98adc70a5deada6011da336ad610775c891be6cce30 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_faraday, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Oct  1 10:14:48 np0005464214 podman[310677]: 2025-10-01 14:14:48.953685732 +0000 UTC m=+0.147286386 container attach b2cb602cab9dd053cfe8a98adc70a5deada6011da336ad610775c891be6cce30 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_faraday, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct  1 10:14:49 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2168: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:14:50 np0005464214 sad_faraday[310693]: --> passed data devices: 0 physical, 3 LVM
Oct  1 10:14:50 np0005464214 sad_faraday[310693]: --> relative data size: 1.0
Oct  1 10:14:50 np0005464214 sad_faraday[310693]: --> All data devices are unavailable
Oct  1 10:14:50 np0005464214 systemd[1]: libpod-b2cb602cab9dd053cfe8a98adc70a5deada6011da336ad610775c891be6cce30.scope: Deactivated successfully.
Oct  1 10:14:50 np0005464214 systemd[1]: libpod-b2cb602cab9dd053cfe8a98adc70a5deada6011da336ad610775c891be6cce30.scope: Consumed 1.190s CPU time.
Oct  1 10:14:50 np0005464214 podman[310723]: 2025-10-01 14:14:50.24355378 +0000 UTC m=+0.037788930 container died b2cb602cab9dd053cfe8a98adc70a5deada6011da336ad610775c891be6cce30 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_faraday, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 10:14:50 np0005464214 systemd[1]: var-lib-containers-storage-overlay-0583e970e9fd6e310194834ea8048d626759ef9d42924acedfd3a3c5b8d2cebb-merged.mount: Deactivated successfully.
Oct  1 10:14:50 np0005464214 podman[310723]: 2025-10-01 14:14:50.424304219 +0000 UTC m=+0.218539339 container remove b2cb602cab9dd053cfe8a98adc70a5deada6011da336ad610775c891be6cce30 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_faraday, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 10:14:50 np0005464214 systemd[1]: libpod-conmon-b2cb602cab9dd053cfe8a98adc70a5deada6011da336ad610775c891be6cce30.scope: Deactivated successfully.
Oct  1 10:14:51 np0005464214 podman[310877]: 2025-10-01 14:14:51.159330992 +0000 UTC m=+0.069005531 container create 483811e899b6bb6a712ec84ba198691572ab9d8360fe5710b500946916802086 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_sinoussi, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 10:14:51 np0005464214 systemd[1]: Started libpod-conmon-483811e899b6bb6a712ec84ba198691572ab9d8360fe5710b500946916802086.scope.
Oct  1 10:14:51 np0005464214 podman[310877]: 2025-10-01 14:14:51.129120203 +0000 UTC m=+0.038794782 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 10:14:51 np0005464214 systemd[1]: Started libcrun container.
Oct  1 10:14:51 np0005464214 podman[310877]: 2025-10-01 14:14:51.262212218 +0000 UTC m=+0.171886737 container init 483811e899b6bb6a712ec84ba198691572ab9d8360fe5710b500946916802086 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_sinoussi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 10:14:51 np0005464214 podman[310877]: 2025-10-01 14:14:51.274906281 +0000 UTC m=+0.184580780 container start 483811e899b6bb6a712ec84ba198691572ab9d8360fe5710b500946916802086 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_sinoussi, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct  1 10:14:51 np0005464214 podman[310877]: 2025-10-01 14:14:51.278421913 +0000 UTC m=+0.188096462 container attach 483811e899b6bb6a712ec84ba198691572ab9d8360fe5710b500946916802086 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_sinoussi, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 10:14:51 np0005464214 focused_sinoussi[310893]: 167 167
Oct  1 10:14:51 np0005464214 podman[310877]: 2025-10-01 14:14:51.284064532 +0000 UTC m=+0.193739031 container died 483811e899b6bb6a712ec84ba198691572ab9d8360fe5710b500946916802086 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_sinoussi, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Oct  1 10:14:51 np0005464214 systemd[1]: libpod-483811e899b6bb6a712ec84ba198691572ab9d8360fe5710b500946916802086.scope: Deactivated successfully.
Oct  1 10:14:51 np0005464214 systemd[1]: var-lib-containers-storage-overlay-e817bd3c98806629bb6db508293d6bef4e53b1858dbe11ed0866ebb1df3e8dcd-merged.mount: Deactivated successfully.
Oct  1 10:14:51 np0005464214 podman[310877]: 2025-10-01 14:14:51.325663613 +0000 UTC m=+0.235338122 container remove 483811e899b6bb6a712ec84ba198691572ab9d8360fe5710b500946916802086 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_sinoussi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 10:14:51 np0005464214 systemd[1]: libpod-conmon-483811e899b6bb6a712ec84ba198691572ab9d8360fe5710b500946916802086.scope: Deactivated successfully.
Oct  1 10:14:51 np0005464214 podman[310916]: 2025-10-01 14:14:51.491232959 +0000 UTC m=+0.043933716 container create ffd4395ac4c89355abb33b86d045848077bf554390fce8c51cce3c08ab425e49 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_albattani, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Oct  1 10:14:51 np0005464214 systemd[1]: Started libpod-conmon-ffd4395ac4c89355abb33b86d045848077bf554390fce8c51cce3c08ab425e49.scope.
Oct  1 10:14:51 np0005464214 podman[310916]: 2025-10-01 14:14:51.470312665 +0000 UTC m=+0.023013412 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 10:14:51 np0005464214 systemd[1]: Started libcrun container.
Oct  1 10:14:51 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a3f29ba02e30ce0deec981d07ca9fd0491b8fcd9fe1d38f284daf5b7b7fad853/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 10:14:51 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a3f29ba02e30ce0deec981d07ca9fd0491b8fcd9fe1d38f284daf5b7b7fad853/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 10:14:51 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a3f29ba02e30ce0deec981d07ca9fd0491b8fcd9fe1d38f284daf5b7b7fad853/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 10:14:51 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a3f29ba02e30ce0deec981d07ca9fd0491b8fcd9fe1d38f284daf5b7b7fad853/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 10:14:51 np0005464214 podman[310916]: 2025-10-01 14:14:51.591352797 +0000 UTC m=+0.144053554 container init ffd4395ac4c89355abb33b86d045848077bf554390fce8c51cce3c08ab425e49 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_albattani, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Oct  1 10:14:51 np0005464214 podman[310916]: 2025-10-01 14:14:51.598544135 +0000 UTC m=+0.151244862 container start ffd4395ac4c89355abb33b86d045848077bf554390fce8c51cce3c08ab425e49 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_albattani, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Oct  1 10:14:51 np0005464214 podman[310916]: 2025-10-01 14:14:51.601433787 +0000 UTC m=+0.154134514 container attach ffd4395ac4c89355abb33b86d045848077bf554390fce8c51cce3c08ab425e49 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_albattani, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 10:14:51 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2169: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:14:52 np0005464214 competent_albattani[310933]: {
Oct  1 10:14:52 np0005464214 competent_albattani[310933]:    "0": [
Oct  1 10:14:52 np0005464214 competent_albattani[310933]:        {
Oct  1 10:14:52 np0005464214 competent_albattani[310933]:            "devices": [
Oct  1 10:14:52 np0005464214 competent_albattani[310933]:                "/dev/loop3"
Oct  1 10:14:52 np0005464214 competent_albattani[310933]:            ],
Oct  1 10:14:52 np0005464214 competent_albattani[310933]:            "lv_name": "ceph_lv0",
Oct  1 10:14:52 np0005464214 competent_albattani[310933]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  1 10:14:52 np0005464214 competent_albattani[310933]:            "lv_size": "21470642176",
Oct  1 10:14:52 np0005464214 competent_albattani[310933]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 10:14:52 np0005464214 competent_albattani[310933]:            "lv_uuid": "wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS",
Oct  1 10:14:52 np0005464214 competent_albattani[310933]:            "name": "ceph_lv0",
Oct  1 10:14:52 np0005464214 competent_albattani[310933]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  1 10:14:52 np0005464214 competent_albattani[310933]:            "tags": {
Oct  1 10:14:52 np0005464214 competent_albattani[310933]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  1 10:14:52 np0005464214 competent_albattani[310933]:                "ceph.block_uuid": "wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS",
Oct  1 10:14:52 np0005464214 competent_albattani[310933]:                "ceph.cephx_lockbox_secret": "",
Oct  1 10:14:52 np0005464214 competent_albattani[310933]:                "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 10:14:52 np0005464214 competent_albattani[310933]:                "ceph.cluster_name": "ceph",
Oct  1 10:14:52 np0005464214 competent_albattani[310933]:                "ceph.crush_device_class": "",
Oct  1 10:14:52 np0005464214 competent_albattani[310933]:                "ceph.encrypted": "0",
Oct  1 10:14:52 np0005464214 competent_albattani[310933]:                "ceph.osd_fsid": "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982",
Oct  1 10:14:52 np0005464214 competent_albattani[310933]:                "ceph.osd_id": "0",
Oct  1 10:14:52 np0005464214 competent_albattani[310933]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 10:14:52 np0005464214 competent_albattani[310933]:                "ceph.type": "block",
Oct  1 10:14:52 np0005464214 competent_albattani[310933]:                "ceph.vdo": "0"
Oct  1 10:14:52 np0005464214 competent_albattani[310933]:            },
Oct  1 10:14:52 np0005464214 competent_albattani[310933]:            "type": "block",
Oct  1 10:14:52 np0005464214 competent_albattani[310933]:            "vg_name": "ceph_vg0"
Oct  1 10:14:52 np0005464214 competent_albattani[310933]:        }
Oct  1 10:14:52 np0005464214 competent_albattani[310933]:    ],
Oct  1 10:14:52 np0005464214 competent_albattani[310933]:    "1": [
Oct  1 10:14:52 np0005464214 competent_albattani[310933]:        {
Oct  1 10:14:52 np0005464214 competent_albattani[310933]:            "devices": [
Oct  1 10:14:52 np0005464214 competent_albattani[310933]:                "/dev/loop4"
Oct  1 10:14:52 np0005464214 competent_albattani[310933]:            ],
Oct  1 10:14:52 np0005464214 competent_albattani[310933]:            "lv_name": "ceph_lv1",
Oct  1 10:14:52 np0005464214 competent_albattani[310933]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  1 10:14:52 np0005464214 competent_albattani[310933]:            "lv_size": "21470642176",
Oct  1 10:14:52 np0005464214 competent_albattani[310933]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f5852bc7-e830-489a-b8a9-42dfbbe71426,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 10:14:52 np0005464214 competent_albattani[310933]:            "lv_uuid": "BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY",
Oct  1 10:14:52 np0005464214 competent_albattani[310933]:            "name": "ceph_lv1",
Oct  1 10:14:52 np0005464214 competent_albattani[310933]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  1 10:14:52 np0005464214 competent_albattani[310933]:            "tags": {
Oct  1 10:14:52 np0005464214 competent_albattani[310933]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  1 10:14:52 np0005464214 competent_albattani[310933]:                "ceph.block_uuid": "BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY",
Oct  1 10:14:52 np0005464214 competent_albattani[310933]:                "ceph.cephx_lockbox_secret": "",
Oct  1 10:14:52 np0005464214 competent_albattani[310933]:                "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 10:14:52 np0005464214 competent_albattani[310933]:                "ceph.cluster_name": "ceph",
Oct  1 10:14:52 np0005464214 competent_albattani[310933]:                "ceph.crush_device_class": "",
Oct  1 10:14:52 np0005464214 competent_albattani[310933]:                "ceph.encrypted": "0",
Oct  1 10:14:52 np0005464214 competent_albattani[310933]:                "ceph.osd_fsid": "f5852bc7-e830-489a-b8a9-42dfbbe71426",
Oct  1 10:14:52 np0005464214 competent_albattani[310933]:                "ceph.osd_id": "1",
Oct  1 10:14:52 np0005464214 competent_albattani[310933]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 10:14:52 np0005464214 competent_albattani[310933]:                "ceph.type": "block",
Oct  1 10:14:52 np0005464214 competent_albattani[310933]:                "ceph.vdo": "0"
Oct  1 10:14:52 np0005464214 competent_albattani[310933]:            },
Oct  1 10:14:52 np0005464214 competent_albattani[310933]:            "type": "block",
Oct  1 10:14:52 np0005464214 competent_albattani[310933]:            "vg_name": "ceph_vg1"
Oct  1 10:14:52 np0005464214 competent_albattani[310933]:        }
Oct  1 10:14:52 np0005464214 competent_albattani[310933]:    ],
Oct  1 10:14:52 np0005464214 competent_albattani[310933]:    "2": [
Oct  1 10:14:52 np0005464214 competent_albattani[310933]:        {
Oct  1 10:14:52 np0005464214 competent_albattani[310933]:            "devices": [
Oct  1 10:14:52 np0005464214 competent_albattani[310933]:                "/dev/loop5"
Oct  1 10:14:52 np0005464214 competent_albattani[310933]:            ],
Oct  1 10:14:52 np0005464214 competent_albattani[310933]:            "lv_name": "ceph_lv2",
Oct  1 10:14:52 np0005464214 competent_albattani[310933]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  1 10:14:52 np0005464214 competent_albattani[310933]:            "lv_size": "21470642176",
Oct  1 10:14:52 np0005464214 competent_albattani[310933]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c4c937e2-a8a8-47c3-af37-fdedb6fff1f9,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 10:14:52 np0005464214 competent_albattani[310933]:            "lv_uuid": "mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK",
Oct  1 10:14:52 np0005464214 competent_albattani[310933]:            "name": "ceph_lv2",
Oct  1 10:14:52 np0005464214 competent_albattani[310933]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  1 10:14:52 np0005464214 competent_albattani[310933]:            "tags": {
Oct  1 10:14:52 np0005464214 competent_albattani[310933]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  1 10:14:52 np0005464214 competent_albattani[310933]:                "ceph.block_uuid": "mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK",
Oct  1 10:14:52 np0005464214 competent_albattani[310933]:                "ceph.cephx_lockbox_secret": "",
Oct  1 10:14:52 np0005464214 competent_albattani[310933]:                "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 10:14:52 np0005464214 competent_albattani[310933]:                "ceph.cluster_name": "ceph",
Oct  1 10:14:52 np0005464214 competent_albattani[310933]:                "ceph.crush_device_class": "",
Oct  1 10:14:52 np0005464214 competent_albattani[310933]:                "ceph.encrypted": "0",
Oct  1 10:14:52 np0005464214 competent_albattani[310933]:                "ceph.osd_fsid": "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9",
Oct  1 10:14:52 np0005464214 competent_albattani[310933]:                "ceph.osd_id": "2",
Oct  1 10:14:52 np0005464214 competent_albattani[310933]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 10:14:52 np0005464214 competent_albattani[310933]:                "ceph.type": "block",
Oct  1 10:14:52 np0005464214 competent_albattani[310933]:                "ceph.vdo": "0"
Oct  1 10:14:52 np0005464214 competent_albattani[310933]:            },
Oct  1 10:14:52 np0005464214 competent_albattani[310933]:            "type": "block",
Oct  1 10:14:52 np0005464214 competent_albattani[310933]:            "vg_name": "ceph_vg2"
Oct  1 10:14:52 np0005464214 competent_albattani[310933]:        }
Oct  1 10:14:52 np0005464214 competent_albattani[310933]:    ]
Oct  1 10:14:52 np0005464214 competent_albattani[310933]: }
Oct  1 10:14:52 np0005464214 systemd[1]: libpod-ffd4395ac4c89355abb33b86d045848077bf554390fce8c51cce3c08ab425e49.scope: Deactivated successfully.
Oct  1 10:14:52 np0005464214 podman[310916]: 2025-10-01 14:14:52.389058471 +0000 UTC m=+0.941759198 container died ffd4395ac4c89355abb33b86d045848077bf554390fce8c51cce3c08ab425e49 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_albattani, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct  1 10:14:52 np0005464214 systemd[1]: var-lib-containers-storage-overlay-a3f29ba02e30ce0deec981d07ca9fd0491b8fcd9fe1d38f284daf5b7b7fad853-merged.mount: Deactivated successfully.
Oct  1 10:14:52 np0005464214 podman[310916]: 2025-10-01 14:14:52.451722301 +0000 UTC m=+1.004423028 container remove ffd4395ac4c89355abb33b86d045848077bf554390fce8c51cce3c08ab425e49 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_albattani, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct  1 10:14:52 np0005464214 systemd[1]: libpod-conmon-ffd4395ac4c89355abb33b86d045848077bf554390fce8c51cce3c08ab425e49.scope: Deactivated successfully.
Oct  1 10:14:53 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e197 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 10:14:53 np0005464214 podman[311092]: 2025-10-01 14:14:53.204003082 +0000 UTC m=+0.084085830 container create 481e89bd64a200ef263b67bb867188b05436d2a252577fd991cfabed6bf3f987 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_tharp, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 10:14:53 np0005464214 systemd[1]: Started libpod-conmon-481e89bd64a200ef263b67bb867188b05436d2a252577fd991cfabed6bf3f987.scope.
Oct  1 10:14:53 np0005464214 podman[311092]: 2025-10-01 14:14:53.161970188 +0000 UTC m=+0.042053006 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 10:14:53 np0005464214 systemd[1]: Started libcrun container.
Oct  1 10:14:53 np0005464214 podman[311092]: 2025-10-01 14:14:53.286517121 +0000 UTC m=+0.166599899 container init 481e89bd64a200ef263b67bb867188b05436d2a252577fd991cfabed6bf3f987 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_tharp, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 10:14:53 np0005464214 podman[311092]: 2025-10-01 14:14:53.296814759 +0000 UTC m=+0.176897487 container start 481e89bd64a200ef263b67bb867188b05436d2a252577fd991cfabed6bf3f987 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_tharp, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 10:14:53 np0005464214 youthful_tharp[311109]: 167 167
Oct  1 10:14:53 np0005464214 systemd[1]: libpod-481e89bd64a200ef263b67bb867188b05436d2a252577fd991cfabed6bf3f987.scope: Deactivated successfully.
Oct  1 10:14:53 np0005464214 podman[311092]: 2025-10-01 14:14:53.302496529 +0000 UTC m=+0.182579307 container attach 481e89bd64a200ef263b67bb867188b05436d2a252577fd991cfabed6bf3f987 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_tharp, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 10:14:53 np0005464214 podman[311092]: 2025-10-01 14:14:53.303042406 +0000 UTC m=+0.183125144 container died 481e89bd64a200ef263b67bb867188b05436d2a252577fd991cfabed6bf3f987 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_tharp, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 10:14:53 np0005464214 systemd[1]: var-lib-containers-storage-overlay-33459e658b7ccc075fe6a8ce562817f0fa40c63dc50e26e257d21ecf3153d52b-merged.mount: Deactivated successfully.
Oct  1 10:14:53 np0005464214 podman[311092]: 2025-10-01 14:14:53.342584891 +0000 UTC m=+0.222667659 container remove 481e89bd64a200ef263b67bb867188b05436d2a252577fd991cfabed6bf3f987 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_tharp, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Oct  1 10:14:53 np0005464214 systemd[1]: libpod-conmon-481e89bd64a200ef263b67bb867188b05436d2a252577fd991cfabed6bf3f987.scope: Deactivated successfully.
Oct  1 10:14:53 np0005464214 podman[311133]: 2025-10-01 14:14:53.518851607 +0000 UTC m=+0.043538553 container create a6ec25fa3647e858e1f18a14f36b1aca65a151fadc84638fb0cc6300cd65e6a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_moser, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 10:14:53 np0005464214 systemd[1]: Started libpod-conmon-a6ec25fa3647e858e1f18a14f36b1aca65a151fadc84638fb0cc6300cd65e6a4.scope.
Oct  1 10:14:53 np0005464214 systemd[1]: Started libcrun container.
Oct  1 10:14:53 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e655704f0b78cd98a99c2fd7c2e141ca64c9339edb8ab425269c59eb9aed54a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 10:14:53 np0005464214 podman[311133]: 2025-10-01 14:14:53.49939906 +0000 UTC m=+0.024086046 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 10:14:53 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e655704f0b78cd98a99c2fd7c2e141ca64c9339edb8ab425269c59eb9aed54a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 10:14:53 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e655704f0b78cd98a99c2fd7c2e141ca64c9339edb8ab425269c59eb9aed54a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 10:14:53 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e655704f0b78cd98a99c2fd7c2e141ca64c9339edb8ab425269c59eb9aed54a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 10:14:53 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2170: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:14:53 np0005464214 podman[311133]: 2025-10-01 14:14:53.612534272 +0000 UTC m=+0.137221308 container init a6ec25fa3647e858e1f18a14f36b1aca65a151fadc84638fb0cc6300cd65e6a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_moser, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef)
Oct  1 10:14:53 np0005464214 podman[311133]: 2025-10-01 14:14:53.626924658 +0000 UTC m=+0.151611644 container start a6ec25fa3647e858e1f18a14f36b1aca65a151fadc84638fb0cc6300cd65e6a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_moser, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 10:14:53 np0005464214 podman[311133]: 2025-10-01 14:14:53.631202284 +0000 UTC m=+0.155889260 container attach a6ec25fa3647e858e1f18a14f36b1aca65a151fadc84638fb0cc6300cd65e6a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_moser, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 10:14:54 np0005464214 quizzical_moser[311149]: {
Oct  1 10:14:54 np0005464214 quizzical_moser[311149]:    "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982": {
Oct  1 10:14:54 np0005464214 quizzical_moser[311149]:        "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 10:14:54 np0005464214 quizzical_moser[311149]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  1 10:14:54 np0005464214 quizzical_moser[311149]:        "osd_id": 0,
Oct  1 10:14:54 np0005464214 quizzical_moser[311149]:        "osd_uuid": "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982",
Oct  1 10:14:54 np0005464214 quizzical_moser[311149]:        "type": "bluestore"
Oct  1 10:14:54 np0005464214 quizzical_moser[311149]:    },
Oct  1 10:14:54 np0005464214 quizzical_moser[311149]:    "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9": {
Oct  1 10:14:54 np0005464214 quizzical_moser[311149]:        "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 10:14:54 np0005464214 quizzical_moser[311149]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  1 10:14:54 np0005464214 quizzical_moser[311149]:        "osd_id": 2,
Oct  1 10:14:54 np0005464214 quizzical_moser[311149]:        "osd_uuid": "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9",
Oct  1 10:14:54 np0005464214 quizzical_moser[311149]:        "type": "bluestore"
Oct  1 10:14:54 np0005464214 quizzical_moser[311149]:    },
Oct  1 10:14:54 np0005464214 quizzical_moser[311149]:    "f5852bc7-e830-489a-b8a9-42dfbbe71426": {
Oct  1 10:14:54 np0005464214 quizzical_moser[311149]:        "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 10:14:54 np0005464214 quizzical_moser[311149]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  1 10:14:54 np0005464214 quizzical_moser[311149]:        "osd_id": 1,
Oct  1 10:14:54 np0005464214 quizzical_moser[311149]:        "osd_uuid": "f5852bc7-e830-489a-b8a9-42dfbbe71426",
Oct  1 10:14:54 np0005464214 quizzical_moser[311149]:        "type": "bluestore"
Oct  1 10:14:54 np0005464214 quizzical_moser[311149]:    }
Oct  1 10:14:54 np0005464214 quizzical_moser[311149]: }
Oct  1 10:14:54 np0005464214 systemd[1]: libpod-a6ec25fa3647e858e1f18a14f36b1aca65a151fadc84638fb0cc6300cd65e6a4.scope: Deactivated successfully.
Oct  1 10:14:54 np0005464214 podman[311133]: 2025-10-01 14:14:54.730932555 +0000 UTC m=+1.255619591 container died a6ec25fa3647e858e1f18a14f36b1aca65a151fadc84638fb0cc6300cd65e6a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_moser, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct  1 10:14:54 np0005464214 systemd[1]: libpod-a6ec25fa3647e858e1f18a14f36b1aca65a151fadc84638fb0cc6300cd65e6a4.scope: Consumed 1.110s CPU time.
Oct  1 10:14:54 np0005464214 systemd[1]: var-lib-containers-storage-overlay-1e655704f0b78cd98a99c2fd7c2e141ca64c9339edb8ab425269c59eb9aed54a-merged.mount: Deactivated successfully.
Oct  1 10:14:54 np0005464214 podman[311133]: 2025-10-01 14:14:54.79689369 +0000 UTC m=+1.321580636 container remove a6ec25fa3647e858e1f18a14f36b1aca65a151fadc84638fb0cc6300cd65e6a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_moser, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct  1 10:14:54 np0005464214 systemd[1]: libpod-conmon-a6ec25fa3647e858e1f18a14f36b1aca65a151fadc84638fb0cc6300cd65e6a4.scope: Deactivated successfully.
Oct  1 10:14:54 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  1 10:14:54 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 10:14:54 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  1 10:14:54 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 10:14:54 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev 9782c2e8-8668-4629-bcf0-4b13370fea0f does not exist
Oct  1 10:14:54 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev eed95999-88df-4c66-a616-78fb5d839c26 does not exist
Oct  1 10:14:55 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  1 10:14:55 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1741173761' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  1 10:14:55 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  1 10:14:55 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1741173761' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  1 10:14:55 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2171: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:14:55 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 10:14:55 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 10:14:57 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2172: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:14:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] _maybe_adjust
Oct  1 10:14:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 10:14:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  1 10:14:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 10:14:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 10:14:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 10:14:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 10:14:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 10:14:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 10:14:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 10:14:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Oct  1 10:14:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 10:14:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct  1 10:14:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 10:14:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 10:14:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 10:14:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  1 10:14:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 10:14:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  1 10:14:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 10:14:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 10:14:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 10:14:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  1 10:14:58 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e197 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 10:14:59 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2173: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:15:01 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2174: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:15:03 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e197 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 10:15:03 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2175: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:15:05 np0005464214 nova_compute[260022]: 2025-10-01 14:15:05.342 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 10:15:05 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2176: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:15:07 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2177: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:15:08 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e197 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 10:15:09 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2178: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:15:11 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2179: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:15:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 14:15:12.340 161890 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 10:15:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 14:15:12.341 161890 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 10:15:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 14:15:12.341 161890 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 10:15:13 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e197 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 10:15:13 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2180: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:15:15 np0005464214 podman[311248]: 2025-10-01 14:15:15.554603991 +0000 UTC m=+0.075700245 container health_status dfa509645741d50db256c392f2c7e50ef8a1653f296eb853aefbe3d2ba4746c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=36bccb96575468ec919301205d8daa2c, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20250923, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct  1 10:15:15 np0005464214 podman[311246]: 2025-10-01 14:15:15.55520014 +0000 UTC m=+0.087669075 container health_status a1dc94033b3cdaf0c1939938a446bc6911aa0e2bf0fa0c22c3e618390e8d96e1 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, org.label-schema.vendor=CentOS, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20250923, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct  1 10:15:15 np0005464214 podman[311247]: 2025-10-01 14:15:15.555632694 +0000 UTC m=+0.090224046 container health_status c13c85560319b246b9406aed1b6b3015b2c424d3ffff37d0301b6634f5976b0d (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, container_name=iscsid, org.label-schema.build-date=20250923, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Oct  1 10:15:15 np0005464214 podman[311245]: 2025-10-01 14:15:15.589708696 +0000 UTC m=+0.123838943 container health_status 583cde33fa111e3f81ff9a7adf51b7bee43cd123d54d60383b2d474b3b5066ad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20250923, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct  1 10:15:15 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2181: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:15:17 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2182: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:15:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 10:15:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 10:15:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 10:15:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 10:15:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 10:15:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 10:15:18 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e197 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 10:15:19 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2183: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:15:21 np0005464214 nova_compute[260022]: 2025-10-01 14:15:21.345 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 10:15:21 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2184: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:15:23 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e197 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 10:15:23 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2185: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:15:24 np0005464214 nova_compute[260022]: 2025-10-01 14:15:24.374 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 10:15:24 np0005464214 systemd-logind[818]: New session 53 of user zuul.
Oct  1 10:15:24 np0005464214 systemd[1]: Started Session 53 of User zuul.
Oct  1 10:15:25 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2186: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:15:26 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 14:15:26.608 161890 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=31, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'd2:60:33', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '86:05:a5:2a:6f:f1'}, ipsec=False) old=SB_Global(nb_cfg=30) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  1 10:15:26 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 14:15:26.611 161890 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  1 10:15:26 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 14:15:26.615 161890 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=7280030e-2ba6-406c-9fae-f8284a927c47, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '31'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  1 10:15:27 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2187: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:15:28 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e197 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 10:15:29 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2188: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:15:30 np0005464214 systemd[1]: session-53.scope: Deactivated successfully.
Oct  1 10:15:30 np0005464214 systemd-logind[818]: Session 53 logged out. Waiting for processes to exit.
Oct  1 10:15:30 np0005464214 systemd-logind[818]: Removed session 53.
Oct  1 10:15:31 np0005464214 nova_compute[260022]: 2025-10-01 14:15:31.345 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 10:15:31 np0005464214 nova_compute[260022]: 2025-10-01 14:15:31.374 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 10:15:31 np0005464214 nova_compute[260022]: 2025-10-01 14:15:31.375 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 10:15:31 np0005464214 nova_compute[260022]: 2025-10-01 14:15:31.375 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 10:15:31 np0005464214 nova_compute[260022]: 2025-10-01 14:15:31.375 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  1 10:15:31 np0005464214 nova_compute[260022]: 2025-10-01 14:15:31.376 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 10:15:31 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2189: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:15:31 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  1 10:15:31 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2713596258' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  1 10:15:31 np0005464214 nova_compute[260022]: 2025-10-01 14:15:31.828 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 10:15:32 np0005464214 nova_compute[260022]: 2025-10-01 14:15:32.021 2 WARNING nova.virt.libvirt.driver [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  1 10:15:32 np0005464214 nova_compute[260022]: 2025-10-01 14:15:32.022 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5029MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  1 10:15:32 np0005464214 nova_compute[260022]: 2025-10-01 14:15:32.023 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 10:15:32 np0005464214 nova_compute[260022]: 2025-10-01 14:15:32.023 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 10:15:32 np0005464214 nova_compute[260022]: 2025-10-01 14:15:32.098 2 INFO nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Instance 3d91db2b-812b-47ea-a0f8-0384b4c68597 has allocations against this compute host but is not found in the database.#033[00m
Oct  1 10:15:32 np0005464214 nova_compute[260022]: 2025-10-01 14:15:32.112 2 INFO nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Instance 84424f30-80d3-425e-b60f-86809ad3c076 has allocations against this compute host but is not found in the database.#033[00m
Oct  1 10:15:32 np0005464214 nova_compute[260022]: 2025-10-01 14:15:32.112 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  1 10:15:32 np0005464214 nova_compute[260022]: 2025-10-01 14:15:32.112 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  1 10:15:32 np0005464214 nova_compute[260022]: 2025-10-01 14:15:32.241 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 10:15:32 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  1 10:15:32 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/306180590' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  1 10:15:32 np0005464214 nova_compute[260022]: 2025-10-01 14:15:32.707 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.466s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 10:15:32 np0005464214 nova_compute[260022]: 2025-10-01 14:15:32.714 2 DEBUG nova.compute.provider_tree [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Inventory has not changed in ProviderTree for provider: c1b9017d-7e6f-44ea-9ee2-bc19313d736f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  1 10:15:32 np0005464214 nova_compute[260022]: 2025-10-01 14:15:32.734 2 DEBUG nova.scheduler.client.report [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Inventory has not changed for provider c1b9017d-7e6f-44ea-9ee2-bc19313d736f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  1 10:15:32 np0005464214 nova_compute[260022]: 2025-10-01 14:15:32.735 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  1 10:15:32 np0005464214 nova_compute[260022]: 2025-10-01 14:15:32.735 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.712s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 10:15:33 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e197 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 10:15:33 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2190: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:15:35 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2191: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:15:36 np0005464214 nova_compute[260022]: 2025-10-01 14:15:36.731 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 10:15:36 np0005464214 nova_compute[260022]: 2025-10-01 14:15:36.731 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 10:15:36 np0005464214 nova_compute[260022]: 2025-10-01 14:15:36.732 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 10:15:36 np0005464214 nova_compute[260022]: 2025-10-01 14:15:36.732 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 10:15:36 np0005464214 nova_compute[260022]: 2025-10-01 14:15:36.732 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  1 10:15:37 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2192: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:15:38 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e197 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 10:15:39 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2193: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:15:39 np0005464214 ceph-mon[74802]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #102. Immutable memtables: 0.
Oct  1 10:15:39 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:15:39.700766) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  1 10:15:39 np0005464214 ceph-mon[74802]: rocksdb: [db/flush_job.cc:856] [default] [JOB 59] Flushing memtable with next log file: 102
Oct  1 10:15:39 np0005464214 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759328139700805, "job": 59, "event": "flush_started", "num_memtables": 1, "num_entries": 639, "num_deletes": 251, "total_data_size": 783368, "memory_usage": 796072, "flush_reason": "Manual Compaction"}
Oct  1 10:15:39 np0005464214 ceph-mon[74802]: rocksdb: [db/flush_job.cc:885] [default] [JOB 59] Level-0 flush table #103: started
Oct  1 10:15:39 np0005464214 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759328139708418, "cf_name": "default", "job": 59, "event": "table_file_creation", "file_number": 103, "file_size": 776533, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 43739, "largest_seqno": 44377, "table_properties": {"data_size": 773067, "index_size": 1374, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1029, "raw_key_size": 7701, "raw_average_key_size": 19, "raw_value_size": 766242, "raw_average_value_size": 1910, "num_data_blocks": 61, "num_entries": 401, "num_filter_entries": 401, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759328089, "oldest_key_time": 1759328089, "file_creation_time": 1759328139, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e1fb0bd7-c6bf-40f2-8bfb-ffd039289f42", "db_session_id": "NJZTWL88H5HSB4Q4NEC9", "orig_file_number": 103, "seqno_to_time_mapping": "N/A"}}
Oct  1 10:15:39 np0005464214 ceph-mon[74802]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 59] Flush lasted 7698 microseconds, and 4452 cpu microseconds.
Oct  1 10:15:39 np0005464214 ceph-mon[74802]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  1 10:15:39 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:15:39.708464) [db/flush_job.cc:967] [default] [JOB 59] Level-0 flush table #103: 776533 bytes OK
Oct  1 10:15:39 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:15:39.708485) [db/memtable_list.cc:519] [default] Level-0 commit table #103 started
Oct  1 10:15:39 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:15:39.712856) [db/memtable_list.cc:722] [default] Level-0 commit table #103: memtable #1 done
Oct  1 10:15:39 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:15:39.712875) EVENT_LOG_v1 {"time_micros": 1759328139712869, "job": 59, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  1 10:15:39 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:15:39.712893) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  1 10:15:39 np0005464214 ceph-mon[74802]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 59] Try to delete WAL files size 779964, prev total WAL file size 781121, number of live WAL files 2.
Oct  1 10:15:39 np0005464214 ceph-mon[74802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000099.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  1 10:15:39 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:15:39.713527) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730034303136' seq:72057594037927935, type:22 .. '7061786F730034323638' seq:0, type:0; will stop at (end)
Oct  1 10:15:39 np0005464214 ceph-mon[74802]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 60] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  1 10:15:39 np0005464214 ceph-mon[74802]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 59 Base level 0, inputs: [103(758KB)], [101(10MB)]
Oct  1 10:15:39 np0005464214 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759328139713579, "job": 60, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [103], "files_L6": [101], "score": -1, "input_data_size": 11679141, "oldest_snapshot_seqno": -1}
Oct  1 10:15:39 np0005464214 ceph-mon[74802]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 60] Generated table #104: 6043 keys, 9920838 bytes, temperature: kUnknown
Oct  1 10:15:39 np0005464214 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759328139788318, "cf_name": "default", "job": 60, "event": "table_file_creation", "file_number": 104, "file_size": 9920838, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9878681, "index_size": 25919, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 15173, "raw_key_size": 157187, "raw_average_key_size": 26, "raw_value_size": 9767411, "raw_average_value_size": 1616, "num_data_blocks": 1031, "num_entries": 6043, "num_filter_entries": 6043, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759324077, "oldest_key_time": 0, "file_creation_time": 1759328139, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e1fb0bd7-c6bf-40f2-8bfb-ffd039289f42", "db_session_id": "NJZTWL88H5HSB4Q4NEC9", "orig_file_number": 104, "seqno_to_time_mapping": "N/A"}}
Oct  1 10:15:39 np0005464214 ceph-mon[74802]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  1 10:15:39 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:15:39.788837) [db/compaction/compaction_job.cc:1663] [default] [JOB 60] Compacted 1@0 + 1@6 files to L6 => 9920838 bytes
Oct  1 10:15:39 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:15:39.790766) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 156.0 rd, 132.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.7, 10.4 +0.0 blob) out(9.5 +0.0 blob), read-write-amplify(27.8) write-amplify(12.8) OK, records in: 6555, records dropped: 512 output_compression: NoCompression
Oct  1 10:15:39 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:15:39.790814) EVENT_LOG_v1 {"time_micros": 1759328139790794, "job": 60, "event": "compaction_finished", "compaction_time_micros": 74882, "compaction_time_cpu_micros": 40617, "output_level": 6, "num_output_files": 1, "total_output_size": 9920838, "num_input_records": 6555, "num_output_records": 6043, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  1 10:15:39 np0005464214 ceph-mon[74802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000103.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  1 10:15:39 np0005464214 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759328139791360, "job": 60, "event": "table_file_deletion", "file_number": 103}
Oct  1 10:15:39 np0005464214 ceph-mon[74802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000101.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  1 10:15:39 np0005464214 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759328139795201, "job": 60, "event": "table_file_deletion", "file_number": 101}
Oct  1 10:15:39 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:15:39.713474) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 10:15:39 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:15:39.795321) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 10:15:39 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:15:39.795330) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 10:15:39 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:15:39.795333) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 10:15:39 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:15:39.795336) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 10:15:39 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:15:39.795339) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 10:15:41 np0005464214 nova_compute[260022]: 2025-10-01 14:15:41.345 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 10:15:41 np0005464214 nova_compute[260022]: 2025-10-01 14:15:41.345 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  1 10:15:41 np0005464214 nova_compute[260022]: 2025-10-01 14:15:41.345 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  1 10:15:41 np0005464214 nova_compute[260022]: 2025-10-01 14:15:41.360 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  1 10:15:41 np0005464214 nova_compute[260022]: 2025-10-01 14:15:41.361 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 10:15:41 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2194: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:15:42 np0005464214 nova_compute[260022]: 2025-10-01 14:15:42.345 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 10:15:43 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e197 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 10:15:43 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2195: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:15:45 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2196: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:15:46 np0005464214 podman[311631]: 2025-10-01 14:15:46.55140749 +0000 UTC m=+0.085622258 container health_status dfa509645741d50db256c392f2c7e50ef8a1653f296eb853aefbe3d2ba4746c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250923, org.label-schema.license=GPLv2, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct  1 10:15:46 np0005464214 podman[311629]: 2025-10-01 14:15:46.551590856 +0000 UTC m=+0.096021719 container health_status a1dc94033b3cdaf0c1939938a446bc6911aa0e2bf0fa0c22c3e618390e8d96e1 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_managed=true, config_id=multipathd, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=36bccb96575468ec919301205d8daa2c, io.buildah.version=1.41.3, org.label-schema.build-date=20250923, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Oct  1 10:15:46 np0005464214 podman[311630]: 2025-10-01 14:15:46.567509252 +0000 UTC m=+0.104199459 container health_status c13c85560319b246b9406aed1b6b3015b2c424d3ffff37d0301b6634f5976b0d (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, config_id=iscsid, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.build-date=20250923, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible)
Oct  1 10:15:46 np0005464214 podman[311628]: 2025-10-01 14:15:46.603565647 +0000 UTC m=+0.150622173 container health_status 583cde33fa111e3f81ff9a7adf51b7bee43cd123d54d60383b2d474b3b5066ad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.build-date=20250923, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=36bccb96575468ec919301205d8daa2c)
Oct  1 10:15:47 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2197: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:15:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 10:15:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 10:15:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 10:15:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 10:15:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 10:15:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 10:15:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] Optimize plan auto_2025-10-01_14:15:47
Oct  1 10:15:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  1 10:15:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] do_upmap
Oct  1 10:15:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] pools ['images', 'vms', 'default.rgw.control', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'volumes', 'default.rgw.meta', 'backups', '.rgw.root', '.mgr', 'default.rgw.log']
Oct  1 10:15:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] prepared 0/10 changes
Oct  1 10:15:48 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e197 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 10:15:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  1 10:15:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  1 10:15:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  1 10:15:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  1 10:15:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  1 10:15:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  1 10:15:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  1 10:15:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  1 10:15:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  1 10:15:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  1 10:15:48 np0005464214 nova_compute[260022]: 2025-10-01 14:15:48.345 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 10:15:48 np0005464214 nova_compute[260022]: 2025-10-01 14:15:48.346 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Oct  1 10:15:48 np0005464214 nova_compute[260022]: 2025-10-01 14:15:48.361 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Oct  1 10:15:49 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2198: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:15:51 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2199: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:15:53 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e197 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 10:15:53 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2200: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:15:55 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  1 10:15:55 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2065959952' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  1 10:15:55 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  1 10:15:55 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2065959952' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  1 10:15:55 np0005464214 nova_compute[260022]: 2025-10-01 14:15:55.345 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 10:15:55 np0005464214 nova_compute[260022]: 2025-10-01 14:15:55.345 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Oct  1 10:15:55 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2201: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:15:55 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  1 10:15:55 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  1 10:15:55 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  1 10:15:55 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  1 10:15:55 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  1 10:15:55 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 10:15:55 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev 84d9faf8-d48e-4921-8fbc-3849b055031a does not exist
Oct  1 10:15:55 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev c6d174fc-324b-425d-bb4e-eef9d32ae2a1 does not exist
Oct  1 10:15:55 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev 29759e6d-c2d1-4b42-916a-3edea1626f73 does not exist
Oct  1 10:15:55 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  1 10:15:55 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  1 10:15:55 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  1 10:15:55 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  1 10:15:55 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  1 10:15:55 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  1 10:15:56 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  1 10:15:56 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 10:15:56 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  1 10:15:56 np0005464214 podman[311978]: 2025-10-01 14:15:56.735529766 +0000 UTC m=+0.065302724 container create 445b4619d58366457450d6f4308909239c68d02bc695c9f481f9c9a3b46e02c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_fermat, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Oct  1 10:15:56 np0005464214 systemd[1]: Started libpod-conmon-445b4619d58366457450d6f4308909239c68d02bc695c9f481f9c9a3b46e02c7.scope.
Oct  1 10:15:56 np0005464214 podman[311978]: 2025-10-01 14:15:56.700439032 +0000 UTC m=+0.030212080 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 10:15:56 np0005464214 systemd[1]: Started libcrun container.
Oct  1 10:15:56 np0005464214 podman[311978]: 2025-10-01 14:15:56.83897041 +0000 UTC m=+0.168743448 container init 445b4619d58366457450d6f4308909239c68d02bc695c9f481f9c9a3b46e02c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_fermat, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 10:15:56 np0005464214 podman[311978]: 2025-10-01 14:15:56.850540566 +0000 UTC m=+0.180313535 container start 445b4619d58366457450d6f4308909239c68d02bc695c9f481f9c9a3b46e02c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_fermat, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 10:15:56 np0005464214 podman[311978]: 2025-10-01 14:15:56.854504193 +0000 UTC m=+0.184277161 container attach 445b4619d58366457450d6f4308909239c68d02bc695c9f481f9c9a3b46e02c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_fermat, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 10:15:56 np0005464214 elated_fermat[311994]: 167 167
Oct  1 10:15:56 np0005464214 podman[311978]: 2025-10-01 14:15:56.856592899 +0000 UTC m=+0.186365867 container died 445b4619d58366457450d6f4308909239c68d02bc695c9f481f9c9a3b46e02c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_fermat, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct  1 10:15:56 np0005464214 systemd[1]: libpod-445b4619d58366457450d6f4308909239c68d02bc695c9f481f9c9a3b46e02c7.scope: Deactivated successfully.
Oct  1 10:15:56 np0005464214 systemd[1]: var-lib-containers-storage-overlay-1ea44bd2075d4453b53ec5efce2b92b09a22b60156ad3704a2e2443ee16cae42-merged.mount: Deactivated successfully.
Oct  1 10:15:56 np0005464214 podman[311978]: 2025-10-01 14:15:56.922021766 +0000 UTC m=+0.251794734 container remove 445b4619d58366457450d6f4308909239c68d02bc695c9f481f9c9a3b46e02c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_fermat, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 10:15:56 np0005464214 systemd[1]: libpod-conmon-445b4619d58366457450d6f4308909239c68d02bc695c9f481f9c9a3b46e02c7.scope: Deactivated successfully.
Oct  1 10:15:57 np0005464214 podman[312017]: 2025-10-01 14:15:57.135430441 +0000 UTC m=+0.072716039 container create cc191af1bd921fcd10a802be50d42e5c50ada8f4eed649078a03f0383e2a32a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_lehmann, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct  1 10:15:57 np0005464214 systemd[1]: Started libpod-conmon-cc191af1bd921fcd10a802be50d42e5c50ada8f4eed649078a03f0383e2a32a5.scope.
Oct  1 10:15:57 np0005464214 podman[312017]: 2025-10-01 14:15:57.10481501 +0000 UTC m=+0.042100648 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 10:15:57 np0005464214 systemd[1]: Started libcrun container.
Oct  1 10:15:57 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f7024d2f53ab3af25f9ab0a0f820176d5b9669efe4164915f783803658c084f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 10:15:57 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f7024d2f53ab3af25f9ab0a0f820176d5b9669efe4164915f783803658c084f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 10:15:57 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f7024d2f53ab3af25f9ab0a0f820176d5b9669efe4164915f783803658c084f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 10:15:57 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f7024d2f53ab3af25f9ab0a0f820176d5b9669efe4164915f783803658c084f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 10:15:57 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f7024d2f53ab3af25f9ab0a0f820176d5b9669efe4164915f783803658c084f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  1 10:15:57 np0005464214 podman[312017]: 2025-10-01 14:15:57.225799039 +0000 UTC m=+0.163084627 container init cc191af1bd921fcd10a802be50d42e5c50ada8f4eed649078a03f0383e2a32a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_lehmann, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 10:15:57 np0005464214 podman[312017]: 2025-10-01 14:15:57.240101564 +0000 UTC m=+0.177387122 container start cc191af1bd921fcd10a802be50d42e5c50ada8f4eed649078a03f0383e2a32a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_lehmann, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct  1 10:15:57 np0005464214 podman[312017]: 2025-10-01 14:15:57.244633008 +0000 UTC m=+0.181918566 container attach cc191af1bd921fcd10a802be50d42e5c50ada8f4eed649078a03f0383e2a32a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_lehmann, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct  1 10:15:57 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2202: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:15:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] _maybe_adjust
Oct  1 10:15:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 10:15:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  1 10:15:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 10:15:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 10:15:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 10:15:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 10:15:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 10:15:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 10:15:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 10:15:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Oct  1 10:15:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 10:15:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct  1 10:15:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 10:15:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 10:15:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 10:15:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  1 10:15:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 10:15:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  1 10:15:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 10:15:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 10:15:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 10:15:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  1 10:15:58 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e197 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 10:15:58 np0005464214 unruffled_lehmann[312034]: --> passed data devices: 0 physical, 3 LVM
Oct  1 10:15:58 np0005464214 unruffled_lehmann[312034]: --> relative data size: 1.0
Oct  1 10:15:58 np0005464214 unruffled_lehmann[312034]: --> All data devices are unavailable
Oct  1 10:15:58 np0005464214 systemd[1]: libpod-cc191af1bd921fcd10a802be50d42e5c50ada8f4eed649078a03f0383e2a32a5.scope: Deactivated successfully.
Oct  1 10:15:58 np0005464214 podman[312017]: 2025-10-01 14:15:58.432064804 +0000 UTC m=+1.369350362 container died cc191af1bd921fcd10a802be50d42e5c50ada8f4eed649078a03f0383e2a32a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_lehmann, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 10:15:58 np0005464214 systemd[1]: libpod-cc191af1bd921fcd10a802be50d42e5c50ada8f4eed649078a03f0383e2a32a5.scope: Consumed 1.132s CPU time.
Oct  1 10:15:58 np0005464214 systemd[1]: var-lib-containers-storage-overlay-5f7024d2f53ab3af25f9ab0a0f820176d5b9669efe4164915f783803658c084f-merged.mount: Deactivated successfully.
Oct  1 10:15:58 np0005464214 podman[312017]: 2025-10-01 14:15:58.492757691 +0000 UTC m=+1.430043269 container remove cc191af1bd921fcd10a802be50d42e5c50ada8f4eed649078a03f0383e2a32a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_lehmann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3)
Oct  1 10:15:58 np0005464214 systemd[1]: libpod-conmon-cc191af1bd921fcd10a802be50d42e5c50ada8f4eed649078a03f0383e2a32a5.scope: Deactivated successfully.
Oct  1 10:15:59 np0005464214 podman[312220]: 2025-10-01 14:15:59.215543286 +0000 UTC m=+0.051888098 container create 4a11572eff6b04951ffa2c89699a7da67eed7e23b557e64645a18c031533f646 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_wescoff, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Oct  1 10:15:59 np0005464214 systemd[1]: Started libpod-conmon-4a11572eff6b04951ffa2c89699a7da67eed7e23b557e64645a18c031533f646.scope.
Oct  1 10:15:59 np0005464214 podman[312220]: 2025-10-01 14:15:59.192759383 +0000 UTC m=+0.029104355 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 10:15:59 np0005464214 systemd[1]: Started libcrun container.
Oct  1 10:15:59 np0005464214 podman[312220]: 2025-10-01 14:15:59.315132558 +0000 UTC m=+0.151477410 container init 4a11572eff6b04951ffa2c89699a7da67eed7e23b557e64645a18c031533f646 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_wescoff, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Oct  1 10:15:59 np0005464214 podman[312220]: 2025-10-01 14:15:59.327317005 +0000 UTC m=+0.163661827 container start 4a11572eff6b04951ffa2c89699a7da67eed7e23b557e64645a18c031533f646 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_wescoff, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 10:15:59 np0005464214 podman[312220]: 2025-10-01 14:15:59.331223568 +0000 UTC m=+0.167568410 container attach 4a11572eff6b04951ffa2c89699a7da67eed7e23b557e64645a18c031533f646 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_wescoff, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 10:15:59 np0005464214 jovial_wescoff[312236]: 167 167
Oct  1 10:15:59 np0005464214 systemd[1]: libpod-4a11572eff6b04951ffa2c89699a7da67eed7e23b557e64645a18c031533f646.scope: Deactivated successfully.
Oct  1 10:15:59 np0005464214 podman[312220]: 2025-10-01 14:15:59.336707852 +0000 UTC m=+0.173052704 container died 4a11572eff6b04951ffa2c89699a7da67eed7e23b557e64645a18c031533f646 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_wescoff, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 10:15:59 np0005464214 systemd[1]: var-lib-containers-storage-overlay-4e0cd50a5fe8ec3f1cd2074656bc5fa63b2a439b3bd527f62654780dfd5cbb21-merged.mount: Deactivated successfully.
Oct  1 10:15:59 np0005464214 podman[312220]: 2025-10-01 14:15:59.389633712 +0000 UTC m=+0.225978524 container remove 4a11572eff6b04951ffa2c89699a7da67eed7e23b557e64645a18c031533f646 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_wescoff, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 10:15:59 np0005464214 systemd[1]: libpod-conmon-4a11572eff6b04951ffa2c89699a7da67eed7e23b557e64645a18c031533f646.scope: Deactivated successfully.
Oct  1 10:15:59 np0005464214 podman[312261]: 2025-10-01 14:15:59.564907627 +0000 UTC m=+0.047470188 container create 0f74e90389c5d4e50cca081dcd8021c0b0fe708f41f947a21f5c4a5011a98795 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_mcnulty, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct  1 10:15:59 np0005464214 systemd[1]: Started libpod-conmon-0f74e90389c5d4e50cca081dcd8021c0b0fe708f41f947a21f5c4a5011a98795.scope.
Oct  1 10:15:59 np0005464214 podman[312261]: 2025-10-01 14:15:59.540667078 +0000 UTC m=+0.023229619 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 10:15:59 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2203: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:15:59 np0005464214 systemd[1]: Started libcrun container.
Oct  1 10:15:59 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9e609b72c07dcfa1225775cfa163a99263de040a113fccb01a4ac33c8fc2b3c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 10:15:59 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9e609b72c07dcfa1225775cfa163a99263de040a113fccb01a4ac33c8fc2b3c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 10:15:59 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9e609b72c07dcfa1225775cfa163a99263de040a113fccb01a4ac33c8fc2b3c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 10:15:59 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9e609b72c07dcfa1225775cfa163a99263de040a113fccb01a4ac33c8fc2b3c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 10:15:59 np0005464214 podman[312261]: 2025-10-01 14:15:59.674059302 +0000 UTC m=+0.156621923 container init 0f74e90389c5d4e50cca081dcd8021c0b0fe708f41f947a21f5c4a5011a98795 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_mcnulty, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3)
Oct  1 10:15:59 np0005464214 podman[312261]: 2025-10-01 14:15:59.688288644 +0000 UTC m=+0.170851195 container start 0f74e90389c5d4e50cca081dcd8021c0b0fe708f41f947a21f5c4a5011a98795 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_mcnulty, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 10:15:59 np0005464214 podman[312261]: 2025-10-01 14:15:59.692932501 +0000 UTC m=+0.175495052 container attach 0f74e90389c5d4e50cca081dcd8021c0b0fe708f41f947a21f5c4a5011a98795 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_mcnulty, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 10:16:00 np0005464214 kind_mcnulty[312278]: {
Oct  1 10:16:00 np0005464214 kind_mcnulty[312278]:    "0": [
Oct  1 10:16:00 np0005464214 kind_mcnulty[312278]:        {
Oct  1 10:16:00 np0005464214 kind_mcnulty[312278]:            "devices": [
Oct  1 10:16:00 np0005464214 kind_mcnulty[312278]:                "/dev/loop3"
Oct  1 10:16:00 np0005464214 kind_mcnulty[312278]:            ],
Oct  1 10:16:00 np0005464214 kind_mcnulty[312278]:            "lv_name": "ceph_lv0",
Oct  1 10:16:00 np0005464214 kind_mcnulty[312278]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  1 10:16:00 np0005464214 kind_mcnulty[312278]:            "lv_size": "21470642176",
Oct  1 10:16:00 np0005464214 kind_mcnulty[312278]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 10:16:00 np0005464214 kind_mcnulty[312278]:            "lv_uuid": "wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS",
Oct  1 10:16:00 np0005464214 kind_mcnulty[312278]:            "name": "ceph_lv0",
Oct  1 10:16:00 np0005464214 kind_mcnulty[312278]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  1 10:16:00 np0005464214 kind_mcnulty[312278]:            "tags": {
Oct  1 10:16:00 np0005464214 kind_mcnulty[312278]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  1 10:16:00 np0005464214 kind_mcnulty[312278]:                "ceph.block_uuid": "wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS",
Oct  1 10:16:00 np0005464214 kind_mcnulty[312278]:                "ceph.cephx_lockbox_secret": "",
Oct  1 10:16:00 np0005464214 kind_mcnulty[312278]:                "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 10:16:00 np0005464214 kind_mcnulty[312278]:                "ceph.cluster_name": "ceph",
Oct  1 10:16:00 np0005464214 kind_mcnulty[312278]:                "ceph.crush_device_class": "",
Oct  1 10:16:00 np0005464214 kind_mcnulty[312278]:                "ceph.encrypted": "0",
Oct  1 10:16:00 np0005464214 kind_mcnulty[312278]:                "ceph.osd_fsid": "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982",
Oct  1 10:16:00 np0005464214 kind_mcnulty[312278]:                "ceph.osd_id": "0",
Oct  1 10:16:00 np0005464214 kind_mcnulty[312278]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 10:16:00 np0005464214 kind_mcnulty[312278]:                "ceph.type": "block",
Oct  1 10:16:00 np0005464214 kind_mcnulty[312278]:                "ceph.vdo": "0"
Oct  1 10:16:00 np0005464214 kind_mcnulty[312278]:            },
Oct  1 10:16:00 np0005464214 kind_mcnulty[312278]:            "type": "block",
Oct  1 10:16:00 np0005464214 kind_mcnulty[312278]:            "vg_name": "ceph_vg0"
Oct  1 10:16:00 np0005464214 kind_mcnulty[312278]:        }
Oct  1 10:16:00 np0005464214 kind_mcnulty[312278]:    ],
Oct  1 10:16:00 np0005464214 kind_mcnulty[312278]:    "1": [
Oct  1 10:16:00 np0005464214 kind_mcnulty[312278]:        {
Oct  1 10:16:00 np0005464214 kind_mcnulty[312278]:            "devices": [
Oct  1 10:16:00 np0005464214 kind_mcnulty[312278]:                "/dev/loop4"
Oct  1 10:16:00 np0005464214 kind_mcnulty[312278]:            ],
Oct  1 10:16:00 np0005464214 kind_mcnulty[312278]:            "lv_name": "ceph_lv1",
Oct  1 10:16:00 np0005464214 kind_mcnulty[312278]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  1 10:16:00 np0005464214 kind_mcnulty[312278]:            "lv_size": "21470642176",
Oct  1 10:16:00 np0005464214 kind_mcnulty[312278]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f5852bc7-e830-489a-b8a9-42dfbbe71426,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 10:16:00 np0005464214 kind_mcnulty[312278]:            "lv_uuid": "BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY",
Oct  1 10:16:00 np0005464214 kind_mcnulty[312278]:            "name": "ceph_lv1",
Oct  1 10:16:00 np0005464214 kind_mcnulty[312278]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  1 10:16:00 np0005464214 kind_mcnulty[312278]:            "tags": {
Oct  1 10:16:00 np0005464214 kind_mcnulty[312278]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  1 10:16:00 np0005464214 kind_mcnulty[312278]:                "ceph.block_uuid": "BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY",
Oct  1 10:16:00 np0005464214 kind_mcnulty[312278]:                "ceph.cephx_lockbox_secret": "",
Oct  1 10:16:00 np0005464214 kind_mcnulty[312278]:                "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 10:16:00 np0005464214 kind_mcnulty[312278]:                "ceph.cluster_name": "ceph",
Oct  1 10:16:00 np0005464214 kind_mcnulty[312278]:                "ceph.crush_device_class": "",
Oct  1 10:16:00 np0005464214 kind_mcnulty[312278]:                "ceph.encrypted": "0",
Oct  1 10:16:00 np0005464214 kind_mcnulty[312278]:                "ceph.osd_fsid": "f5852bc7-e830-489a-b8a9-42dfbbe71426",
Oct  1 10:16:00 np0005464214 kind_mcnulty[312278]:                "ceph.osd_id": "1",
Oct  1 10:16:00 np0005464214 kind_mcnulty[312278]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 10:16:00 np0005464214 kind_mcnulty[312278]:                "ceph.type": "block",
Oct  1 10:16:00 np0005464214 kind_mcnulty[312278]:                "ceph.vdo": "0"
Oct  1 10:16:00 np0005464214 kind_mcnulty[312278]:            },
Oct  1 10:16:00 np0005464214 kind_mcnulty[312278]:            "type": "block",
Oct  1 10:16:00 np0005464214 kind_mcnulty[312278]:            "vg_name": "ceph_vg1"
Oct  1 10:16:00 np0005464214 kind_mcnulty[312278]:        }
Oct  1 10:16:00 np0005464214 kind_mcnulty[312278]:    ],
Oct  1 10:16:00 np0005464214 kind_mcnulty[312278]:    "2": [
Oct  1 10:16:00 np0005464214 kind_mcnulty[312278]:        {
Oct  1 10:16:00 np0005464214 kind_mcnulty[312278]:            "devices": [
Oct  1 10:16:00 np0005464214 kind_mcnulty[312278]:                "/dev/loop5"
Oct  1 10:16:00 np0005464214 kind_mcnulty[312278]:            ],
Oct  1 10:16:00 np0005464214 kind_mcnulty[312278]:            "lv_name": "ceph_lv2",
Oct  1 10:16:00 np0005464214 kind_mcnulty[312278]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  1 10:16:00 np0005464214 kind_mcnulty[312278]:            "lv_size": "21470642176",
Oct  1 10:16:00 np0005464214 kind_mcnulty[312278]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c4c937e2-a8a8-47c3-af37-fdedb6fff1f9,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 10:16:00 np0005464214 kind_mcnulty[312278]:            "lv_uuid": "mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK",
Oct  1 10:16:00 np0005464214 kind_mcnulty[312278]:            "name": "ceph_lv2",
Oct  1 10:16:00 np0005464214 kind_mcnulty[312278]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  1 10:16:00 np0005464214 kind_mcnulty[312278]:            "tags": {
Oct  1 10:16:00 np0005464214 kind_mcnulty[312278]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  1 10:16:00 np0005464214 kind_mcnulty[312278]:                "ceph.block_uuid": "mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK",
Oct  1 10:16:00 np0005464214 kind_mcnulty[312278]:                "ceph.cephx_lockbox_secret": "",
Oct  1 10:16:00 np0005464214 kind_mcnulty[312278]:                "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 10:16:00 np0005464214 kind_mcnulty[312278]:                "ceph.cluster_name": "ceph",
Oct  1 10:16:00 np0005464214 kind_mcnulty[312278]:                "ceph.crush_device_class": "",
Oct  1 10:16:00 np0005464214 kind_mcnulty[312278]:                "ceph.encrypted": "0",
Oct  1 10:16:00 np0005464214 kind_mcnulty[312278]:                "ceph.osd_fsid": "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9",
Oct  1 10:16:00 np0005464214 kind_mcnulty[312278]:                "ceph.osd_id": "2",
Oct  1 10:16:00 np0005464214 kind_mcnulty[312278]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 10:16:00 np0005464214 kind_mcnulty[312278]:                "ceph.type": "block",
Oct  1 10:16:00 np0005464214 kind_mcnulty[312278]:                "ceph.vdo": "0"
Oct  1 10:16:00 np0005464214 kind_mcnulty[312278]:            },
Oct  1 10:16:00 np0005464214 kind_mcnulty[312278]:            "type": "block",
Oct  1 10:16:00 np0005464214 kind_mcnulty[312278]:            "vg_name": "ceph_vg2"
Oct  1 10:16:00 np0005464214 kind_mcnulty[312278]:        }
Oct  1 10:16:00 np0005464214 kind_mcnulty[312278]:    ]
Oct  1 10:16:00 np0005464214 kind_mcnulty[312278]: }
Oct  1 10:16:00 np0005464214 systemd[1]: libpod-0f74e90389c5d4e50cca081dcd8021c0b0fe708f41f947a21f5c4a5011a98795.scope: Deactivated successfully.
Oct  1 10:16:00 np0005464214 podman[312261]: 2025-10-01 14:16:00.516233568 +0000 UTC m=+0.998796119 container died 0f74e90389c5d4e50cca081dcd8021c0b0fe708f41f947a21f5c4a5011a98795 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_mcnulty, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Oct  1 10:16:00 np0005464214 systemd[1]: var-lib-containers-storage-overlay-c9e609b72c07dcfa1225775cfa163a99263de040a113fccb01a4ac33c8fc2b3c-merged.mount: Deactivated successfully.
Oct  1 10:16:00 np0005464214 podman[312261]: 2025-10-01 14:16:00.584326059 +0000 UTC m=+1.066888590 container remove 0f74e90389c5d4e50cca081dcd8021c0b0fe708f41f947a21f5c4a5011a98795 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_mcnulty, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 10:16:00 np0005464214 systemd[1]: libpod-conmon-0f74e90389c5d4e50cca081dcd8021c0b0fe708f41f947a21f5c4a5011a98795.scope: Deactivated successfully.
Oct  1 10:16:01 np0005464214 podman[312438]: 2025-10-01 14:16:01.367603315 +0000 UTC m=+0.055413770 container create 91335b325c78d3841064035b0f587f7c4d3604a377c56157ada05ce9ff519321 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_bhabha, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct  1 10:16:01 np0005464214 systemd[1]: Started libpod-conmon-91335b325c78d3841064035b0f587f7c4d3604a377c56157ada05ce9ff519321.scope.
Oct  1 10:16:01 np0005464214 podman[312438]: 2025-10-01 14:16:01.341677882 +0000 UTC m=+0.029488397 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 10:16:01 np0005464214 systemd[1]: Started libcrun container.
Oct  1 10:16:01 np0005464214 podman[312438]: 2025-10-01 14:16:01.470694038 +0000 UTC m=+0.158504563 container init 91335b325c78d3841064035b0f587f7c4d3604a377c56157ada05ce9ff519321 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_bhabha, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 10:16:01 np0005464214 podman[312438]: 2025-10-01 14:16:01.482705469 +0000 UTC m=+0.170515934 container start 91335b325c78d3841064035b0f587f7c4d3604a377c56157ada05ce9ff519321 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_bhabha, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 10:16:01 np0005464214 podman[312438]: 2025-10-01 14:16:01.486957474 +0000 UTC m=+0.174767949 container attach 91335b325c78d3841064035b0f587f7c4d3604a377c56157ada05ce9ff519321 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_bhabha, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct  1 10:16:01 np0005464214 optimistic_bhabha[312455]: 167 167
Oct  1 10:16:01 np0005464214 systemd[1]: libpod-91335b325c78d3841064035b0f587f7c4d3604a377c56157ada05ce9ff519321.scope: Deactivated successfully.
Oct  1 10:16:01 np0005464214 podman[312438]: 2025-10-01 14:16:01.491816849 +0000 UTC m=+0.179627314 container died 91335b325c78d3841064035b0f587f7c4d3604a377c56157ada05ce9ff519321 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_bhabha, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Oct  1 10:16:01 np0005464214 systemd[1]: var-lib-containers-storage-overlay-e7f49d5144a7e355ff91a2210217a2a971986597c3f6f7381efd52e534e72d2f-merged.mount: Deactivated successfully.
Oct  1 10:16:01 np0005464214 podman[312438]: 2025-10-01 14:16:01.548179567 +0000 UTC m=+0.235990012 container remove 91335b325c78d3841064035b0f587f7c4d3604a377c56157ada05ce9ff519321 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_bhabha, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct  1 10:16:01 np0005464214 systemd[1]: libpod-conmon-91335b325c78d3841064035b0f587f7c4d3604a377c56157ada05ce9ff519321.scope: Deactivated successfully.
Oct  1 10:16:01 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2204: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:16:01 np0005464214 podman[312478]: 2025-10-01 14:16:01.758636318 +0000 UTC m=+0.060292974 container create 5e784cd2679917a23c82bc1c4f62c15971369729af31d07ebecf3babf7843e65 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_borg, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 10:16:01 np0005464214 systemd[1]: Started libpod-conmon-5e784cd2679917a23c82bc1c4f62c15971369729af31d07ebecf3babf7843e65.scope.
Oct  1 10:16:01 np0005464214 podman[312478]: 2025-10-01 14:16:01.736581418 +0000 UTC m=+0.038238084 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 10:16:01 np0005464214 systemd[1]: Started libcrun container.
Oct  1 10:16:01 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b65a7e6abf94e1fbe06259831a50419c1839bef7b748e4c394148b79f0f61e1b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 10:16:01 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b65a7e6abf94e1fbe06259831a50419c1839bef7b748e4c394148b79f0f61e1b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 10:16:01 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b65a7e6abf94e1fbe06259831a50419c1839bef7b748e4c394148b79f0f61e1b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 10:16:01 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b65a7e6abf94e1fbe06259831a50419c1839bef7b748e4c394148b79f0f61e1b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 10:16:01 np0005464214 podman[312478]: 2025-10-01 14:16:01.870660045 +0000 UTC m=+0.172316741 container init 5e784cd2679917a23c82bc1c4f62c15971369729af31d07ebecf3babf7843e65 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_borg, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 10:16:01 np0005464214 podman[312478]: 2025-10-01 14:16:01.884969999 +0000 UTC m=+0.186626625 container start 5e784cd2679917a23c82bc1c4f62c15971369729af31d07ebecf3babf7843e65 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_borg, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Oct  1 10:16:01 np0005464214 podman[312478]: 2025-10-01 14:16:01.888809031 +0000 UTC m=+0.190465747 container attach 5e784cd2679917a23c82bc1c4f62c15971369729af31d07ebecf3babf7843e65 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_borg, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 10:16:02 np0005464214 agitated_borg[312495]: {
Oct  1 10:16:02 np0005464214 agitated_borg[312495]:    "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982": {
Oct  1 10:16:02 np0005464214 agitated_borg[312495]:        "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 10:16:02 np0005464214 agitated_borg[312495]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  1 10:16:02 np0005464214 agitated_borg[312495]:        "osd_id": 0,
Oct  1 10:16:02 np0005464214 agitated_borg[312495]:        "osd_uuid": "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982",
Oct  1 10:16:02 np0005464214 agitated_borg[312495]:        "type": "bluestore"
Oct  1 10:16:02 np0005464214 agitated_borg[312495]:    },
Oct  1 10:16:02 np0005464214 agitated_borg[312495]:    "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9": {
Oct  1 10:16:02 np0005464214 agitated_borg[312495]:        "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 10:16:02 np0005464214 agitated_borg[312495]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  1 10:16:02 np0005464214 agitated_borg[312495]:        "osd_id": 2,
Oct  1 10:16:02 np0005464214 agitated_borg[312495]:        "osd_uuid": "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9",
Oct  1 10:16:02 np0005464214 agitated_borg[312495]:        "type": "bluestore"
Oct  1 10:16:02 np0005464214 agitated_borg[312495]:    },
Oct  1 10:16:02 np0005464214 agitated_borg[312495]:    "f5852bc7-e830-489a-b8a9-42dfbbe71426": {
Oct  1 10:16:02 np0005464214 agitated_borg[312495]:        "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 10:16:02 np0005464214 agitated_borg[312495]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  1 10:16:02 np0005464214 agitated_borg[312495]:        "osd_id": 1,
Oct  1 10:16:02 np0005464214 agitated_borg[312495]:        "osd_uuid": "f5852bc7-e830-489a-b8a9-42dfbbe71426",
Oct  1 10:16:02 np0005464214 agitated_borg[312495]:        "type": "bluestore"
Oct  1 10:16:02 np0005464214 agitated_borg[312495]:    }
Oct  1 10:16:02 np0005464214 agitated_borg[312495]: }
Oct  1 10:16:02 np0005464214 systemd[1]: libpod-5e784cd2679917a23c82bc1c4f62c15971369729af31d07ebecf3babf7843e65.scope: Deactivated successfully.
Oct  1 10:16:02 np0005464214 systemd[1]: libpod-5e784cd2679917a23c82bc1c4f62c15971369729af31d07ebecf3babf7843e65.scope: Consumed 1.112s CPU time.
Oct  1 10:16:02 np0005464214 podman[312478]: 2025-10-01 14:16:02.988013826 +0000 UTC m=+1.289670472 container died 5e784cd2679917a23c82bc1c4f62c15971369729af31d07ebecf3babf7843e65 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_borg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 10:16:03 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e197 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 10:16:03 np0005464214 systemd[1]: var-lib-containers-storage-overlay-b65a7e6abf94e1fbe06259831a50419c1839bef7b748e4c394148b79f0f61e1b-merged.mount: Deactivated successfully.
Oct  1 10:16:03 np0005464214 podman[312478]: 2025-10-01 14:16:03.134012791 +0000 UTC m=+1.435669417 container remove 5e784cd2679917a23c82bc1c4f62c15971369729af31d07ebecf3babf7843e65 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_borg, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 10:16:03 np0005464214 systemd[1]: libpod-conmon-5e784cd2679917a23c82bc1c4f62c15971369729af31d07ebecf3babf7843e65.scope: Deactivated successfully.
Oct  1 10:16:03 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  1 10:16:03 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 10:16:03 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  1 10:16:03 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 10:16:03 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev acb8d47b-2691-4538-8f51-6ffd7466b62e does not exist
Oct  1 10:16:03 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev 679255d6-ca09-495c-8b41-598276f9624d does not exist
Oct  1 10:16:03 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2205: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:16:04 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 10:16:04 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 10:16:05 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2206: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:16:07 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2207: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:16:08 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e197 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 10:16:09 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2208: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:16:11 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2209: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:16:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 14:16:12.342 161890 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 10:16:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 14:16:12.344 161890 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 10:16:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 14:16:12.344 161890 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 10:16:13 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e197 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 10:16:13 np0005464214 ceph-mon[74802]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #105. Immutable memtables: 0.
Oct  1 10:16:13 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:16:13.046284) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  1 10:16:13 np0005464214 ceph-mon[74802]: rocksdb: [db/flush_job.cc:856] [default] [JOB 61] Flushing memtable with next log file: 105
Oct  1 10:16:13 np0005464214 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759328173046376, "job": 61, "event": "flush_started", "num_memtables": 1, "num_entries": 525, "num_deletes": 250, "total_data_size": 517825, "memory_usage": 527072, "flush_reason": "Manual Compaction"}
Oct  1 10:16:13 np0005464214 ceph-mon[74802]: rocksdb: [db/flush_job.cc:885] [default] [JOB 61] Level-0 flush table #106: started
Oct  1 10:16:13 np0005464214 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759328173052157, "cf_name": "default", "job": 61, "event": "table_file_creation", "file_number": 106, "file_size": 381410, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 44378, "largest_seqno": 44902, "table_properties": {"data_size": 378716, "index_size": 730, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 901, "raw_key_size": 7330, "raw_average_key_size": 20, "raw_value_size": 373106, "raw_average_value_size": 1051, "num_data_blocks": 32, "num_entries": 355, "num_filter_entries": 355, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759328139, "oldest_key_time": 1759328139, "file_creation_time": 1759328173, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e1fb0bd7-c6bf-40f2-8bfb-ffd039289f42", "db_session_id": "NJZTWL88H5HSB4Q4NEC9", "orig_file_number": 106, "seqno_to_time_mapping": "N/A"}}
Oct  1 10:16:13 np0005464214 ceph-mon[74802]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 61] Flush lasted 5911 microseconds, and 2081 cpu microseconds.
Oct  1 10:16:13 np0005464214 ceph-mon[74802]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  1 10:16:13 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:16:13.052211) [db/flush_job.cc:967] [default] [JOB 61] Level-0 flush table #106: 381410 bytes OK
Oct  1 10:16:13 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:16:13.052227) [db/memtable_list.cc:519] [default] Level-0 commit table #106 started
Oct  1 10:16:13 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:16:13.057017) [db/memtable_list.cc:722] [default] Level-0 commit table #106: memtable #1 done
Oct  1 10:16:13 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:16:13.057032) EVENT_LOG_v1 {"time_micros": 1759328173057027, "job": 61, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  1 10:16:13 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:16:13.057070) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  1 10:16:13 np0005464214 ceph-mon[74802]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 61] Try to delete WAL files size 514812, prev total WAL file size 514812, number of live WAL files 2.
Oct  1 10:16:13 np0005464214 ceph-mon[74802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000102.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  1 10:16:13 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:16:13.057630) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740031373532' seq:72057594037927935, type:22 .. '6D6772737461740032303033' seq:0, type:0; will stop at (end)
Oct  1 10:16:13 np0005464214 ceph-mon[74802]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 62] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  1 10:16:13 np0005464214 ceph-mon[74802]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 61 Base level 0, inputs: [106(372KB)], [104(9688KB)]
Oct  1 10:16:13 np0005464214 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759328173057693, "job": 62, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [106], "files_L6": [104], "score": -1, "input_data_size": 10302248, "oldest_snapshot_seqno": -1}
Oct  1 10:16:13 np0005464214 ceph-mon[74802]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 62] Generated table #107: 5896 keys, 7130680 bytes, temperature: kUnknown
Oct  1 10:16:13 np0005464214 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759328173140482, "cf_name": "default", "job": 62, "event": "table_file_creation", "file_number": 107, "file_size": 7130680, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7093993, "index_size": 20833, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 14789, "raw_key_size": 154324, "raw_average_key_size": 26, "raw_value_size": 6989699, "raw_average_value_size": 1185, "num_data_blocks": 821, "num_entries": 5896, "num_filter_entries": 5896, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759324077, "oldest_key_time": 0, "file_creation_time": 1759328173, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e1fb0bd7-c6bf-40f2-8bfb-ffd039289f42", "db_session_id": "NJZTWL88H5HSB4Q4NEC9", "orig_file_number": 107, "seqno_to_time_mapping": "N/A"}}
Oct  1 10:16:13 np0005464214 ceph-mon[74802]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  1 10:16:13 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:16:13.140769) [db/compaction/compaction_job.cc:1663] [default] [JOB 62] Compacted 1@0 + 1@6 files to L6 => 7130680 bytes
Oct  1 10:16:13 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:16:13.150827) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 124.3 rd, 86.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.4, 9.5 +0.0 blob) out(6.8 +0.0 blob), read-write-amplify(45.7) write-amplify(18.7) OK, records in: 6398, records dropped: 502 output_compression: NoCompression
Oct  1 10:16:13 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:16:13.150854) EVENT_LOG_v1 {"time_micros": 1759328173150842, "job": 62, "event": "compaction_finished", "compaction_time_micros": 82862, "compaction_time_cpu_micros": 18117, "output_level": 6, "num_output_files": 1, "total_output_size": 7130680, "num_input_records": 6398, "num_output_records": 5896, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  1 10:16:13 np0005464214 ceph-mon[74802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000106.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  1 10:16:13 np0005464214 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759328173151116, "job": 62, "event": "table_file_deletion", "file_number": 106}
Oct  1 10:16:13 np0005464214 ceph-mon[74802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000104.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  1 10:16:13 np0005464214 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759328173153795, "job": 62, "event": "table_file_deletion", "file_number": 104}
Oct  1 10:16:13 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:16:13.057533) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 10:16:13 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:16:13.153869) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 10:16:13 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:16:13.153877) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 10:16:13 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:16:13.153879) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 10:16:13 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:16:13.153880) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 10:16:13 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:16:13.153882) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 10:16:13 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2210: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:16:15 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2211: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:16:17 np0005464214 podman[312595]: 2025-10-01 14:16:17.538843957 +0000 UTC m=+0.083360818 container health_status dfa509645741d50db256c392f2c7e50ef8a1653f296eb853aefbe3d2ba4746c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250923, tcib_build_tag=36bccb96575468ec919301205d8daa2c, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Oct  1 10:16:17 np0005464214 podman[312593]: 2025-10-01 14:16:17.551999694 +0000 UTC m=+0.094869993 container health_status a1dc94033b3cdaf0c1939938a446bc6911aa0e2bf0fa0c22c3e618390e8d96e1 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250923, org.label-schema.license=GPLv2, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3)
Oct  1 10:16:17 np0005464214 podman[312594]: 2025-10-01 14:16:17.562119625 +0000 UTC m=+0.105415667 container health_status c13c85560319b246b9406aed1b6b3015b2c424d3ffff37d0301b6634f5976b0d (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20250923, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid)
Oct  1 10:16:17 np0005464214 podman[312592]: 2025-10-01 14:16:17.594317818 +0000 UTC m=+0.137882659 container health_status 583cde33fa111e3f81ff9a7adf51b7bee43cd123d54d60383b2d474b3b5066ad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20250923, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c)
Oct  1 10:16:17 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2212: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:16:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 10:16:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 10:16:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 10:16:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 10:16:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 10:16:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 10:16:18 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e197 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 10:16:19 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2213: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:16:21 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2214: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:16:23 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e197 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 10:16:23 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2215: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:16:25 np0005464214 nova_compute[260022]: 2025-10-01 14:16:25.369 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 10:16:25 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2216: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:16:27 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2217: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:16:28 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e197 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 10:16:29 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2218: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:16:31 np0005464214 nova_compute[260022]: 2025-10-01 14:16:31.344 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 10:16:31 np0005464214 nova_compute[260022]: 2025-10-01 14:16:31.387 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 10:16:31 np0005464214 nova_compute[260022]: 2025-10-01 14:16:31.388 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 10:16:31 np0005464214 nova_compute[260022]: 2025-10-01 14:16:31.388 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 10:16:31 np0005464214 nova_compute[260022]: 2025-10-01 14:16:31.389 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  1 10:16:31 np0005464214 nova_compute[260022]: 2025-10-01 14:16:31.389 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 10:16:31 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2219: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:16:31 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  1 10:16:31 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3902463455' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  1 10:16:31 np0005464214 nova_compute[260022]: 2025-10-01 14:16:31.854 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 10:16:32 np0005464214 nova_compute[260022]: 2025-10-01 14:16:32.031 2 WARNING nova.virt.libvirt.driver [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  1 10:16:32 np0005464214 nova_compute[260022]: 2025-10-01 14:16:32.032 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5013MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  1 10:16:32 np0005464214 nova_compute[260022]: 2025-10-01 14:16:32.033 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 10:16:32 np0005464214 nova_compute[260022]: 2025-10-01 14:16:32.033 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 10:16:32 np0005464214 nova_compute[260022]: 2025-10-01 14:16:32.222 2 INFO nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Instance 3d91db2b-812b-47ea-a0f8-0384b4c68597 has allocations against this compute host but is not found in the database.#033[00m
Oct  1 10:16:32 np0005464214 nova_compute[260022]: 2025-10-01 14:16:32.240 2 INFO nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Instance 84424f30-80d3-425e-b60f-86809ad3c076 has allocations against this compute host but is not found in the database.#033[00m
Oct  1 10:16:32 np0005464214 nova_compute[260022]: 2025-10-01 14:16:32.241 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  1 10:16:32 np0005464214 nova_compute[260022]: 2025-10-01 14:16:32.241 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  1 10:16:32 np0005464214 nova_compute[260022]: 2025-10-01 14:16:32.264 2 DEBUG nova.scheduler.client.report [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Refreshing inventories for resource provider c1b9017d-7e6f-44ea-9ee2-bc19313d736f _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Oct  1 10:16:32 np0005464214 nova_compute[260022]: 2025-10-01 14:16:32.412 2 DEBUG nova.scheduler.client.report [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Updating ProviderTree inventory for provider c1b9017d-7e6f-44ea-9ee2-bc19313d736f from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Oct  1 10:16:32 np0005464214 nova_compute[260022]: 2025-10-01 14:16:32.412 2 DEBUG nova.compute.provider_tree [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Updating inventory in ProviderTree for provider c1b9017d-7e6f-44ea-9ee2-bc19313d736f with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Oct  1 10:16:32 np0005464214 nova_compute[260022]: 2025-10-01 14:16:32.430 2 DEBUG nova.scheduler.client.report [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Refreshing aggregate associations for resource provider c1b9017d-7e6f-44ea-9ee2-bc19313d736f, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Oct  1 10:16:32 np0005464214 nova_compute[260022]: 2025-10-01 14:16:32.462 2 DEBUG nova.scheduler.client.report [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Refreshing trait associations for resource provider c1b9017d-7e6f-44ea-9ee2-bc19313d736f, traits: HW_CPU_X86_SSE42,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_BMI,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_SSE2,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_BMI2,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_RESCUE_BFV,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NODE,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_F16C,HW_CPU_X86_MMX,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_SSE41,COMPUTE_DEVICE_TAGGING,COMPUTE_VOLUME_EXTEND,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_AVX,HW_CPU_X86_ABM,HW_CPU_X86_CLMUL,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_SVM,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_SECURITY_TPM_2_0,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_STORAGE_BUS_FDC,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_AESNI,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_ACCELERATORS,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_AMD_SVM,HW_CPU_X86_AVX2,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_FMA3,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_SSE,HW_CPU_X86_SHA,HW_CPU_X86_SSSE3,HW_CPU_X86_SSE4A _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Oct  1 10:16:32 np0005464214 nova_compute[260022]: 2025-10-01 14:16:32.512 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 10:16:32 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  1 10:16:32 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2322114946' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  1 10:16:32 np0005464214 nova_compute[260022]: 2025-10-01 14:16:32.944 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.432s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 10:16:32 np0005464214 nova_compute[260022]: 2025-10-01 14:16:32.952 2 DEBUG nova.compute.provider_tree [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Inventory has not changed in ProviderTree for provider: c1b9017d-7e6f-44ea-9ee2-bc19313d736f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  1 10:16:33 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e197 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 10:16:33 np0005464214 nova_compute[260022]: 2025-10-01 14:16:33.087 2 DEBUG nova.scheduler.client.report [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Inventory has not changed for provider c1b9017d-7e6f-44ea-9ee2-bc19313d736f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct  1 10:16:33 np0005464214 nova_compute[260022]: 2025-10-01 14:16:33.090 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct  1 10:16:33 np0005464214 nova_compute[260022]: 2025-10-01 14:16:33.091 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.057s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  1 10:16:33 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2220: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:16:35 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2221: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:16:37 np0005464214 nova_compute[260022]: 2025-10-01 14:16:37.088 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  1 10:16:37 np0005464214 nova_compute[260022]: 2025-10-01 14:16:37.088 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  1 10:16:37 np0005464214 nova_compute[260022]: 2025-10-01 14:16:37.089 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct  1 10:16:37 np0005464214 nova_compute[260022]: 2025-10-01 14:16:37.345 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  1 10:16:37 np0005464214 nova_compute[260022]: 2025-10-01 14:16:37.346 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  1 10:16:37 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2222: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:16:38 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e197 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 10:16:39 np0005464214 systemd-logind[818]: New session 54 of user zuul.
Oct  1 10:16:39 np0005464214 systemd[1]: Started Session 54 of User zuul.
Oct  1 10:16:39 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2223: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:16:39 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 14:16:39.809 161890 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=32, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'd2:60:33', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '86:05:a5:2a:6f:f1'}, ipsec=False) old=SB_Global(nb_cfg=31) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct  1 10:16:39 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 14:16:39.812 161890 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct  1 10:16:40 np0005464214 systemd[1]: Reloading.
Oct  1 10:16:40 np0005464214 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  1 10:16:40 np0005464214 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  1 10:16:40 np0005464214 systemd[1]: Reloading.
Oct  1 10:16:41 np0005464214 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  1 10:16:41 np0005464214 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  1 10:16:41 np0005464214 systemd[1]: Starting Podman API Socket...
Oct  1 10:16:41 np0005464214 systemd[1]: Listening on Podman API Socket.
Oct  1 10:16:41 np0005464214 nova_compute[260022]: 2025-10-01 14:16:41.346 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  1 10:16:41 np0005464214 nova_compute[260022]: 2025-10-01 14:16:41.347 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct  1 10:16:41 np0005464214 nova_compute[260022]: 2025-10-01 14:16:41.347 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct  1 10:16:41 np0005464214 nova_compute[260022]: 2025-10-01 14:16:41.360 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct  1 10:16:41 np0005464214 dbus-broker-launch[786]: avc:  op=setenforce lsm=selinux enforcing=0 res=1
Oct  1 10:16:41 np0005464214 systemd[1]: podman.socket: Deactivated successfully.
Oct  1 10:16:41 np0005464214 systemd[1]: Closed Podman API Socket.
Oct  1 10:16:41 np0005464214 systemd[1]: Stopping Podman API Socket...
Oct  1 10:16:41 np0005464214 systemd[1]: Starting Podman API Socket...
Oct  1 10:16:41 np0005464214 systemd[1]: Listening on Podman API Socket.
Oct  1 10:16:41 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2224: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:16:41 np0005464214 systemd-logind[818]: New session 55 of user zuul.
Oct  1 10:16:41 np0005464214 systemd[1]: Started Session 55 of User zuul.
Oct  1 10:16:41 np0005464214 systemd[1]: Starting Podman API Service...
Oct  1 10:16:41 np0005464214 systemd[1]: Started Podman API Service.
Oct  1 10:16:41 np0005464214 podman[312950]: time="2025-10-01T14:16:41Z" level=info msg="/usr/bin/podman filtering at log level info"
Oct  1 10:16:41 np0005464214 podman[312950]: time="2025-10-01T14:16:41Z" level=info msg="Setting parallel job count to 25"
Oct  1 10:16:41 np0005464214 podman[312950]: time="2025-10-01T14:16:41Z" level=info msg="Using sqlite as database backend"
Oct  1 10:16:41 np0005464214 podman[312950]: time="2025-10-01T14:16:41Z" level=info msg="Not using native diff for overlay, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled"
Oct  1 10:16:41 np0005464214 podman[312950]: time="2025-10-01T14:16:41Z" level=info msg="Using systemd socket activation to determine API endpoint"
Oct  1 10:16:41 np0005464214 podman[312950]: time="2025-10-01T14:16:41Z" level=info msg="API service listening on \"/run/podman/podman.sock\". URI: \"unix:///run/podman/podman.sock\""
Oct  1 10:16:41 np0005464214 podman[312950]: @ - - [01/Oct/2025:14:16:41 +0000] "HEAD /v4.7.0/libpod/_ping HTTP/1.1" 200 0 "" "PodmanPy/4.7.0 (API v4.7.0; Compatible v1.40)"
Oct  1 10:16:41 np0005464214 podman[312950]: @ - - [01/Oct/2025:14:16:41 +0000] "GET /v4.7.0/libpod/containers/json HTTP/1.1" 200 27464 "" "PodmanPy/4.7.0 (API v4.7.0; Compatible v1.40)"
Oct  1 10:16:43 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e197 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 10:16:43 np0005464214 nova_compute[260022]: 2025-10-01 14:16:43.345 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  1 10:16:43 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2225: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:16:44 np0005464214 nova_compute[260022]: 2025-10-01 14:16:44.347 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  1 10:16:45 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2226: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:16:45 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 14:16:45.814 161890 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=7280030e-2ba6-406c-9fae-f8284a927c47, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '32'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct  1 10:16:47 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2227: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:16:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 10:16:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 10:16:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 10:16:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 10:16:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 10:16:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 10:16:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] Optimize plan auto_2025-10-01_14:16:47
Oct  1 10:16:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  1 10:16:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] do_upmap
Oct  1 10:16:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'default.rgw.log', '.mgr', 'cephfs.cephfs.data', 'backups', 'default.rgw.meta', '.rgw.root', 'vms', 'images', 'default.rgw.control', 'volumes']
Oct  1 10:16:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] prepared 0/10 changes
Oct  1 10:16:48 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e197 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 10:16:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  1 10:16:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  1 10:16:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  1 10:16:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  1 10:16:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  1 10:16:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  1 10:16:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  1 10:16:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  1 10:16:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  1 10:16:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  1 10:16:48 np0005464214 podman[312988]: 2025-10-01 14:16:48.516158254 +0000 UTC m=+0.068472306 container health_status c13c85560319b246b9406aed1b6b3015b2c424d3ffff37d0301b6634f5976b0d (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20250923, tcib_build_tag=36bccb96575468ec919301205d8daa2c, container_name=iscsid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct  1 10:16:48 np0005464214 podman[312987]: 2025-10-01 14:16:48.517570098 +0000 UTC m=+0.073598528 container health_status a1dc94033b3cdaf0c1939938a446bc6911aa0e2bf0fa0c22c3e618390e8d96e1 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20250923, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 10:16:48 np0005464214 podman[312989]: 2025-10-01 14:16:48.51919261 +0000 UTC m=+0.070962165 container health_status dfa509645741d50db256c392f2c7e50ef8a1653f296eb853aefbe3d2ba4746c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250923, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct  1 10:16:48 np0005464214 podman[312986]: 2025-10-01 14:16:48.564570461 +0000 UTC m=+0.117790312 container health_status 583cde33fa111e3f81ff9a7adf51b7bee43cd123d54d60383b2d474b3b5066ad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=36bccb96575468ec919301205d8daa2c, org.label-schema.build-date=20250923, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3)
Oct  1 10:16:49 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2228: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:16:51 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2229: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:16:53 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e197 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 10:16:53 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2230: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:16:55 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  1 10:16:55 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/616910395' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  1 10:16:55 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  1 10:16:55 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/616910395' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  1 10:16:55 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2231: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:16:56 np0005464214 podman[312950]: time="2025-10-01T14:16:56Z" level=info msg="Received shutdown.Stop(), terminating!" PID=312950
Oct  1 10:16:56 np0005464214 systemd[1]: podman.service: Deactivated successfully.
Oct  1 10:16:57 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2232: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:16:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] _maybe_adjust
Oct  1 10:16:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 10:16:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  1 10:16:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 10:16:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 10:16:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 10:16:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 10:16:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 10:16:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 10:16:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 10:16:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Oct  1 10:16:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 10:16:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct  1 10:16:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 10:16:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 10:16:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 10:16:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  1 10:16:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 10:16:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  1 10:16:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 10:16:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 10:16:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 10:16:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  1 10:16:58 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e197 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 10:16:59 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2233: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:17:01 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2234: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:17:03 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e197 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 10:17:03 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2235: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:17:04 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  1 10:17:04 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  1 10:17:04 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  1 10:17:04 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  1 10:17:04 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  1 10:17:04 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 10:17:04 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev 91b34b8b-3ba1-4f4c-96c3-aed2458a759e does not exist
Oct  1 10:17:04 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev f73ee4b9-baf7-4989-a4ec-907a0d0b152c does not exist
Oct  1 10:17:04 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev 4bd58f0a-af9a-44d8-b2c8-369f26c2545b does not exist
Oct  1 10:17:04 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  1 10:17:04 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  1 10:17:04 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  1 10:17:04 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  1 10:17:04 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  1 10:17:04 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  1 10:17:05 np0005464214 podman[313340]: 2025-10-01 14:17:05.083450362 +0000 UTC m=+0.066067950 container create 7643401a5e848250645abb7a3d69644d9fb0480203be8e763409d1d138e6e0ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_driscoll, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Oct  1 10:17:05 np0005464214 systemd[1]: Started libpod-conmon-7643401a5e848250645abb7a3d69644d9fb0480203be8e763409d1d138e6e0ab.scope.
Oct  1 10:17:05 np0005464214 podman[313340]: 2025-10-01 14:17:05.054486512 +0000 UTC m=+0.037104140 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 10:17:05 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  1 10:17:05 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 10:17:05 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  1 10:17:05 np0005464214 systemd[1]: Started libcrun container.
Oct  1 10:17:05 np0005464214 podman[313340]: 2025-10-01 14:17:05.215565079 +0000 UTC m=+0.198182657 container init 7643401a5e848250645abb7a3d69644d9fb0480203be8e763409d1d138e6e0ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_driscoll, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 10:17:05 np0005464214 podman[313340]: 2025-10-01 14:17:05.227569019 +0000 UTC m=+0.210186597 container start 7643401a5e848250645abb7a3d69644d9fb0480203be8e763409d1d138e6e0ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_driscoll, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 10:17:05 np0005464214 podman[313340]: 2025-10-01 14:17:05.231578287 +0000 UTC m=+0.214195865 container attach 7643401a5e848250645abb7a3d69644d9fb0480203be8e763409d1d138e6e0ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_driscoll, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Oct  1 10:17:05 np0005464214 gifted_driscoll[313381]: 167 167
Oct  1 10:17:05 np0005464214 systemd[1]: libpod-7643401a5e848250645abb7a3d69644d9fb0480203be8e763409d1d138e6e0ab.scope: Deactivated successfully.
Oct  1 10:17:05 np0005464214 podman[313340]: 2025-10-01 14:17:05.237554267 +0000 UTC m=+0.220171845 container died 7643401a5e848250645abb7a3d69644d9fb0480203be8e763409d1d138e6e0ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_driscoll, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 10:17:05 np0005464214 systemd[1]: var-lib-containers-storage-overlay-20750985d6fade45db7795d25305daf5f7173803a432d577d3cd183c28bd86a2-merged.mount: Deactivated successfully.
Oct  1 10:17:05 np0005464214 podman[313340]: 2025-10-01 14:17:05.284643322 +0000 UTC m=+0.267260900 container remove 7643401a5e848250645abb7a3d69644d9fb0480203be8e763409d1d138e6e0ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_driscoll, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 10:17:05 np0005464214 systemd[1]: libpod-conmon-7643401a5e848250645abb7a3d69644d9fb0480203be8e763409d1d138e6e0ab.scope: Deactivated successfully.
Oct  1 10:17:05 np0005464214 podman[313432]: 2025-10-01 14:17:05.505882839 +0000 UTC m=+0.056180165 container create 43a3a3205ff753c84711f65629ebe604c2bbe7f04a44882f289be8564a4d244f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_leakey, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 10:17:05 np0005464214 systemd[1]: Started libpod-conmon-43a3a3205ff753c84711f65629ebe604c2bbe7f04a44882f289be8564a4d244f.scope.
Oct  1 10:17:05 np0005464214 podman[313432]: 2025-10-01 14:17:05.484635385 +0000 UTC m=+0.034932711 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 10:17:05 np0005464214 systemd[1]: Started libcrun container.
Oct  1 10:17:05 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c496d642a57eccfd9d549ea62770683131ab93ba698ea8ffbfdb1e73a2e861d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 10:17:05 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c496d642a57eccfd9d549ea62770683131ab93ba698ea8ffbfdb1e73a2e861d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 10:17:05 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c496d642a57eccfd9d549ea62770683131ab93ba698ea8ffbfdb1e73a2e861d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 10:17:05 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c496d642a57eccfd9d549ea62770683131ab93ba698ea8ffbfdb1e73a2e861d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 10:17:05 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c496d642a57eccfd9d549ea62770683131ab93ba698ea8ffbfdb1e73a2e861d/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  1 10:17:05 np0005464214 podman[313432]: 2025-10-01 14:17:05.609972035 +0000 UTC m=+0.160269361 container init 43a3a3205ff753c84711f65629ebe604c2bbe7f04a44882f289be8564a4d244f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_leakey, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct  1 10:17:05 np0005464214 podman[313432]: 2025-10-01 14:17:05.623847506 +0000 UTC m=+0.174144802 container start 43a3a3205ff753c84711f65629ebe604c2bbe7f04a44882f289be8564a4d244f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_leakey, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 10:17:05 np0005464214 podman[313432]: 2025-10-01 14:17:05.627277755 +0000 UTC m=+0.177575051 container attach 43a3a3205ff753c84711f65629ebe604c2bbe7f04a44882f289be8564a4d244f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_leakey, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 10:17:05 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2236: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:17:06 np0005464214 inspiring_leakey[313448]: --> passed data devices: 0 physical, 3 LVM
Oct  1 10:17:06 np0005464214 inspiring_leakey[313448]: --> relative data size: 1.0
Oct  1 10:17:06 np0005464214 inspiring_leakey[313448]: --> All data devices are unavailable
Oct  1 10:17:06 np0005464214 systemd[1]: libpod-43a3a3205ff753c84711f65629ebe604c2bbe7f04a44882f289be8564a4d244f.scope: Deactivated successfully.
Oct  1 10:17:06 np0005464214 systemd[1]: libpod-43a3a3205ff753c84711f65629ebe604c2bbe7f04a44882f289be8564a4d244f.scope: Consumed 1.148s CPU time.
Oct  1 10:17:06 np0005464214 podman[313477]: 2025-10-01 14:17:06.884195535 +0000 UTC m=+0.044825935 container died 43a3a3205ff753c84711f65629ebe604c2bbe7f04a44882f289be8564a4d244f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_leakey, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct  1 10:17:06 np0005464214 systemd[1]: var-lib-containers-storage-overlay-1c496d642a57eccfd9d549ea62770683131ab93ba698ea8ffbfdb1e73a2e861d-merged.mount: Deactivated successfully.
Oct  1 10:17:06 np0005464214 podman[313477]: 2025-10-01 14:17:06.949123557 +0000 UTC m=+0.109753957 container remove 43a3a3205ff753c84711f65629ebe604c2bbe7f04a44882f289be8564a4d244f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_leakey, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef)
Oct  1 10:17:06 np0005464214 systemd[1]: libpod-conmon-43a3a3205ff753c84711f65629ebe604c2bbe7f04a44882f289be8564a4d244f.scope: Deactivated successfully.
Oct  1 10:17:07 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2237: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:17:07 np0005464214 podman[313635]: 2025-10-01 14:17:07.843356588 +0000 UTC m=+0.071840922 container create 4aa24c51bf8558f471a3624efb1d1b07351b1d7ea05642d117b38466e52812db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_bose, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct  1 10:17:07 np0005464214 systemd[1]: Started libpod-conmon-4aa24c51bf8558f471a3624efb1d1b07351b1d7ea05642d117b38466e52812db.scope.
Oct  1 10:17:07 np0005464214 podman[313635]: 2025-10-01 14:17:07.812976044 +0000 UTC m=+0.041460448 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 10:17:07 np0005464214 systemd[1]: Started libcrun container.
Oct  1 10:17:07 np0005464214 podman[313635]: 2025-10-01 14:17:07.944461049 +0000 UTC m=+0.172945433 container init 4aa24c51bf8558f471a3624efb1d1b07351b1d7ea05642d117b38466e52812db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_bose, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Oct  1 10:17:07 np0005464214 podman[313635]: 2025-10-01 14:17:07.956657777 +0000 UTC m=+0.185142121 container start 4aa24c51bf8558f471a3624efb1d1b07351b1d7ea05642d117b38466e52812db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_bose, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 10:17:07 np0005464214 podman[313635]: 2025-10-01 14:17:07.960673054 +0000 UTC m=+0.189157438 container attach 4aa24c51bf8558f471a3624efb1d1b07351b1d7ea05642d117b38466e52812db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_bose, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 10:17:07 np0005464214 nervous_bose[313652]: 167 167
Oct  1 10:17:07 np0005464214 systemd[1]: libpod-4aa24c51bf8558f471a3624efb1d1b07351b1d7ea05642d117b38466e52812db.scope: Deactivated successfully.
Oct  1 10:17:07 np0005464214 conmon[313652]: conmon 4aa24c51bf8558f471a3 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-4aa24c51bf8558f471a3624efb1d1b07351b1d7ea05642d117b38466e52812db.scope/container/memory.events
Oct  1 10:17:07 np0005464214 podman[313635]: 2025-10-01 14:17:07.967196791 +0000 UTC m=+0.195681125 container died 4aa24c51bf8558f471a3624efb1d1b07351b1d7ea05642d117b38466e52812db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_bose, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 10:17:08 np0005464214 systemd[1]: var-lib-containers-storage-overlay-6cc6b2d8c4d0b0506536f12842e94e6625432de13fee071f4f31d7bc825800b5-merged.mount: Deactivated successfully.
Oct  1 10:17:08 np0005464214 podman[313635]: 2025-10-01 14:17:08.013534073 +0000 UTC m=+0.242018377 container remove 4aa24c51bf8558f471a3624efb1d1b07351b1d7ea05642d117b38466e52812db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_bose, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 10:17:08 np0005464214 systemd[1]: libpod-conmon-4aa24c51bf8558f471a3624efb1d1b07351b1d7ea05642d117b38466e52812db.scope: Deactivated successfully.
Oct  1 10:17:08 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e197 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 10:17:08 np0005464214 podman[313676]: 2025-10-01 14:17:08.25555053 +0000 UTC m=+0.059414779 container create 6a55facb277a400dace2f919c8643261c468ee5b3e2373299e2e00df83f554d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_colden, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 10:17:08 np0005464214 systemd[1]: Started libpod-conmon-6a55facb277a400dace2f919c8643261c468ee5b3e2373299e2e00df83f554d1.scope.
Oct  1 10:17:08 np0005464214 podman[313676]: 2025-10-01 14:17:08.227035944 +0000 UTC m=+0.030900293 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 10:17:08 np0005464214 systemd[1]: Started libcrun container.
Oct  1 10:17:08 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf00eae6ca2f6b1eeddf5845b818056937c628aafbc46c727486a907c4ec0f63/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 10:17:08 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf00eae6ca2f6b1eeddf5845b818056937c628aafbc46c727486a907c4ec0f63/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 10:17:08 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf00eae6ca2f6b1eeddf5845b818056937c628aafbc46c727486a907c4ec0f63/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 10:17:08 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf00eae6ca2f6b1eeddf5845b818056937c628aafbc46c727486a907c4ec0f63/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 10:17:08 np0005464214 podman[313676]: 2025-10-01 14:17:08.379959491 +0000 UTC m=+0.183823850 container init 6a55facb277a400dace2f919c8643261c468ee5b3e2373299e2e00df83f554d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_colden, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct  1 10:17:08 np0005464214 podman[313676]: 2025-10-01 14:17:08.399949796 +0000 UTC m=+0.203814055 container start 6a55facb277a400dace2f919c8643261c468ee5b3e2373299e2e00df83f554d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_colden, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct  1 10:17:08 np0005464214 podman[313676]: 2025-10-01 14:17:08.404661775 +0000 UTC m=+0.208526064 container attach 6a55facb277a400dace2f919c8643261c468ee5b3e2373299e2e00df83f554d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_colden, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  1 10:17:08 np0005464214 systemd[1]: session-54.scope: Deactivated successfully.
Oct  1 10:17:08 np0005464214 systemd[1]: session-54.scope: Consumed 1.507s CPU time.
Oct  1 10:17:08 np0005464214 systemd-logind[818]: Session 54 logged out. Waiting for processes to exit.
Oct  1 10:17:08 np0005464214 systemd-logind[818]: Removed session 54.
Oct  1 10:17:09 np0005464214 stoic_colden[313692]: {
Oct  1 10:17:09 np0005464214 stoic_colden[313692]:    "0": [
Oct  1 10:17:09 np0005464214 stoic_colden[313692]:        {
Oct  1 10:17:09 np0005464214 stoic_colden[313692]:            "devices": [
Oct  1 10:17:09 np0005464214 stoic_colden[313692]:                "/dev/loop3"
Oct  1 10:17:09 np0005464214 stoic_colden[313692]:            ],
Oct  1 10:17:09 np0005464214 stoic_colden[313692]:            "lv_name": "ceph_lv0",
Oct  1 10:17:09 np0005464214 stoic_colden[313692]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  1 10:17:09 np0005464214 stoic_colden[313692]:            "lv_size": "21470642176",
Oct  1 10:17:09 np0005464214 stoic_colden[313692]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 10:17:09 np0005464214 stoic_colden[313692]:            "lv_uuid": "wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS",
Oct  1 10:17:09 np0005464214 stoic_colden[313692]:            "name": "ceph_lv0",
Oct  1 10:17:09 np0005464214 stoic_colden[313692]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  1 10:17:09 np0005464214 stoic_colden[313692]:            "tags": {
Oct  1 10:17:09 np0005464214 stoic_colden[313692]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  1 10:17:09 np0005464214 stoic_colden[313692]:                "ceph.block_uuid": "wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS",
Oct  1 10:17:09 np0005464214 stoic_colden[313692]:                "ceph.cephx_lockbox_secret": "",
Oct  1 10:17:09 np0005464214 stoic_colden[313692]:                "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 10:17:09 np0005464214 stoic_colden[313692]:                "ceph.cluster_name": "ceph",
Oct  1 10:17:09 np0005464214 stoic_colden[313692]:                "ceph.crush_device_class": "",
Oct  1 10:17:09 np0005464214 stoic_colden[313692]:                "ceph.encrypted": "0",
Oct  1 10:17:09 np0005464214 stoic_colden[313692]:                "ceph.osd_fsid": "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982",
Oct  1 10:17:09 np0005464214 stoic_colden[313692]:                "ceph.osd_id": "0",
Oct  1 10:17:09 np0005464214 stoic_colden[313692]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 10:17:09 np0005464214 stoic_colden[313692]:                "ceph.type": "block",
Oct  1 10:17:09 np0005464214 stoic_colden[313692]:                "ceph.vdo": "0"
Oct  1 10:17:09 np0005464214 stoic_colden[313692]:            },
Oct  1 10:17:09 np0005464214 stoic_colden[313692]:            "type": "block",
Oct  1 10:17:09 np0005464214 stoic_colden[313692]:            "vg_name": "ceph_vg0"
Oct  1 10:17:09 np0005464214 stoic_colden[313692]:        }
Oct  1 10:17:09 np0005464214 stoic_colden[313692]:    ],
Oct  1 10:17:09 np0005464214 stoic_colden[313692]:    "1": [
Oct  1 10:17:09 np0005464214 stoic_colden[313692]:        {
Oct  1 10:17:09 np0005464214 stoic_colden[313692]:            "devices": [
Oct  1 10:17:09 np0005464214 stoic_colden[313692]:                "/dev/loop4"
Oct  1 10:17:09 np0005464214 stoic_colden[313692]:            ],
Oct  1 10:17:09 np0005464214 stoic_colden[313692]:            "lv_name": "ceph_lv1",
Oct  1 10:17:09 np0005464214 stoic_colden[313692]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  1 10:17:09 np0005464214 stoic_colden[313692]:            "lv_size": "21470642176",
Oct  1 10:17:09 np0005464214 stoic_colden[313692]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f5852bc7-e830-489a-b8a9-42dfbbe71426,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 10:17:09 np0005464214 stoic_colden[313692]:            "lv_uuid": "BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY",
Oct  1 10:17:09 np0005464214 stoic_colden[313692]:            "name": "ceph_lv1",
Oct  1 10:17:09 np0005464214 stoic_colden[313692]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  1 10:17:09 np0005464214 stoic_colden[313692]:            "tags": {
Oct  1 10:17:09 np0005464214 stoic_colden[313692]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  1 10:17:09 np0005464214 stoic_colden[313692]:                "ceph.block_uuid": "BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY",
Oct  1 10:17:09 np0005464214 stoic_colden[313692]:                "ceph.cephx_lockbox_secret": "",
Oct  1 10:17:09 np0005464214 stoic_colden[313692]:                "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 10:17:09 np0005464214 stoic_colden[313692]:                "ceph.cluster_name": "ceph",
Oct  1 10:17:09 np0005464214 stoic_colden[313692]:                "ceph.crush_device_class": "",
Oct  1 10:17:09 np0005464214 stoic_colden[313692]:                "ceph.encrypted": "0",
Oct  1 10:17:09 np0005464214 stoic_colden[313692]:                "ceph.osd_fsid": "f5852bc7-e830-489a-b8a9-42dfbbe71426",
Oct  1 10:17:09 np0005464214 stoic_colden[313692]:                "ceph.osd_id": "1",
Oct  1 10:17:09 np0005464214 stoic_colden[313692]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 10:17:09 np0005464214 stoic_colden[313692]:                "ceph.type": "block",
Oct  1 10:17:09 np0005464214 stoic_colden[313692]:                "ceph.vdo": "0"
Oct  1 10:17:09 np0005464214 stoic_colden[313692]:            },
Oct  1 10:17:09 np0005464214 stoic_colden[313692]:            "type": "block",
Oct  1 10:17:09 np0005464214 stoic_colden[313692]:            "vg_name": "ceph_vg1"
Oct  1 10:17:09 np0005464214 stoic_colden[313692]:        }
Oct  1 10:17:09 np0005464214 stoic_colden[313692]:    ],
Oct  1 10:17:09 np0005464214 stoic_colden[313692]:    "2": [
Oct  1 10:17:09 np0005464214 stoic_colden[313692]:        {
Oct  1 10:17:09 np0005464214 stoic_colden[313692]:            "devices": [
Oct  1 10:17:09 np0005464214 stoic_colden[313692]:                "/dev/loop5"
Oct  1 10:17:09 np0005464214 stoic_colden[313692]:            ],
Oct  1 10:17:09 np0005464214 stoic_colden[313692]:            "lv_name": "ceph_lv2",
Oct  1 10:17:09 np0005464214 stoic_colden[313692]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  1 10:17:09 np0005464214 stoic_colden[313692]:            "lv_size": "21470642176",
Oct  1 10:17:09 np0005464214 stoic_colden[313692]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c4c937e2-a8a8-47c3-af37-fdedb6fff1f9,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 10:17:09 np0005464214 stoic_colden[313692]:            "lv_uuid": "mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK",
Oct  1 10:17:09 np0005464214 stoic_colden[313692]:            "name": "ceph_lv2",
Oct  1 10:17:09 np0005464214 stoic_colden[313692]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  1 10:17:09 np0005464214 stoic_colden[313692]:            "tags": {
Oct  1 10:17:09 np0005464214 stoic_colden[313692]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  1 10:17:09 np0005464214 stoic_colden[313692]:                "ceph.block_uuid": "mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK",
Oct  1 10:17:09 np0005464214 stoic_colden[313692]:                "ceph.cephx_lockbox_secret": "",
Oct  1 10:17:09 np0005464214 stoic_colden[313692]:                "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 10:17:09 np0005464214 stoic_colden[313692]:                "ceph.cluster_name": "ceph",
Oct  1 10:17:09 np0005464214 stoic_colden[313692]:                "ceph.crush_device_class": "",
Oct  1 10:17:09 np0005464214 stoic_colden[313692]:                "ceph.encrypted": "0",
Oct  1 10:17:09 np0005464214 stoic_colden[313692]:                "ceph.osd_fsid": "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9",
Oct  1 10:17:09 np0005464214 stoic_colden[313692]:                "ceph.osd_id": "2",
Oct  1 10:17:09 np0005464214 stoic_colden[313692]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 10:17:09 np0005464214 stoic_colden[313692]:                "ceph.type": "block",
Oct  1 10:17:09 np0005464214 stoic_colden[313692]:                "ceph.vdo": "0"
Oct  1 10:17:09 np0005464214 stoic_colden[313692]:            },
Oct  1 10:17:09 np0005464214 stoic_colden[313692]:            "type": "block",
Oct  1 10:17:09 np0005464214 stoic_colden[313692]:            "vg_name": "ceph_vg2"
Oct  1 10:17:09 np0005464214 stoic_colden[313692]:        }
Oct  1 10:17:09 np0005464214 stoic_colden[313692]:    ]
Oct  1 10:17:09 np0005464214 stoic_colden[313692]: }
Oct  1 10:17:09 np0005464214 systemd[1]: libpod-6a55facb277a400dace2f919c8643261c468ee5b3e2373299e2e00df83f554d1.scope: Deactivated successfully.
Oct  1 10:17:09 np0005464214 podman[313676]: 2025-10-01 14:17:09.261002304 +0000 UTC m=+1.064866593 container died 6a55facb277a400dace2f919c8643261c468ee5b3e2373299e2e00df83f554d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_colden, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 10:17:09 np0005464214 systemd[1]: var-lib-containers-storage-overlay-bf00eae6ca2f6b1eeddf5845b818056937c628aafbc46c727486a907c4ec0f63-merged.mount: Deactivated successfully.
Oct  1 10:17:09 np0005464214 podman[313676]: 2025-10-01 14:17:09.332211195 +0000 UTC m=+1.136075444 container remove 6a55facb277a400dace2f919c8643261c468ee5b3e2373299e2e00df83f554d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_colden, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct  1 10:17:09 np0005464214 systemd[1]: libpod-conmon-6a55facb277a400dace2f919c8643261c468ee5b3e2373299e2e00df83f554d1.scope: Deactivated successfully.
Oct  1 10:17:09 np0005464214 nova_compute[260022]: 2025-10-01 14:17:09.340 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 10:17:09 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2238: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:17:10 np0005464214 systemd[1]: session-55.scope: Deactivated successfully.
Oct  1 10:17:10 np0005464214 systemd-logind[818]: Session 55 logged out. Waiting for processes to exit.
Oct  1 10:17:10 np0005464214 systemd-logind[818]: Removed session 55.
Oct  1 10:17:10 np0005464214 podman[313854]: 2025-10-01 14:17:10.115462131 +0000 UTC m=+0.047301853 container create bbcbc3cb8d67c851877a8012d5a10ab48855a0f8177a81e48ccc05094ad88904 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_merkle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2)
Oct  1 10:17:10 np0005464214 systemd[1]: Started libpod-conmon-bbcbc3cb8d67c851877a8012d5a10ab48855a0f8177a81e48ccc05094ad88904.scope.
Oct  1 10:17:10 np0005464214 podman[313854]: 2025-10-01 14:17:10.095110225 +0000 UTC m=+0.026949937 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 10:17:10 np0005464214 systemd[1]: Started libcrun container.
Oct  1 10:17:10 np0005464214 podman[313854]: 2025-10-01 14:17:10.235528325 +0000 UTC m=+0.167368107 container init bbcbc3cb8d67c851877a8012d5a10ab48855a0f8177a81e48ccc05094ad88904 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_merkle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Oct  1 10:17:10 np0005464214 podman[313854]: 2025-10-01 14:17:10.24355403 +0000 UTC m=+0.175393732 container start bbcbc3cb8d67c851877a8012d5a10ab48855a0f8177a81e48ccc05094ad88904 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_merkle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 10:17:10 np0005464214 podman[313854]: 2025-10-01 14:17:10.247306198 +0000 UTC m=+0.179145880 container attach bbcbc3cb8d67c851877a8012d5a10ab48855a0f8177a81e48ccc05094ad88904 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_merkle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Oct  1 10:17:10 np0005464214 nostalgic_merkle[313870]: 167 167
Oct  1 10:17:10 np0005464214 systemd[1]: libpod-bbcbc3cb8d67c851877a8012d5a10ab48855a0f8177a81e48ccc05094ad88904.scope: Deactivated successfully.
Oct  1 10:17:10 np0005464214 podman[313854]: 2025-10-01 14:17:10.249622212 +0000 UTC m=+0.181461944 container died bbcbc3cb8d67c851877a8012d5a10ab48855a0f8177a81e48ccc05094ad88904 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_merkle, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct  1 10:17:10 np0005464214 systemd[1]: var-lib-containers-storage-overlay-0cfa301def8cfc6c94f0df3878ffbefc1ccd0c63a5bf185a035249ff0a8d076d-merged.mount: Deactivated successfully.
Oct  1 10:17:10 np0005464214 podman[313854]: 2025-10-01 14:17:10.299660622 +0000 UTC m=+0.231500344 container remove bbcbc3cb8d67c851877a8012d5a10ab48855a0f8177a81e48ccc05094ad88904 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_merkle, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 10:17:10 np0005464214 systemd[1]: libpod-conmon-bbcbc3cb8d67c851877a8012d5a10ab48855a0f8177a81e48ccc05094ad88904.scope: Deactivated successfully.
Oct  1 10:17:10 np0005464214 podman[313894]: 2025-10-01 14:17:10.555240549 +0000 UTC m=+0.065368187 container create 4fb190c2428d01d44962ab4857cf2087384f6b9922e911e26a381f12f86e4d88 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_hypatia, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 10:17:10 np0005464214 systemd[1]: Started libpod-conmon-4fb190c2428d01d44962ab4857cf2087384f6b9922e911e26a381f12f86e4d88.scope.
Oct  1 10:17:10 np0005464214 podman[313894]: 2025-10-01 14:17:10.526004441 +0000 UTC m=+0.036132149 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 10:17:10 np0005464214 systemd[1]: Started libcrun container.
Oct  1 10:17:10 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2168e197028a59f89ee4dfbc2d816b97a7fe0c08cafbe3cf99ccf3d706432a65/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 10:17:10 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2168e197028a59f89ee4dfbc2d816b97a7fe0c08cafbe3cf99ccf3d706432a65/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 10:17:10 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2168e197028a59f89ee4dfbc2d816b97a7fe0c08cafbe3cf99ccf3d706432a65/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 10:17:10 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2168e197028a59f89ee4dfbc2d816b97a7fe0c08cafbe3cf99ccf3d706432a65/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 10:17:10 np0005464214 podman[313894]: 2025-10-01 14:17:10.678485843 +0000 UTC m=+0.188613511 container init 4fb190c2428d01d44962ab4857cf2087384f6b9922e911e26a381f12f86e4d88 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_hypatia, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 10:17:10 np0005464214 podman[313894]: 2025-10-01 14:17:10.69537809 +0000 UTC m=+0.205505728 container start 4fb190c2428d01d44962ab4857cf2087384f6b9922e911e26a381f12f86e4d88 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_hypatia, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2)
Oct  1 10:17:10 np0005464214 podman[313894]: 2025-10-01 14:17:10.701000968 +0000 UTC m=+0.211128606 container attach 4fb190c2428d01d44962ab4857cf2087384f6b9922e911e26a381f12f86e4d88 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_hypatia, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 10:17:11 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2239: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:17:11 np0005464214 wonderful_hypatia[313910]: {
Oct  1 10:17:11 np0005464214 wonderful_hypatia[313910]:    "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982": {
Oct  1 10:17:11 np0005464214 wonderful_hypatia[313910]:        "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 10:17:11 np0005464214 wonderful_hypatia[313910]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  1 10:17:11 np0005464214 wonderful_hypatia[313910]:        "osd_id": 0,
Oct  1 10:17:11 np0005464214 wonderful_hypatia[313910]:        "osd_uuid": "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982",
Oct  1 10:17:11 np0005464214 wonderful_hypatia[313910]:        "type": "bluestore"
Oct  1 10:17:11 np0005464214 wonderful_hypatia[313910]:    },
Oct  1 10:17:11 np0005464214 wonderful_hypatia[313910]:    "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9": {
Oct  1 10:17:11 np0005464214 wonderful_hypatia[313910]:        "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 10:17:11 np0005464214 wonderful_hypatia[313910]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  1 10:17:11 np0005464214 wonderful_hypatia[313910]:        "osd_id": 2,
Oct  1 10:17:11 np0005464214 wonderful_hypatia[313910]:        "osd_uuid": "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9",
Oct  1 10:17:11 np0005464214 wonderful_hypatia[313910]:        "type": "bluestore"
Oct  1 10:17:11 np0005464214 wonderful_hypatia[313910]:    },
Oct  1 10:17:11 np0005464214 wonderful_hypatia[313910]:    "f5852bc7-e830-489a-b8a9-42dfbbe71426": {
Oct  1 10:17:11 np0005464214 wonderful_hypatia[313910]:        "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 10:17:11 np0005464214 wonderful_hypatia[313910]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  1 10:17:11 np0005464214 wonderful_hypatia[313910]:        "osd_id": 1,
Oct  1 10:17:11 np0005464214 wonderful_hypatia[313910]:        "osd_uuid": "f5852bc7-e830-489a-b8a9-42dfbbe71426",
Oct  1 10:17:11 np0005464214 wonderful_hypatia[313910]:        "type": "bluestore"
Oct  1 10:17:11 np0005464214 wonderful_hypatia[313910]:    }
Oct  1 10:17:11 np0005464214 wonderful_hypatia[313910]: }
Oct  1 10:17:11 np0005464214 systemd[1]: libpod-4fb190c2428d01d44962ab4857cf2087384f6b9922e911e26a381f12f86e4d88.scope: Deactivated successfully.
Oct  1 10:17:11 np0005464214 podman[313894]: 2025-10-01 14:17:11.749415167 +0000 UTC m=+1.259542865 container died 4fb190c2428d01d44962ab4857cf2087384f6b9922e911e26a381f12f86e4d88 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_hypatia, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 10:17:11 np0005464214 systemd[1]: libpod-4fb190c2428d01d44962ab4857cf2087384f6b9922e911e26a381f12f86e4d88.scope: Consumed 1.062s CPU time.
Oct  1 10:17:11 np0005464214 systemd[1]: var-lib-containers-storage-overlay-2168e197028a59f89ee4dfbc2d816b97a7fe0c08cafbe3cf99ccf3d706432a65-merged.mount: Deactivated successfully.
Oct  1 10:17:11 np0005464214 podman[313894]: 2025-10-01 14:17:11.814320058 +0000 UTC m=+1.324447666 container remove 4fb190c2428d01d44962ab4857cf2087384f6b9922e911e26a381f12f86e4d88 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_hypatia, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 10:17:11 np0005464214 systemd[1]: libpod-conmon-4fb190c2428d01d44962ab4857cf2087384f6b9922e911e26a381f12f86e4d88.scope: Deactivated successfully.
Oct  1 10:17:11 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  1 10:17:11 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 10:17:11 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  1 10:17:11 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 10:17:11 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev 81f334ab-3389-4db8-abf0-6ad5f5cdd1fd does not exist
Oct  1 10:17:11 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev 135b6a4f-3d84-41d0-852c-dbc71493a5ca does not exist
Oct  1 10:17:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 14:17:12.343 161890 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 10:17:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 14:17:12.345 161890 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 10:17:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 14:17:12.345 161890 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 10:17:12 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 10:17:12 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 10:17:13 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e197 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 10:17:13 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2240: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:17:15 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2241: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:17:17 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2242: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:17:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 10:17:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 10:17:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 10:17:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 10:17:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 10:17:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 10:17:18 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e197 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 10:17:19 np0005464214 podman[314008]: 2025-10-01 14:17:19.544269966 +0000 UTC m=+0.084303708 container health_status dfa509645741d50db256c392f2c7e50ef8a1653f296eb853aefbe3d2ba4746c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20250923, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=36bccb96575468ec919301205d8daa2c)
Oct  1 10:17:19 np0005464214 podman[314006]: 2025-10-01 14:17:19.551038281 +0000 UTC m=+0.098059546 container health_status a1dc94033b3cdaf0c1939938a446bc6911aa0e2bf0fa0c22c3e618390e8d96e1 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20250923, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=36bccb96575468ec919301205d8daa2c, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct  1 10:17:19 np0005464214 podman[314007]: 2025-10-01 14:17:19.551986531 +0000 UTC m=+0.093915414 container health_status c13c85560319b246b9406aed1b6b3015b2c424d3ffff37d0301b6634f5976b0d (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250923, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=iscsid)
Oct  1 10:17:19 np0005464214 podman[314005]: 2025-10-01 14:17:19.600546023 +0000 UTC m=+0.148404694 container health_status 583cde33fa111e3f81ff9a7adf51b7bee43cd123d54d60383b2d474b3b5066ad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.build-date=20250923, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct  1 10:17:19 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2243: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:17:21 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2244: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:17:23 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e197 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 10:17:23 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2245: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:17:25 np0005464214 nova_compute[260022]: 2025-10-01 14:17:25.344 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 10:17:25 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2246: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:17:27 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2247: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:17:28 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e197 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 10:17:29 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2248: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:17:31 np0005464214 nova_compute[260022]: 2025-10-01 14:17:31.344 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 10:17:31 np0005464214 nova_compute[260022]: 2025-10-01 14:17:31.367 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 10:17:31 np0005464214 nova_compute[260022]: 2025-10-01 14:17:31.368 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 10:17:31 np0005464214 nova_compute[260022]: 2025-10-01 14:17:31.368 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 10:17:31 np0005464214 nova_compute[260022]: 2025-10-01 14:17:31.368 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  1 10:17:31 np0005464214 nova_compute[260022]: 2025-10-01 14:17:31.369 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 10:17:31 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2249: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:17:31 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  1 10:17:31 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3282818037' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  1 10:17:31 np0005464214 nova_compute[260022]: 2025-10-01 14:17:31.845 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 10:17:32 np0005464214 nova_compute[260022]: 2025-10-01 14:17:32.059 2 WARNING nova.virt.libvirt.driver [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  1 10:17:32 np0005464214 nova_compute[260022]: 2025-10-01 14:17:32.062 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5027MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  1 10:17:32 np0005464214 nova_compute[260022]: 2025-10-01 14:17:32.062 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 10:17:32 np0005464214 nova_compute[260022]: 2025-10-01 14:17:32.063 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 10:17:32 np0005464214 nova_compute[260022]: 2025-10-01 14:17:32.145 2 INFO nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Instance 3d91db2b-812b-47ea-a0f8-0384b4c68597 has allocations against this compute host but is not found in the database.#033[00m
Oct  1 10:17:32 np0005464214 nova_compute[260022]: 2025-10-01 14:17:32.163 2 INFO nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Instance 84424f30-80d3-425e-b60f-86809ad3c076 has allocations against this compute host but is not found in the database.#033[00m
Oct  1 10:17:32 np0005464214 nova_compute[260022]: 2025-10-01 14:17:32.164 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  1 10:17:32 np0005464214 nova_compute[260022]: 2025-10-01 14:17:32.164 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  1 10:17:32 np0005464214 nova_compute[260022]: 2025-10-01 14:17:32.226 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 10:17:32 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  1 10:17:32 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/997633879' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  1 10:17:32 np0005464214 nova_compute[260022]: 2025-10-01 14:17:32.735 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.508s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 10:17:32 np0005464214 nova_compute[260022]: 2025-10-01 14:17:32.740 2 DEBUG nova.compute.provider_tree [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Inventory has not changed in ProviderTree for provider: c1b9017d-7e6f-44ea-9ee2-bc19313d736f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  1 10:17:32 np0005464214 nova_compute[260022]: 2025-10-01 14:17:32.910 2 DEBUG nova.scheduler.client.report [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Inventory has not changed for provider c1b9017d-7e6f-44ea-9ee2-bc19313d736f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  1 10:17:32 np0005464214 nova_compute[260022]: 2025-10-01 14:17:32.911 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  1 10:17:32 np0005464214 nova_compute[260022]: 2025-10-01 14:17:32.911 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.849s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 10:17:33 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e197 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 10:17:33 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2250: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:17:35 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2251: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:17:36 np0005464214 nova_compute[260022]: 2025-10-01 14:17:36.907 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 10:17:36 np0005464214 nova_compute[260022]: 2025-10-01 14:17:36.908 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 10:17:36 np0005464214 nova_compute[260022]: 2025-10-01 14:17:36.909 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  1 10:17:37 np0005464214 nova_compute[260022]: 2025-10-01 14:17:37.345 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 10:17:37 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2252: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:17:38 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e197 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 10:17:38 np0005464214 nova_compute[260022]: 2025-10-01 14:17:38.346 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 10:17:39 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2253: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:17:41 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2254: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:17:42 np0005464214 nova_compute[260022]: 2025-10-01 14:17:42.346 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 10:17:42 np0005464214 nova_compute[260022]: 2025-10-01 14:17:42.347 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  1 10:17:42 np0005464214 nova_compute[260022]: 2025-10-01 14:17:42.347 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  1 10:17:42 np0005464214 nova_compute[260022]: 2025-10-01 14:17:42.437 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  1 10:17:43 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e197 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 10:17:43 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2255: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:17:44 np0005464214 nova_compute[260022]: 2025-10-01 14:17:44.344 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 10:17:45 np0005464214 nova_compute[260022]: 2025-10-01 14:17:45.345 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 10:17:45 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2256: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:17:47 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2257: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:17:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 10:17:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 10:17:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 10:17:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 10:17:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 10:17:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 10:17:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] Optimize plan auto_2025-10-01_14:17:47
Oct  1 10:17:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  1 10:17:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] do_upmap
Oct  1 10:17:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] pools ['volumes', 'default.rgw.log', 'cephfs.cephfs.data', 'vms', 'cephfs.cephfs.meta', 'default.rgw.meta', '.mgr', 'backups', 'images', '.rgw.root', 'default.rgw.control']
Oct  1 10:17:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] prepared 0/10 changes
Oct  1 10:17:48 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e197 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 10:17:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  1 10:17:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  1 10:17:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  1 10:17:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  1 10:17:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  1 10:17:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  1 10:17:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  1 10:17:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  1 10:17:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  1 10:17:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  1 10:17:49 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2258: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:17:50 np0005464214 podman[314127]: 2025-10-01 14:17:50.535049229 +0000 UTC m=+0.074826209 container health_status a1dc94033b3cdaf0c1939938a446bc6911aa0e2bf0fa0c22c3e618390e8d96e1 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, org.label-schema.build-date=20250923, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=36bccb96575468ec919301205d8daa2c, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd)
Oct  1 10:17:50 np0005464214 podman[314128]: 2025-10-01 14:17:50.561687684 +0000 UTC m=+0.089493203 container health_status c13c85560319b246b9406aed1b6b3015b2c424d3ffff37d0301b6634f5976b0d (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250923, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid)
Oct  1 10:17:50 np0005464214 podman[314126]: 2025-10-01 14:17:50.565905969 +0000 UTC m=+0.111216584 container health_status 583cde33fa111e3f81ff9a7adf51b7bee43cd123d54d60383b2d474b3b5066ad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20250923, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=36bccb96575468ec919301205d8daa2c, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Oct  1 10:17:50 np0005464214 podman[314129]: 2025-10-01 14:17:50.570585627 +0000 UTC m=+0.093555883 container health_status dfa509645741d50db256c392f2c7e50ef8a1653f296eb853aefbe3d2ba4746c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20250923, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent)
Oct  1 10:17:51 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2259: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:17:53 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e197 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 10:17:53 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2260: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:17:55 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  1 10:17:55 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2478297574' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  1 10:17:55 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  1 10:17:55 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2478297574' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  1 10:17:55 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2261: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:17:57 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2262: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:17:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] _maybe_adjust
Oct  1 10:17:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 10:17:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  1 10:17:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 10:17:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 10:17:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 10:17:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 10:17:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 10:17:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 10:17:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 10:17:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Oct  1 10:17:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 10:17:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct  1 10:17:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 10:17:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 10:17:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 10:17:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  1 10:17:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 10:17:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  1 10:17:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 10:17:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 10:17:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 10:17:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  1 10:17:58 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e197 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 10:17:59 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2263: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:18:00 np0005464214 ceph-mon[74802]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  1 10:18:00 np0005464214 ceph-mon[74802]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 4200.0 total, 600.0 interval#012Cumulative writes: 10K writes, 45K keys, 10K commit groups, 1.0 writes per commit group, ingest: 0.06 GB, 0.02 MB/s#012Cumulative WAL: 10K writes, 10K syncs, 1.00 writes per sync, written: 0.06 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1351 writes, 6361 keys, 1351 commit groups, 1.0 writes per commit group, ingest: 8.80 MB, 0.01 MB/s#012Interval WAL: 1351 writes, 1351 syncs, 1.00 writes per sync, written: 0.01 GB, 0.01 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     20.8      2.65              0.22        31    0.086       0      0       0.0       0.0#012  L6      1/0    6.80 MB   0.0      0.3     0.1      0.2       0.2      0.0       0.0   4.3     64.0     53.1      4.46              0.89        30    0.149    163K    16K       0.0       0.0#012 Sum      1/0    6.80 MB   0.0      0.3     0.1      0.2       0.3      0.1       0.0   5.3     40.1     41.1      7.11              1.10        61    0.117    163K    16K       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.1     0.0      0.0       0.1      0.0       0.0   7.8    141.2    137.3      0.41              0.19        12    0.034     38K   3090       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) 
Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.3     0.1      0.2       0.2      0.0       0.0   0.0     64.0     53.1      4.46              0.89        30    0.149    163K    16K       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     20.9      2.64              0.22        30    0.088       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      4.6      0.01              0.00         1    0.011       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 4200.0 total, 600.0 interval#012Flush(GB): cumulative 0.054, interval 0.007#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.29 GB write, 0.07 MB/s write, 0.28 GB read, 0.07 MB/s read, 7.1 seconds#012Interval compaction: 0.05 GB write, 0.09 MB/s write, 0.06 GB read, 0.10 MB/s read, 0.4 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55daa55431f0#2 capacity: 304.00 MB usage: 32.78 MB table_size: 0 occupancy: 18446744073709551615 collections: 8 last_copies: 0 last_secs: 0.000291 secs_since: 0#012Block cache entry stats(count,size,portion): 
DataBlock(2106,31.56 MB,10.3819%) FilterBlock(62,460.05 KB,0.147784%) IndexBlock(62,792.88 KB,0.254701%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Oct  1 10:18:01 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2264: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:18:03 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e197 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 10:18:03 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2265: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:18:05 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2266: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:18:07 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2267: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:18:08 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e197 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 10:18:09 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2268: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:18:11 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2269: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:18:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 14:18:12.344 161890 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 10:18:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 14:18:12.345 161890 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 10:18:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 14:18:12.345 161890 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 10:18:12 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  1 10:18:12 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  1 10:18:12 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  1 10:18:12 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  1 10:18:12 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  1 10:18:12 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 10:18:12 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev ab93582e-5adb-47dd-90a3-10b5a1a31189 does not exist
Oct  1 10:18:12 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev 5dc8939c-f424-4d3a-8f88-9e4a41b940ff does not exist
Oct  1 10:18:12 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev 2c8bfb06-c873-465d-aff1-9fe8b4c6b88f does not exist
Oct  1 10:18:12 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  1 10:18:12 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  1 10:18:12 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  1 10:18:12 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  1 10:18:12 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  1 10:18:12 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  1 10:18:13 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e197 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 10:18:13 np0005464214 podman[314477]: 2025-10-01 14:18:13.394908272 +0000 UTC m=+0.036880932 container create 7861ac2c3252e103c84d2a42b958c9a795552edeecf4956e6764380ee9c656fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_mccarthy, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 10:18:13 np0005464214 systemd[1]: Started libpod-conmon-7861ac2c3252e103c84d2a42b958c9a795552edeecf4956e6764380ee9c656fd.scope.
Oct  1 10:18:13 np0005464214 systemd[1]: Started libcrun container.
Oct  1 10:18:13 np0005464214 podman[314477]: 2025-10-01 14:18:13.469332966 +0000 UTC m=+0.111305636 container init 7861ac2c3252e103c84d2a42b958c9a795552edeecf4956e6764380ee9c656fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_mccarthy, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct  1 10:18:13 np0005464214 podman[314477]: 2025-10-01 14:18:13.379362299 +0000 UTC m=+0.021334989 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 10:18:13 np0005464214 podman[314477]: 2025-10-01 14:18:13.476945928 +0000 UTC m=+0.118918588 container start 7861ac2c3252e103c84d2a42b958c9a795552edeecf4956e6764380ee9c656fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_mccarthy, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 10:18:13 np0005464214 podman[314477]: 2025-10-01 14:18:13.480408168 +0000 UTC m=+0.122380838 container attach 7861ac2c3252e103c84d2a42b958c9a795552edeecf4956e6764380ee9c656fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_mccarthy, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct  1 10:18:13 np0005464214 systemd[1]: libpod-7861ac2c3252e103c84d2a42b958c9a795552edeecf4956e6764380ee9c656fd.scope: Deactivated successfully.
Oct  1 10:18:13 np0005464214 quirky_mccarthy[314494]: 167 167
Oct  1 10:18:13 np0005464214 conmon[314494]: conmon 7861ac2c3252e103c84d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-7861ac2c3252e103c84d2a42b958c9a795552edeecf4956e6764380ee9c656fd.scope/container/memory.events
Oct  1 10:18:13 np0005464214 podman[314477]: 2025-10-01 14:18:13.482822995 +0000 UTC m=+0.124795655 container died 7861ac2c3252e103c84d2a42b958c9a795552edeecf4956e6764380ee9c656fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_mccarthy, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 10:18:13 np0005464214 systemd[1]: var-lib-containers-storage-overlay-9625170523a709754b9e80c2bb3200bda5d2b2e6039069c784f47c9692f0344d-merged.mount: Deactivated successfully.
Oct  1 10:18:13 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  1 10:18:13 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 10:18:13 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  1 10:18:13 np0005464214 podman[314477]: 2025-10-01 14:18:13.590414571 +0000 UTC m=+0.232387231 container remove 7861ac2c3252e103c84d2a42b958c9a795552edeecf4956e6764380ee9c656fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_mccarthy, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef)
Oct  1 10:18:13 np0005464214 systemd[1]: libpod-conmon-7861ac2c3252e103c84d2a42b958c9a795552edeecf4956e6764380ee9c656fd.scope: Deactivated successfully.
Oct  1 10:18:13 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2270: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:18:13 np0005464214 podman[314517]: 2025-10-01 14:18:13.767128104 +0000 UTC m=+0.039121224 container create 8bf54b49c31edf7c5242427f3562dd2fa1d8dcae1f65713b5161f8a9735dccb5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_easley, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Oct  1 10:18:13 np0005464214 systemd[1]: Started libpod-conmon-8bf54b49c31edf7c5242427f3562dd2fa1d8dcae1f65713b5161f8a9735dccb5.scope.
Oct  1 10:18:13 np0005464214 systemd[1]: Started libcrun container.
Oct  1 10:18:13 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/69ee6c22a01b3c4ec6001e0ca3349d77366fe5d35debb91a8d4685e3b5425c6a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 10:18:13 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/69ee6c22a01b3c4ec6001e0ca3349d77366fe5d35debb91a8d4685e3b5425c6a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 10:18:13 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/69ee6c22a01b3c4ec6001e0ca3349d77366fe5d35debb91a8d4685e3b5425c6a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 10:18:13 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/69ee6c22a01b3c4ec6001e0ca3349d77366fe5d35debb91a8d4685e3b5425c6a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 10:18:13 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/69ee6c22a01b3c4ec6001e0ca3349d77366fe5d35debb91a8d4685e3b5425c6a/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  1 10:18:13 np0005464214 podman[314517]: 2025-10-01 14:18:13.833141401 +0000 UTC m=+0.105134541 container init 8bf54b49c31edf7c5242427f3562dd2fa1d8dcae1f65713b5161f8a9735dccb5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_easley, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS)
Oct  1 10:18:13 np0005464214 podman[314517]: 2025-10-01 14:18:13.837981375 +0000 UTC m=+0.109974495 container start 8bf54b49c31edf7c5242427f3562dd2fa1d8dcae1f65713b5161f8a9735dccb5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_easley, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct  1 10:18:13 np0005464214 podman[314517]: 2025-10-01 14:18:13.841043012 +0000 UTC m=+0.113036132 container attach 8bf54b49c31edf7c5242427f3562dd2fa1d8dcae1f65713b5161f8a9735dccb5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_easley, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Oct  1 10:18:13 np0005464214 podman[314517]: 2025-10-01 14:18:13.752506589 +0000 UTC m=+0.024499729 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 10:18:14 np0005464214 friendly_easley[314534]: --> passed data devices: 0 physical, 3 LVM
Oct  1 10:18:14 np0005464214 friendly_easley[314534]: --> relative data size: 1.0
Oct  1 10:18:14 np0005464214 friendly_easley[314534]: --> All data devices are unavailable
Oct  1 10:18:14 np0005464214 systemd[1]: libpod-8bf54b49c31edf7c5242427f3562dd2fa1d8dcae1f65713b5161f8a9735dccb5.scope: Deactivated successfully.
Oct  1 10:18:14 np0005464214 podman[314517]: 2025-10-01 14:18:14.788612687 +0000 UTC m=+1.060605807 container died 8bf54b49c31edf7c5242427f3562dd2fa1d8dcae1f65713b5161f8a9735dccb5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_easley, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 10:18:14 np0005464214 systemd[1]: var-lib-containers-storage-overlay-69ee6c22a01b3c4ec6001e0ca3349d77366fe5d35debb91a8d4685e3b5425c6a-merged.mount: Deactivated successfully.
Oct  1 10:18:14 np0005464214 podman[314517]: 2025-10-01 14:18:14.850368678 +0000 UTC m=+1.122361798 container remove 8bf54b49c31edf7c5242427f3562dd2fa1d8dcae1f65713b5161f8a9735dccb5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_easley, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2)
Oct  1 10:18:14 np0005464214 systemd[1]: libpod-conmon-8bf54b49c31edf7c5242427f3562dd2fa1d8dcae1f65713b5161f8a9735dccb5.scope: Deactivated successfully.
Oct  1 10:18:15 np0005464214 podman[314716]: 2025-10-01 14:18:15.435264725 +0000 UTC m=+0.039892867 container create 82c5b3ec0d91c6079e8c26ecfd9b7b9cf9309f2f385e8b2ea88a7fd206a1bfad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_faraday, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Oct  1 10:18:15 np0005464214 systemd[1]: Started libpod-conmon-82c5b3ec0d91c6079e8c26ecfd9b7b9cf9309f2f385e8b2ea88a7fd206a1bfad.scope.
Oct  1 10:18:15 np0005464214 systemd[1]: Started libcrun container.
Oct  1 10:18:15 np0005464214 podman[314716]: 2025-10-01 14:18:15.507073326 +0000 UTC m=+0.111701508 container init 82c5b3ec0d91c6079e8c26ecfd9b7b9cf9309f2f385e8b2ea88a7fd206a1bfad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_faraday, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct  1 10:18:15 np0005464214 podman[314716]: 2025-10-01 14:18:15.418626626 +0000 UTC m=+0.023254808 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 10:18:15 np0005464214 podman[314716]: 2025-10-01 14:18:15.514184372 +0000 UTC m=+0.118812524 container start 82c5b3ec0d91c6079e8c26ecfd9b7b9cf9309f2f385e8b2ea88a7fd206a1bfad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_faraday, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Oct  1 10:18:15 np0005464214 nifty_faraday[314733]: 167 167
Oct  1 10:18:15 np0005464214 podman[314716]: 2025-10-01 14:18:15.517789846 +0000 UTC m=+0.122418048 container attach 82c5b3ec0d91c6079e8c26ecfd9b7b9cf9309f2f385e8b2ea88a7fd206a1bfad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_faraday, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 10:18:15 np0005464214 systemd[1]: libpod-82c5b3ec0d91c6079e8c26ecfd9b7b9cf9309f2f385e8b2ea88a7fd206a1bfad.scope: Deactivated successfully.
Oct  1 10:18:15 np0005464214 conmon[314733]: conmon 82c5b3ec0d91c6079e8c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-82c5b3ec0d91c6079e8c26ecfd9b7b9cf9309f2f385e8b2ea88a7fd206a1bfad.scope/container/memory.events
Oct  1 10:18:15 np0005464214 podman[314716]: 2025-10-01 14:18:15.519755108 +0000 UTC m=+0.124383280 container died 82c5b3ec0d91c6079e8c26ecfd9b7b9cf9309f2f385e8b2ea88a7fd206a1bfad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_faraday, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct  1 10:18:15 np0005464214 systemd[1]: var-lib-containers-storage-overlay-aaf49e8e56fff7b3f32fef2ab768580374544c5d57bb6dd157d5132d38d27144-merged.mount: Deactivated successfully.
Oct  1 10:18:15 np0005464214 podman[314716]: 2025-10-01 14:18:15.573593859 +0000 UTC m=+0.178222001 container remove 82c5b3ec0d91c6079e8c26ecfd9b7b9cf9309f2f385e8b2ea88a7fd206a1bfad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_faraday, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 10:18:15 np0005464214 systemd[1]: libpod-conmon-82c5b3ec0d91c6079e8c26ecfd9b7b9cf9309f2f385e8b2ea88a7fd206a1bfad.scope: Deactivated successfully.
Oct  1 10:18:15 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2271: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:18:15 np0005464214 podman[314758]: 2025-10-01 14:18:15.721487386 +0000 UTC m=+0.039463175 container create 70c3bffb04d4710606934f078e9fda53b0ac462abe5c89c120ad3a015df01546 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_pare, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 10:18:15 np0005464214 systemd[1]: Started libpod-conmon-70c3bffb04d4710606934f078e9fda53b0ac462abe5c89c120ad3a015df01546.scope.
Oct  1 10:18:15 np0005464214 systemd[1]: Started libcrun container.
Oct  1 10:18:15 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/432b171598b99bb95d0e1d15a46c35cd728b983c19d343afb836b94359a6cbd6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 10:18:15 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/432b171598b99bb95d0e1d15a46c35cd728b983c19d343afb836b94359a6cbd6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 10:18:15 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/432b171598b99bb95d0e1d15a46c35cd728b983c19d343afb836b94359a6cbd6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 10:18:15 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/432b171598b99bb95d0e1d15a46c35cd728b983c19d343afb836b94359a6cbd6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 10:18:15 np0005464214 podman[314758]: 2025-10-01 14:18:15.785499819 +0000 UTC m=+0.103475628 container init 70c3bffb04d4710606934f078e9fda53b0ac462abe5c89c120ad3a015df01546 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_pare, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct  1 10:18:15 np0005464214 podman[314758]: 2025-10-01 14:18:15.795083593 +0000 UTC m=+0.113059382 container start 70c3bffb04d4710606934f078e9fda53b0ac462abe5c89c120ad3a015df01546 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_pare, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Oct  1 10:18:15 np0005464214 podman[314758]: 2025-10-01 14:18:15.703162704 +0000 UTC m=+0.021138523 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 10:18:15 np0005464214 podman[314758]: 2025-10-01 14:18:15.799242585 +0000 UTC m=+0.117218384 container attach 70c3bffb04d4710606934f078e9fda53b0ac462abe5c89c120ad3a015df01546 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_pare, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct  1 10:18:16 np0005464214 cool_pare[314774]: {
Oct  1 10:18:16 np0005464214 cool_pare[314774]:    "0": [
Oct  1 10:18:16 np0005464214 cool_pare[314774]:        {
Oct  1 10:18:16 np0005464214 cool_pare[314774]:            "devices": [
Oct  1 10:18:16 np0005464214 cool_pare[314774]:                "/dev/loop3"
Oct  1 10:18:16 np0005464214 cool_pare[314774]:            ],
Oct  1 10:18:16 np0005464214 cool_pare[314774]:            "lv_name": "ceph_lv0",
Oct  1 10:18:16 np0005464214 cool_pare[314774]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  1 10:18:16 np0005464214 cool_pare[314774]:            "lv_size": "21470642176",
Oct  1 10:18:16 np0005464214 cool_pare[314774]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 10:18:16 np0005464214 cool_pare[314774]:            "lv_uuid": "wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS",
Oct  1 10:18:16 np0005464214 cool_pare[314774]:            "name": "ceph_lv0",
Oct  1 10:18:16 np0005464214 cool_pare[314774]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  1 10:18:16 np0005464214 cool_pare[314774]:            "tags": {
Oct  1 10:18:16 np0005464214 cool_pare[314774]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  1 10:18:16 np0005464214 cool_pare[314774]:                "ceph.block_uuid": "wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS",
Oct  1 10:18:16 np0005464214 cool_pare[314774]:                "ceph.cephx_lockbox_secret": "",
Oct  1 10:18:16 np0005464214 cool_pare[314774]:                "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 10:18:16 np0005464214 cool_pare[314774]:                "ceph.cluster_name": "ceph",
Oct  1 10:18:16 np0005464214 cool_pare[314774]:                "ceph.crush_device_class": "",
Oct  1 10:18:16 np0005464214 cool_pare[314774]:                "ceph.encrypted": "0",
Oct  1 10:18:16 np0005464214 cool_pare[314774]:                "ceph.osd_fsid": "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982",
Oct  1 10:18:16 np0005464214 cool_pare[314774]:                "ceph.osd_id": "0",
Oct  1 10:18:16 np0005464214 cool_pare[314774]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 10:18:16 np0005464214 cool_pare[314774]:                "ceph.type": "block",
Oct  1 10:18:16 np0005464214 cool_pare[314774]:                "ceph.vdo": "0"
Oct  1 10:18:16 np0005464214 cool_pare[314774]:            },
Oct  1 10:18:16 np0005464214 cool_pare[314774]:            "type": "block",
Oct  1 10:18:16 np0005464214 cool_pare[314774]:            "vg_name": "ceph_vg0"
Oct  1 10:18:16 np0005464214 cool_pare[314774]:        }
Oct  1 10:18:16 np0005464214 cool_pare[314774]:    ],
Oct  1 10:18:16 np0005464214 cool_pare[314774]:    "1": [
Oct  1 10:18:16 np0005464214 cool_pare[314774]:        {
Oct  1 10:18:16 np0005464214 cool_pare[314774]:            "devices": [
Oct  1 10:18:16 np0005464214 cool_pare[314774]:                "/dev/loop4"
Oct  1 10:18:16 np0005464214 cool_pare[314774]:            ],
Oct  1 10:18:16 np0005464214 cool_pare[314774]:            "lv_name": "ceph_lv1",
Oct  1 10:18:16 np0005464214 cool_pare[314774]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  1 10:18:16 np0005464214 cool_pare[314774]:            "lv_size": "21470642176",
Oct  1 10:18:16 np0005464214 cool_pare[314774]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f5852bc7-e830-489a-b8a9-42dfbbe71426,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 10:18:16 np0005464214 cool_pare[314774]:            "lv_uuid": "BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY",
Oct  1 10:18:16 np0005464214 cool_pare[314774]:            "name": "ceph_lv1",
Oct  1 10:18:16 np0005464214 cool_pare[314774]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  1 10:18:16 np0005464214 cool_pare[314774]:            "tags": {
Oct  1 10:18:16 np0005464214 cool_pare[314774]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  1 10:18:16 np0005464214 cool_pare[314774]:                "ceph.block_uuid": "BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY",
Oct  1 10:18:16 np0005464214 cool_pare[314774]:                "ceph.cephx_lockbox_secret": "",
Oct  1 10:18:16 np0005464214 cool_pare[314774]:                "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 10:18:16 np0005464214 cool_pare[314774]:                "ceph.cluster_name": "ceph",
Oct  1 10:18:16 np0005464214 cool_pare[314774]:                "ceph.crush_device_class": "",
Oct  1 10:18:16 np0005464214 cool_pare[314774]:                "ceph.encrypted": "0",
Oct  1 10:18:16 np0005464214 cool_pare[314774]:                "ceph.osd_fsid": "f5852bc7-e830-489a-b8a9-42dfbbe71426",
Oct  1 10:18:16 np0005464214 cool_pare[314774]:                "ceph.osd_id": "1",
Oct  1 10:18:16 np0005464214 cool_pare[314774]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 10:18:16 np0005464214 cool_pare[314774]:                "ceph.type": "block",
Oct  1 10:18:16 np0005464214 cool_pare[314774]:                "ceph.vdo": "0"
Oct  1 10:18:16 np0005464214 cool_pare[314774]:            },
Oct  1 10:18:16 np0005464214 cool_pare[314774]:            "type": "block",
Oct  1 10:18:16 np0005464214 cool_pare[314774]:            "vg_name": "ceph_vg1"
Oct  1 10:18:16 np0005464214 cool_pare[314774]:        }
Oct  1 10:18:16 np0005464214 cool_pare[314774]:    ],
Oct  1 10:18:16 np0005464214 cool_pare[314774]:    "2": [
Oct  1 10:18:16 np0005464214 cool_pare[314774]:        {
Oct  1 10:18:16 np0005464214 cool_pare[314774]:            "devices": [
Oct  1 10:18:16 np0005464214 cool_pare[314774]:                "/dev/loop5"
Oct  1 10:18:16 np0005464214 cool_pare[314774]:            ],
Oct  1 10:18:16 np0005464214 cool_pare[314774]:            "lv_name": "ceph_lv2",
Oct  1 10:18:16 np0005464214 cool_pare[314774]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  1 10:18:16 np0005464214 cool_pare[314774]:            "lv_size": "21470642176",
Oct  1 10:18:16 np0005464214 cool_pare[314774]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c4c937e2-a8a8-47c3-af37-fdedb6fff1f9,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 10:18:16 np0005464214 cool_pare[314774]:            "lv_uuid": "mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK",
Oct  1 10:18:16 np0005464214 cool_pare[314774]:            "name": "ceph_lv2",
Oct  1 10:18:16 np0005464214 cool_pare[314774]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  1 10:18:16 np0005464214 cool_pare[314774]:            "tags": {
Oct  1 10:18:16 np0005464214 cool_pare[314774]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  1 10:18:16 np0005464214 cool_pare[314774]:                "ceph.block_uuid": "mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK",
Oct  1 10:18:16 np0005464214 cool_pare[314774]:                "ceph.cephx_lockbox_secret": "",
Oct  1 10:18:16 np0005464214 cool_pare[314774]:                "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 10:18:16 np0005464214 cool_pare[314774]:                "ceph.cluster_name": "ceph",
Oct  1 10:18:16 np0005464214 cool_pare[314774]:                "ceph.crush_device_class": "",
Oct  1 10:18:16 np0005464214 cool_pare[314774]:                "ceph.encrypted": "0",
Oct  1 10:18:16 np0005464214 cool_pare[314774]:                "ceph.osd_fsid": "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9",
Oct  1 10:18:16 np0005464214 cool_pare[314774]:                "ceph.osd_id": "2",
Oct  1 10:18:16 np0005464214 cool_pare[314774]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 10:18:16 np0005464214 cool_pare[314774]:                "ceph.type": "block",
Oct  1 10:18:16 np0005464214 cool_pare[314774]:                "ceph.vdo": "0"
Oct  1 10:18:16 np0005464214 cool_pare[314774]:            },
Oct  1 10:18:16 np0005464214 cool_pare[314774]:            "type": "block",
Oct  1 10:18:16 np0005464214 cool_pare[314774]:            "vg_name": "ceph_vg2"
Oct  1 10:18:16 np0005464214 cool_pare[314774]:        }
Oct  1 10:18:16 np0005464214 cool_pare[314774]:    ]
Oct  1 10:18:16 np0005464214 cool_pare[314774]: }
Oct  1 10:18:16 np0005464214 systemd[1]: libpod-70c3bffb04d4710606934f078e9fda53b0ac462abe5c89c120ad3a015df01546.scope: Deactivated successfully.
Oct  1 10:18:16 np0005464214 podman[314758]: 2025-10-01 14:18:16.516501696 +0000 UTC m=+0.834477495 container died 70c3bffb04d4710606934f078e9fda53b0ac462abe5c89c120ad3a015df01546 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_pare, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Oct  1 10:18:16 np0005464214 systemd[1]: var-lib-containers-storage-overlay-432b171598b99bb95d0e1d15a46c35cd728b983c19d343afb836b94359a6cbd6-merged.mount: Deactivated successfully.
Oct  1 10:18:16 np0005464214 podman[314758]: 2025-10-01 14:18:16.58024676 +0000 UTC m=+0.898222569 container remove 70c3bffb04d4710606934f078e9fda53b0ac462abe5c89c120ad3a015df01546 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_pare, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Oct  1 10:18:16 np0005464214 systemd[1]: libpod-conmon-70c3bffb04d4710606934f078e9fda53b0ac462abe5c89c120ad3a015df01546.scope: Deactivated successfully.
Oct  1 10:18:17 np0005464214 podman[314936]: 2025-10-01 14:18:17.118742253 +0000 UTC m=+0.039779174 container create 444fa15d0fea3f7d0318ea3ee5de74bf07d074a8d4043364beb7940ff9ed8c51 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_driscoll, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct  1 10:18:17 np0005464214 systemd[1]: Started libpod-conmon-444fa15d0fea3f7d0318ea3ee5de74bf07d074a8d4043364beb7940ff9ed8c51.scope.
Oct  1 10:18:17 np0005464214 systemd[1]: Started libcrun container.
Oct  1 10:18:17 np0005464214 podman[314936]: 2025-10-01 14:18:17.101403983 +0000 UTC m=+0.022440934 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 10:18:17 np0005464214 podman[314936]: 2025-10-01 14:18:17.199274271 +0000 UTC m=+0.120311222 container init 444fa15d0fea3f7d0318ea3ee5de74bf07d074a8d4043364beb7940ff9ed8c51 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_driscoll, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct  1 10:18:17 np0005464214 podman[314936]: 2025-10-01 14:18:17.20554962 +0000 UTC m=+0.126586541 container start 444fa15d0fea3f7d0318ea3ee5de74bf07d074a8d4043364beb7940ff9ed8c51 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_driscoll, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct  1 10:18:17 np0005464214 podman[314936]: 2025-10-01 14:18:17.209090693 +0000 UTC m=+0.130127614 container attach 444fa15d0fea3f7d0318ea3ee5de74bf07d074a8d4043364beb7940ff9ed8c51 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_driscoll, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct  1 10:18:17 np0005464214 great_driscoll[314953]: 167 167
Oct  1 10:18:17 np0005464214 systemd[1]: libpod-444fa15d0fea3f7d0318ea3ee5de74bf07d074a8d4043364beb7940ff9ed8c51.scope: Deactivated successfully.
Oct  1 10:18:17 np0005464214 podman[314936]: 2025-10-01 14:18:17.212310966 +0000 UTC m=+0.133347917 container died 444fa15d0fea3f7d0318ea3ee5de74bf07d074a8d4043364beb7940ff9ed8c51 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_driscoll, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Oct  1 10:18:17 np0005464214 systemd[1]: var-lib-containers-storage-overlay-bb4f43684d44a298a0b6d86a0b7347a73e37d6ac3672e34bef05a2d893ca3178-merged.mount: Deactivated successfully.
Oct  1 10:18:17 np0005464214 podman[314936]: 2025-10-01 14:18:17.272051282 +0000 UTC m=+0.193088233 container remove 444fa15d0fea3f7d0318ea3ee5de74bf07d074a8d4043364beb7940ff9ed8c51 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_driscoll, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 10:18:17 np0005464214 systemd[1]: libpod-conmon-444fa15d0fea3f7d0318ea3ee5de74bf07d074a8d4043364beb7940ff9ed8c51.scope: Deactivated successfully.
Oct  1 10:18:17 np0005464214 podman[314979]: 2025-10-01 14:18:17.46498018 +0000 UTC m=+0.039436684 container create 75b224f16b293477d1bb650d61dd0edc78b497c4cc81cc29ce2a732d687b2773 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_carson, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Oct  1 10:18:17 np0005464214 systemd[1]: Started libpod-conmon-75b224f16b293477d1bb650d61dd0edc78b497c4cc81cc29ce2a732d687b2773.scope.
Oct  1 10:18:17 np0005464214 systemd[1]: Started libcrun container.
Oct  1 10:18:17 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/04e410fd9c6da367ec95ecbe8b92d656ee3159ac534e65a22d7d7a42d0a02d11/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 10:18:17 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/04e410fd9c6da367ec95ecbe8b92d656ee3159ac534e65a22d7d7a42d0a02d11/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 10:18:17 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/04e410fd9c6da367ec95ecbe8b92d656ee3159ac534e65a22d7d7a42d0a02d11/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 10:18:17 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/04e410fd9c6da367ec95ecbe8b92d656ee3159ac534e65a22d7d7a42d0a02d11/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 10:18:17 np0005464214 podman[314979]: 2025-10-01 14:18:17.538679081 +0000 UTC m=+0.113135615 container init 75b224f16b293477d1bb650d61dd0edc78b497c4cc81cc29ce2a732d687b2773 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_carson, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct  1 10:18:17 np0005464214 podman[314979]: 2025-10-01 14:18:17.448363842 +0000 UTC m=+0.022820366 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 10:18:17 np0005464214 podman[314979]: 2025-10-01 14:18:17.546074145 +0000 UTC m=+0.120530649 container start 75b224f16b293477d1bb650d61dd0edc78b497c4cc81cc29ce2a732d687b2773 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_carson, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 10:18:17 np0005464214 podman[314979]: 2025-10-01 14:18:17.550626321 +0000 UTC m=+0.125082905 container attach 75b224f16b293477d1bb650d61dd0edc78b497c4cc81cc29ce2a732d687b2773 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_carson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 10:18:17 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2272: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:18:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 10:18:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 10:18:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 10:18:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 10:18:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 10:18:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 10:18:18 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e197 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 10:18:18 np0005464214 wizardly_carson[314996]: {
Oct  1 10:18:18 np0005464214 wizardly_carson[314996]:    "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982": {
Oct  1 10:18:18 np0005464214 wizardly_carson[314996]:        "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 10:18:18 np0005464214 wizardly_carson[314996]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  1 10:18:18 np0005464214 wizardly_carson[314996]:        "osd_id": 0,
Oct  1 10:18:18 np0005464214 wizardly_carson[314996]:        "osd_uuid": "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982",
Oct  1 10:18:18 np0005464214 wizardly_carson[314996]:        "type": "bluestore"
Oct  1 10:18:18 np0005464214 wizardly_carson[314996]:    },
Oct  1 10:18:18 np0005464214 wizardly_carson[314996]:    "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9": {
Oct  1 10:18:18 np0005464214 wizardly_carson[314996]:        "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 10:18:18 np0005464214 wizardly_carson[314996]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  1 10:18:18 np0005464214 wizardly_carson[314996]:        "osd_id": 2,
Oct  1 10:18:18 np0005464214 wizardly_carson[314996]:        "osd_uuid": "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9",
Oct  1 10:18:18 np0005464214 wizardly_carson[314996]:        "type": "bluestore"
Oct  1 10:18:18 np0005464214 wizardly_carson[314996]:    },
Oct  1 10:18:18 np0005464214 wizardly_carson[314996]:    "f5852bc7-e830-489a-b8a9-42dfbbe71426": {
Oct  1 10:18:18 np0005464214 wizardly_carson[314996]:        "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 10:18:18 np0005464214 wizardly_carson[314996]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  1 10:18:18 np0005464214 wizardly_carson[314996]:        "osd_id": 1,
Oct  1 10:18:18 np0005464214 wizardly_carson[314996]:        "osd_uuid": "f5852bc7-e830-489a-b8a9-42dfbbe71426",
Oct  1 10:18:18 np0005464214 wizardly_carson[314996]:        "type": "bluestore"
Oct  1 10:18:18 np0005464214 wizardly_carson[314996]:    }
Oct  1 10:18:18 np0005464214 wizardly_carson[314996]: }
Oct  1 10:18:18 np0005464214 systemd[1]: libpod-75b224f16b293477d1bb650d61dd0edc78b497c4cc81cc29ce2a732d687b2773.scope: Deactivated successfully.
Oct  1 10:18:18 np0005464214 systemd[1]: libpod-75b224f16b293477d1bb650d61dd0edc78b497c4cc81cc29ce2a732d687b2773.scope: Consumed 1.034s CPU time.
Oct  1 10:18:18 np0005464214 podman[314979]: 2025-10-01 14:18:18.572244878 +0000 UTC m=+1.146701382 container died 75b224f16b293477d1bb650d61dd0edc78b497c4cc81cc29ce2a732d687b2773 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_carson, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True)
Oct  1 10:18:18 np0005464214 systemd[1]: var-lib-containers-storage-overlay-04e410fd9c6da367ec95ecbe8b92d656ee3159ac534e65a22d7d7a42d0a02d11-merged.mount: Deactivated successfully.
Oct  1 10:18:18 np0005464214 podman[314979]: 2025-10-01 14:18:18.633653738 +0000 UTC m=+1.208110242 container remove 75b224f16b293477d1bb650d61dd0edc78b497c4cc81cc29ce2a732d687b2773 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_carson, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 10:18:18 np0005464214 systemd[1]: libpod-conmon-75b224f16b293477d1bb650d61dd0edc78b497c4cc81cc29ce2a732d687b2773.scope: Deactivated successfully.
Oct  1 10:18:18 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  1 10:18:18 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 10:18:18 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  1 10:18:18 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 10:18:18 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev c429f253-72e7-4786-ae89-34cf598d6f88 does not exist
Oct  1 10:18:18 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev 47a3353e-4d07-4358-abde-335fdb3247c2 does not exist
Oct  1 10:18:19 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 10:18:19 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 10:18:19 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2273: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:18:21 np0005464214 podman[315092]: 2025-10-01 14:18:21.519620938 +0000 UTC m=+0.065077198 container health_status c13c85560319b246b9406aed1b6b3015b2c424d3ffff37d0301b6634f5976b0d (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=iscsid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true, config_id=iscsid, org.label-schema.build-date=20250923)
Oct  1 10:18:21 np0005464214 podman[315093]: 2025-10-01 14:18:21.521769116 +0000 UTC m=+0.062647100 container health_status dfa509645741d50db256c392f2c7e50ef8a1653f296eb853aefbe3d2ba4746c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20250923, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct  1 10:18:21 np0005464214 podman[315091]: 2025-10-01 14:18:21.527392794 +0000 UTC m=+0.075330393 container health_status a1dc94033b3cdaf0c1939938a446bc6911aa0e2bf0fa0c22c3e618390e8d96e1 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c, container_name=multipathd, org.label-schema.build-date=20250923, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3)
Oct  1 10:18:21 np0005464214 podman[315090]: 2025-10-01 14:18:21.569724309 +0000 UTC m=+0.120920112 container health_status 583cde33fa111e3f81ff9a7adf51b7bee43cd123d54d60383b2d474b3b5066ad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20250923, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct  1 10:18:21 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2274: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:18:23 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e197 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 10:18:23 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2275: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:18:25 np0005464214 nova_compute[260022]: 2025-10-01 14:18:25.345 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 10:18:25 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2276: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:18:27 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2277: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:18:28 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e197 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 10:18:29 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2278: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:18:31 np0005464214 nova_compute[260022]: 2025-10-01 14:18:31.345 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 10:18:31 np0005464214 nova_compute[260022]: 2025-10-01 14:18:31.489 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 10:18:31 np0005464214 nova_compute[260022]: 2025-10-01 14:18:31.490 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 10:18:31 np0005464214 nova_compute[260022]: 2025-10-01 14:18:31.490 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 10:18:31 np0005464214 nova_compute[260022]: 2025-10-01 14:18:31.490 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  1 10:18:31 np0005464214 nova_compute[260022]: 2025-10-01 14:18:31.491 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 10:18:31 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2279: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:18:31 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  1 10:18:31 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1974965730' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  1 10:18:31 np0005464214 nova_compute[260022]: 2025-10-01 14:18:31.928 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.437s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 10:18:32 np0005464214 nova_compute[260022]: 2025-10-01 14:18:32.099 2 WARNING nova.virt.libvirt.driver [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  1 10:18:32 np0005464214 nova_compute[260022]: 2025-10-01 14:18:32.101 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4989MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  1 10:18:32 np0005464214 nova_compute[260022]: 2025-10-01 14:18:32.101 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 10:18:32 np0005464214 nova_compute[260022]: 2025-10-01 14:18:32.102 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 10:18:32 np0005464214 nova_compute[260022]: 2025-10-01 14:18:32.413 2 INFO nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Instance 3d91db2b-812b-47ea-a0f8-0384b4c68597 has allocations against this compute host but is not found in the database.#033[00m
Oct  1 10:18:32 np0005464214 nova_compute[260022]: 2025-10-01 14:18:32.500 2 INFO nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Instance 84424f30-80d3-425e-b60f-86809ad3c076 has allocations against this compute host but is not found in the database.#033[00m
Oct  1 10:18:32 np0005464214 nova_compute[260022]: 2025-10-01 14:18:32.501 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  1 10:18:32 np0005464214 nova_compute[260022]: 2025-10-01 14:18:32.502 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  1 10:18:32 np0005464214 nova_compute[260022]: 2025-10-01 14:18:32.555 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 10:18:32 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  1 10:18:32 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1949324787' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  1 10:18:32 np0005464214 nova_compute[260022]: 2025-10-01 14:18:32.996 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 10:18:33 np0005464214 nova_compute[260022]: 2025-10-01 14:18:33.002 2 DEBUG nova.compute.provider_tree [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Inventory has not changed in ProviderTree for provider: c1b9017d-7e6f-44ea-9ee2-bc19313d736f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  1 10:18:33 np0005464214 nova_compute[260022]: 2025-10-01 14:18:33.072 2 DEBUG nova.scheduler.client.report [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Inventory has not changed for provider c1b9017d-7e6f-44ea-9ee2-bc19313d736f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  1 10:18:33 np0005464214 nova_compute[260022]: 2025-10-01 14:18:33.074 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  1 10:18:33 np0005464214 nova_compute[260022]: 2025-10-01 14:18:33.075 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.973s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 10:18:33 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e197 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 10:18:33 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2280: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:18:35 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2281: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:18:37 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2282: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:18:38 np0005464214 nova_compute[260022]: 2025-10-01 14:18:38.070 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 10:18:38 np0005464214 nova_compute[260022]: 2025-10-01 14:18:38.071 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 10:18:38 np0005464214 nova_compute[260022]: 2025-10-01 14:18:38.071 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  1 10:18:38 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e197 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 10:18:38 np0005464214 nova_compute[260022]: 2025-10-01 14:18:38.345 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 10:18:39 np0005464214 nova_compute[260022]: 2025-10-01 14:18:39.345 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 10:18:39 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2283: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:18:41 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2284: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:18:42 np0005464214 nova_compute[260022]: 2025-10-01 14:18:42.346 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 10:18:42 np0005464214 nova_compute[260022]: 2025-10-01 14:18:42.347 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  1 10:18:42 np0005464214 nova_compute[260022]: 2025-10-01 14:18:42.347 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  1 10:18:42 np0005464214 nova_compute[260022]: 2025-10-01 14:18:42.361 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  1 10:18:43 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e197 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 10:18:43 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2285: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:18:45 np0005464214 nova_compute[260022]: 2025-10-01 14:18:45.345 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 10:18:45 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2286: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:18:46 np0005464214 nova_compute[260022]: 2025-10-01 14:18:46.345 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 10:18:47 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2287: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:18:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 10:18:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 10:18:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 10:18:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 10:18:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 10:18:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 10:18:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] Optimize plan auto_2025-10-01_14:18:47
Oct  1 10:18:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  1 10:18:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] do_upmap
Oct  1 10:18:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] pools ['backups', 'volumes', '.rgw.root', 'default.rgw.control', '.mgr', 'cephfs.cephfs.meta', 'images', 'cephfs.cephfs.data', 'default.rgw.log', 'default.rgw.meta', 'vms']
Oct  1 10:18:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] prepared 0/10 changes
Oct  1 10:18:48 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e197 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 10:18:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  1 10:18:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  1 10:18:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  1 10:18:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  1 10:18:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  1 10:18:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  1 10:18:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  1 10:18:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  1 10:18:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  1 10:18:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  1 10:18:49 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2288: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:18:51 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2289: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:18:52 np0005464214 podman[315216]: 2025-10-01 14:18:52.512925234 +0000 UTC m=+0.063082574 container health_status c13c85560319b246b9406aed1b6b3015b2c424d3ffff37d0301b6634f5976b0d (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20250923, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=iscsid, container_name=iscsid, tcib_build_tag=36bccb96575468ec919301205d8daa2c, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct  1 10:18:52 np0005464214 podman[315215]: 2025-10-01 14:18:52.517744858 +0000 UTC m=+0.065508703 container health_status a1dc94033b3cdaf0c1939938a446bc6911aa0e2bf0fa0c22c3e618390e8d96e1 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250923, org.label-schema.license=GPLv2, container_name=multipathd, io.buildah.version=1.41.3, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=36bccb96575468ec919301205d8daa2c)
Oct  1 10:18:52 np0005464214 podman[315217]: 2025-10-01 14:18:52.538661861 +0000 UTC m=+0.076272213 container health_status dfa509645741d50db256c392f2c7e50ef8a1653f296eb853aefbe3d2ba4746c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20250923, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c)
Oct  1 10:18:52 np0005464214 podman[315214]: 2025-10-01 14:18:52.550551659 +0000 UTC m=+0.100849444 container health_status 583cde33fa111e3f81ff9a7adf51b7bee43cd123d54d60383b2d474b3b5066ad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20250923, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 10:18:53 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e197 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 10:18:53 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2290: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:18:55 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  1 10:18:55 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3019926358' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  1 10:18:55 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  1 10:18:55 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3019926358' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  1 10:18:55 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2291: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:18:57 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2292: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:18:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] _maybe_adjust
Oct  1 10:18:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 10:18:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  1 10:18:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 10:18:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 10:18:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 10:18:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 10:18:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 10:18:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 10:18:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 10:18:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Oct  1 10:18:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 10:18:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct  1 10:18:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 10:18:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 10:18:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 10:18:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  1 10:18:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 10:18:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  1 10:18:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 10:18:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 10:18:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 10:18:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  1 10:18:58 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e197 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 10:18:59 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2293: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:19:01 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2294: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:19:03 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e197 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 10:19:03 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2295: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:19:05 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2296: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:19:07 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2297: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:19:08 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e197 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 10:19:09 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2298: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:19:10 np0005464214 nova_compute[260022]: 2025-10-01 14:19:10.340 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 10:19:11 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2299: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:19:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 14:19:12.345 161890 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 10:19:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 14:19:12.346 161890 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 10:19:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 14:19:12.346 161890 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 10:19:13 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e197 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 10:19:13 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2300: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:19:15 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2301: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:19:17 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2302: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:19:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 10:19:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 10:19:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 10:19:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 10:19:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 10:19:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 10:19:18 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e197 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 10:19:19 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2303: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:19:19 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  1 10:19:19 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  1 10:19:19 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  1 10:19:19 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  1 10:19:19 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  1 10:19:19 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 10:19:19 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev 76926bb3-7b73-4fd6-bb74-67ad34fe2724 does not exist
Oct  1 10:19:19 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev db623ba0-484d-47cb-8773-37445ccdba3d does not exist
Oct  1 10:19:19 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev 18ec3e0c-bcdb-45ea-b7d7-7be58a71bf1a does not exist
Oct  1 10:19:19 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  1 10:19:19 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  1 10:19:19 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  1 10:19:19 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  1 10:19:19 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  1 10:19:19 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  1 10:19:20 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  1 10:19:20 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 10:19:20 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  1 10:19:20 np0005464214 podman[315568]: 2025-10-01 14:19:20.606116808 +0000 UTC m=+0.069238660 container create 38bbe72b486ac323c7f1397019adee38290fb195836bdaa1582599e2beed0085 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_dirac, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct  1 10:19:20 np0005464214 systemd[1]: Started libpod-conmon-38bbe72b486ac323c7f1397019adee38290fb195836bdaa1582599e2beed0085.scope.
Oct  1 10:19:20 np0005464214 podman[315568]: 2025-10-01 14:19:20.579667248 +0000 UTC m=+0.042789150 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 10:19:20 np0005464214 systemd[1]: Started libcrun container.
Oct  1 10:19:20 np0005464214 podman[315568]: 2025-10-01 14:19:20.695643801 +0000 UTC m=+0.158765623 container init 38bbe72b486ac323c7f1397019adee38290fb195836bdaa1582599e2beed0085 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_dirac, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Oct  1 10:19:20 np0005464214 podman[315568]: 2025-10-01 14:19:20.707819328 +0000 UTC m=+0.170941150 container start 38bbe72b486ac323c7f1397019adee38290fb195836bdaa1582599e2beed0085 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_dirac, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 10:19:20 np0005464214 podman[315568]: 2025-10-01 14:19:20.712503447 +0000 UTC m=+0.175625259 container attach 38bbe72b486ac323c7f1397019adee38290fb195836bdaa1582599e2beed0085 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_dirac, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 10:19:20 np0005464214 wonderful_dirac[315584]: 167 167
Oct  1 10:19:20 np0005464214 systemd[1]: libpod-38bbe72b486ac323c7f1397019adee38290fb195836bdaa1582599e2beed0085.scope: Deactivated successfully.
Oct  1 10:19:20 np0005464214 conmon[315584]: conmon 38bbe72b486ac323c7f1 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-38bbe72b486ac323c7f1397019adee38290fb195836bdaa1582599e2beed0085.scope/container/memory.events
Oct  1 10:19:20 np0005464214 podman[315568]: 2025-10-01 14:19:20.715591315 +0000 UTC m=+0.178713137 container died 38bbe72b486ac323c7f1397019adee38290fb195836bdaa1582599e2beed0085 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_dirac, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 10:19:20 np0005464214 systemd[1]: var-lib-containers-storage-overlay-f77f3c0002cc8c4a54a6dc0564c1972af193999cdf2186346d3fcd148b1a1ce1-merged.mount: Deactivated successfully.
Oct  1 10:19:20 np0005464214 podman[315568]: 2025-10-01 14:19:20.766332967 +0000 UTC m=+0.229454809 container remove 38bbe72b486ac323c7f1397019adee38290fb195836bdaa1582599e2beed0085 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_dirac, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 10:19:20 np0005464214 systemd[1]: libpod-conmon-38bbe72b486ac323c7f1397019adee38290fb195836bdaa1582599e2beed0085.scope: Deactivated successfully.
Oct  1 10:19:20 np0005464214 podman[315607]: 
Oct  1 10:19:21 np0005464214 systemd[1]: Started libpod-conmon-38d99753b630db6663983f40f0f9c7d1fdd45ef1b9b6ceb92d94e2d3a1219465.scope.
Oct  1 10:19:21 np0005464214 podman[315607]: 2025-10-01 14:19:20.974624873 +0000 UTC m=+0.028007841 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 10:19:21 np0005464214 systemd[1]: Started libcrun container.
Oct  1 10:19:21 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a5b87af07901e7f5814b75d4a3b51a19bedd046225514beb923ddc7194ac4c9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 10:19:21 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a5b87af07901e7f5814b75d4a3b51a19bedd046225514beb923ddc7194ac4c9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 10:19:21 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a5b87af07901e7f5814b75d4a3b51a19bedd046225514beb923ddc7194ac4c9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 10:19:21 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a5b87af07901e7f5814b75d4a3b51a19bedd046225514beb923ddc7194ac4c9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 10:19:21 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a5b87af07901e7f5814b75d4a3b51a19bedd046225514beb923ddc7194ac4c9/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  1 10:19:21 np0005464214 podman[315607]: 2025-10-01 14:19:21.090456381 +0000 UTC m=+0.143839339 container init 38d99753b630db6663983f40f0f9c7d1fdd45ef1b9b6ceb92d94e2d3a1219465 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_curran, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct  1 10:19:21 np0005464214 podman[315607]: 2025-10-01 14:19:21.10209107 +0000 UTC m=+0.155474018 container start 38d99753b630db6663983f40f0f9c7d1fdd45ef1b9b6ceb92d94e2d3a1219465 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_curran, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 10:19:21 np0005464214 podman[315607]: 2025-10-01 14:19:21.10678509 +0000 UTC m=+0.160168038 container attach 38d99753b630db6663983f40f0f9c7d1fdd45ef1b9b6ceb92d94e2d3a1219465 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_curran, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 10:19:21 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2304: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:19:22 np0005464214 brave_curran[315623]: --> passed data devices: 0 physical, 3 LVM
Oct  1 10:19:22 np0005464214 brave_curran[315623]: --> relative data size: 1.0
Oct  1 10:19:22 np0005464214 brave_curran[315623]: --> All data devices are unavailable
Oct  1 10:19:22 np0005464214 systemd[1]: libpod-38d99753b630db6663983f40f0f9c7d1fdd45ef1b9b6ceb92d94e2d3a1219465.scope: Deactivated successfully.
Oct  1 10:19:22 np0005464214 systemd[1]: libpod-38d99753b630db6663983f40f0f9c7d1fdd45ef1b9b6ceb92d94e2d3a1219465.scope: Consumed 1.086s CPU time.
Oct  1 10:19:22 np0005464214 podman[315607]: 2025-10-01 14:19:22.228141655 +0000 UTC m=+1.281524633 container died 38d99753b630db6663983f40f0f9c7d1fdd45ef1b9b6ceb92d94e2d3a1219465 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_curran, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 10:19:22 np0005464214 systemd[1]: var-lib-containers-storage-overlay-0a5b87af07901e7f5814b75d4a3b51a19bedd046225514beb923ddc7194ac4c9-merged.mount: Deactivated successfully.
Oct  1 10:19:22 np0005464214 podman[315607]: 2025-10-01 14:19:22.297915751 +0000 UTC m=+1.351298739 container remove 38d99753b630db6663983f40f0f9c7d1fdd45ef1b9b6ceb92d94e2d3a1219465 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_curran, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Oct  1 10:19:22 np0005464214 systemd[1]: libpod-conmon-38d99753b630db6663983f40f0f9c7d1fdd45ef1b9b6ceb92d94e2d3a1219465.scope: Deactivated successfully.
Oct  1 10:19:22 np0005464214 podman[315741]: 2025-10-01 14:19:22.687711751 +0000 UTC m=+0.070287363 container health_status a1dc94033b3cdaf0c1939938a446bc6911aa0e2bf0fa0c22c3e618390e8d96e1 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.build-date=20250923, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=multipathd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=36bccb96575468ec919301205d8daa2c)
Oct  1 10:19:22 np0005464214 podman[315743]: 2025-10-01 14:19:22.687897587 +0000 UTC m=+0.071224193 container health_status dfa509645741d50db256c392f2c7e50ef8a1653f296eb853aefbe3d2ba4746c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20250923)
Oct  1 10:19:22 np0005464214 podman[315742]: 2025-10-01 14:19:22.709467742 +0000 UTC m=+0.093695196 container health_status c13c85560319b246b9406aed1b6b3015b2c424d3ffff37d0301b6634f5976b0d (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, org.label-schema.build-date=20250923, tcib_managed=true, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct  1 10:19:22 np0005464214 podman[315740]: 2025-10-01 14:19:22.711669272 +0000 UTC m=+0.095309999 container health_status 583cde33fa111e3f81ff9a7adf51b7bee43cd123d54d60383b2d474b3b5066ad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, tcib_build_tag=36bccb96575468ec919301205d8daa2c, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20250923, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller)
Oct  1 10:19:23 np0005464214 podman[315888]: 2025-10-01 14:19:23.038770481 +0000 UTC m=+0.061511355 container create 37768986a38727dc018a01e8a4d2943f8f679e7690a077d6d6eff9ae4d934476 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_chaplygin, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef)
Oct  1 10:19:23 np0005464214 systemd[1]: Started libpod-conmon-37768986a38727dc018a01e8a4d2943f8f679e7690a077d6d6eff9ae4d934476.scope.
Oct  1 10:19:23 np0005464214 podman[315888]: 2025-10-01 14:19:23.007684384 +0000 UTC m=+0.030425318 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 10:19:23 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e197 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 10:19:23 np0005464214 systemd[1]: Started libcrun container.
Oct  1 10:19:23 np0005464214 podman[315888]: 2025-10-01 14:19:23.205137745 +0000 UTC m=+0.227878679 container init 37768986a38727dc018a01e8a4d2943f8f679e7690a077d6d6eff9ae4d934476 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_chaplygin, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 10:19:23 np0005464214 podman[315888]: 2025-10-01 14:19:23.216210486 +0000 UTC m=+0.238951330 container start 37768986a38727dc018a01e8a4d2943f8f679e7690a077d6d6eff9ae4d934476 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_chaplygin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 10:19:23 np0005464214 romantic_chaplygin[315904]: 167 167
Oct  1 10:19:23 np0005464214 systemd[1]: libpod-37768986a38727dc018a01e8a4d2943f8f679e7690a077d6d6eff9ae4d934476.scope: Deactivated successfully.
Oct  1 10:19:23 np0005464214 conmon[315904]: conmon 37768986a38727dc018a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-37768986a38727dc018a01e8a4d2943f8f679e7690a077d6d6eff9ae4d934476.scope/container/memory.events
Oct  1 10:19:23 np0005464214 podman[315888]: 2025-10-01 14:19:23.226872065 +0000 UTC m=+0.249612969 container attach 37768986a38727dc018a01e8a4d2943f8f679e7690a077d6d6eff9ae4d934476 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_chaplygin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 10:19:23 np0005464214 podman[315888]: 2025-10-01 14:19:23.229148878 +0000 UTC m=+0.251889732 container died 37768986a38727dc018a01e8a4d2943f8f679e7690a077d6d6eff9ae4d934476 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_chaplygin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 10:19:23 np0005464214 systemd[1]: var-lib-containers-storage-overlay-89c33736adf664b5573e997b7a65d1df2e092a9ceeadc73aa175c9b53fa83dbb-merged.mount: Deactivated successfully.
Oct  1 10:19:23 np0005464214 podman[315888]: 2025-10-01 14:19:23.299823472 +0000 UTC m=+0.322564316 container remove 37768986a38727dc018a01e8a4d2943f8f679e7690a077d6d6eff9ae4d934476 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_chaplygin, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 10:19:23 np0005464214 systemd[1]: libpod-conmon-37768986a38727dc018a01e8a4d2943f8f679e7690a077d6d6eff9ae4d934476.scope: Deactivated successfully.
Oct  1 10:19:23 np0005464214 podman[315931]: 2025-10-01 14:19:23.460786065 +0000 UTC m=+0.040730546 container create 3cab1e35d2f50c801e642cae5e1079eaf5f48573e8026e7188386581a261bc9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_mayer, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Oct  1 10:19:23 np0005464214 systemd[1]: Started libpod-conmon-3cab1e35d2f50c801e642cae5e1079eaf5f48573e8026e7188386581a261bc9e.scope.
Oct  1 10:19:23 np0005464214 systemd[1]: Started libcrun container.
Oct  1 10:19:23 np0005464214 podman[315931]: 2025-10-01 14:19:23.444651512 +0000 UTC m=+0.024596013 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 10:19:23 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba828791877e396b941db3fe5a04b0bda76a34cd163925f4eb707cd57694c87c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 10:19:23 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba828791877e396b941db3fe5a04b0bda76a34cd163925f4eb707cd57694c87c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 10:19:23 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba828791877e396b941db3fe5a04b0bda76a34cd163925f4eb707cd57694c87c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 10:19:23 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba828791877e396b941db3fe5a04b0bda76a34cd163925f4eb707cd57694c87c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 10:19:23 np0005464214 podman[315931]: 2025-10-01 14:19:23.557162485 +0000 UTC m=+0.137107016 container init 3cab1e35d2f50c801e642cae5e1079eaf5f48573e8026e7188386581a261bc9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_mayer, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct  1 10:19:23 np0005464214 podman[315931]: 2025-10-01 14:19:23.572026657 +0000 UTC m=+0.151971148 container start 3cab1e35d2f50c801e642cae5e1079eaf5f48573e8026e7188386581a261bc9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_mayer, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Oct  1 10:19:23 np0005464214 podman[315931]: 2025-10-01 14:19:23.575939832 +0000 UTC m=+0.155884423 container attach 3cab1e35d2f50c801e642cae5e1079eaf5f48573e8026e7188386581a261bc9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_mayer, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct  1 10:19:23 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2305: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:19:24 np0005464214 condescending_mayer[315948]: {
Oct  1 10:19:24 np0005464214 condescending_mayer[315948]:    "0": [
Oct  1 10:19:24 np0005464214 condescending_mayer[315948]:        {
Oct  1 10:19:24 np0005464214 condescending_mayer[315948]:            "devices": [
Oct  1 10:19:24 np0005464214 condescending_mayer[315948]:                "/dev/loop3"
Oct  1 10:19:24 np0005464214 condescending_mayer[315948]:            ],
Oct  1 10:19:24 np0005464214 condescending_mayer[315948]:            "lv_name": "ceph_lv0",
Oct  1 10:19:24 np0005464214 condescending_mayer[315948]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  1 10:19:24 np0005464214 condescending_mayer[315948]:            "lv_size": "21470642176",
Oct  1 10:19:24 np0005464214 condescending_mayer[315948]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 10:19:24 np0005464214 condescending_mayer[315948]:            "lv_uuid": "wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS",
Oct  1 10:19:24 np0005464214 condescending_mayer[315948]:            "name": "ceph_lv0",
Oct  1 10:19:24 np0005464214 condescending_mayer[315948]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  1 10:19:24 np0005464214 condescending_mayer[315948]:            "tags": {
Oct  1 10:19:24 np0005464214 condescending_mayer[315948]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  1 10:19:24 np0005464214 condescending_mayer[315948]:                "ceph.block_uuid": "wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS",
Oct  1 10:19:24 np0005464214 condescending_mayer[315948]:                "ceph.cephx_lockbox_secret": "",
Oct  1 10:19:24 np0005464214 condescending_mayer[315948]:                "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 10:19:24 np0005464214 condescending_mayer[315948]:                "ceph.cluster_name": "ceph",
Oct  1 10:19:24 np0005464214 condescending_mayer[315948]:                "ceph.crush_device_class": "",
Oct  1 10:19:24 np0005464214 condescending_mayer[315948]:                "ceph.encrypted": "0",
Oct  1 10:19:24 np0005464214 condescending_mayer[315948]:                "ceph.osd_fsid": "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982",
Oct  1 10:19:24 np0005464214 condescending_mayer[315948]:                "ceph.osd_id": "0",
Oct  1 10:19:24 np0005464214 condescending_mayer[315948]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 10:19:24 np0005464214 condescending_mayer[315948]:                "ceph.type": "block",
Oct  1 10:19:24 np0005464214 condescending_mayer[315948]:                "ceph.vdo": "0"
Oct  1 10:19:24 np0005464214 condescending_mayer[315948]:            },
Oct  1 10:19:24 np0005464214 condescending_mayer[315948]:            "type": "block",
Oct  1 10:19:24 np0005464214 condescending_mayer[315948]:            "vg_name": "ceph_vg0"
Oct  1 10:19:24 np0005464214 condescending_mayer[315948]:        }
Oct  1 10:19:24 np0005464214 condescending_mayer[315948]:    ],
Oct  1 10:19:24 np0005464214 condescending_mayer[315948]:    "1": [
Oct  1 10:19:24 np0005464214 condescending_mayer[315948]:        {
Oct  1 10:19:24 np0005464214 condescending_mayer[315948]:            "devices": [
Oct  1 10:19:24 np0005464214 condescending_mayer[315948]:                "/dev/loop4"
Oct  1 10:19:24 np0005464214 condescending_mayer[315948]:            ],
Oct  1 10:19:24 np0005464214 condescending_mayer[315948]:            "lv_name": "ceph_lv1",
Oct  1 10:19:24 np0005464214 condescending_mayer[315948]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  1 10:19:24 np0005464214 condescending_mayer[315948]:            "lv_size": "21470642176",
Oct  1 10:19:24 np0005464214 condescending_mayer[315948]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f5852bc7-e830-489a-b8a9-42dfbbe71426,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 10:19:24 np0005464214 condescending_mayer[315948]:            "lv_uuid": "BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY",
Oct  1 10:19:24 np0005464214 condescending_mayer[315948]:            "name": "ceph_lv1",
Oct  1 10:19:24 np0005464214 condescending_mayer[315948]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  1 10:19:24 np0005464214 condescending_mayer[315948]:            "tags": {
Oct  1 10:19:24 np0005464214 condescending_mayer[315948]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  1 10:19:24 np0005464214 condescending_mayer[315948]:                "ceph.block_uuid": "BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY",
Oct  1 10:19:24 np0005464214 condescending_mayer[315948]:                "ceph.cephx_lockbox_secret": "",
Oct  1 10:19:24 np0005464214 condescending_mayer[315948]:                "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 10:19:24 np0005464214 condescending_mayer[315948]:                "ceph.cluster_name": "ceph",
Oct  1 10:19:24 np0005464214 condescending_mayer[315948]:                "ceph.crush_device_class": "",
Oct  1 10:19:24 np0005464214 condescending_mayer[315948]:                "ceph.encrypted": "0",
Oct  1 10:19:24 np0005464214 condescending_mayer[315948]:                "ceph.osd_fsid": "f5852bc7-e830-489a-b8a9-42dfbbe71426",
Oct  1 10:19:24 np0005464214 condescending_mayer[315948]:                "ceph.osd_id": "1",
Oct  1 10:19:24 np0005464214 condescending_mayer[315948]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 10:19:24 np0005464214 condescending_mayer[315948]:                "ceph.type": "block",
Oct  1 10:19:24 np0005464214 condescending_mayer[315948]:                "ceph.vdo": "0"
Oct  1 10:19:24 np0005464214 condescending_mayer[315948]:            },
Oct  1 10:19:24 np0005464214 condescending_mayer[315948]:            "type": "block",
Oct  1 10:19:24 np0005464214 condescending_mayer[315948]:            "vg_name": "ceph_vg1"
Oct  1 10:19:24 np0005464214 condescending_mayer[315948]:        }
Oct  1 10:19:24 np0005464214 condescending_mayer[315948]:    ],
Oct  1 10:19:24 np0005464214 condescending_mayer[315948]:    "2": [
Oct  1 10:19:24 np0005464214 condescending_mayer[315948]:        {
Oct  1 10:19:24 np0005464214 condescending_mayer[315948]:            "devices": [
Oct  1 10:19:24 np0005464214 condescending_mayer[315948]:                "/dev/loop5"
Oct  1 10:19:24 np0005464214 condescending_mayer[315948]:            ],
Oct  1 10:19:24 np0005464214 condescending_mayer[315948]:            "lv_name": "ceph_lv2",
Oct  1 10:19:24 np0005464214 condescending_mayer[315948]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  1 10:19:24 np0005464214 condescending_mayer[315948]:            "lv_size": "21470642176",
Oct  1 10:19:24 np0005464214 condescending_mayer[315948]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c4c937e2-a8a8-47c3-af37-fdedb6fff1f9,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 10:19:24 np0005464214 condescending_mayer[315948]:            "lv_uuid": "mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK",
Oct  1 10:19:24 np0005464214 condescending_mayer[315948]:            "name": "ceph_lv2",
Oct  1 10:19:24 np0005464214 condescending_mayer[315948]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  1 10:19:24 np0005464214 condescending_mayer[315948]:            "tags": {
Oct  1 10:19:24 np0005464214 condescending_mayer[315948]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  1 10:19:24 np0005464214 condescending_mayer[315948]:                "ceph.block_uuid": "mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK",
Oct  1 10:19:24 np0005464214 condescending_mayer[315948]:                "ceph.cephx_lockbox_secret": "",
Oct  1 10:19:24 np0005464214 condescending_mayer[315948]:                "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 10:19:24 np0005464214 condescending_mayer[315948]:                "ceph.cluster_name": "ceph",
Oct  1 10:19:24 np0005464214 condescending_mayer[315948]:                "ceph.crush_device_class": "",
Oct  1 10:19:24 np0005464214 condescending_mayer[315948]:                "ceph.encrypted": "0",
Oct  1 10:19:24 np0005464214 condescending_mayer[315948]:                "ceph.osd_fsid": "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9",
Oct  1 10:19:24 np0005464214 condescending_mayer[315948]:                "ceph.osd_id": "2",
Oct  1 10:19:24 np0005464214 condescending_mayer[315948]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 10:19:24 np0005464214 condescending_mayer[315948]:                "ceph.type": "block",
Oct  1 10:19:24 np0005464214 condescending_mayer[315948]:                "ceph.vdo": "0"
Oct  1 10:19:24 np0005464214 condescending_mayer[315948]:            },
Oct  1 10:19:24 np0005464214 condescending_mayer[315948]:            "type": "block",
Oct  1 10:19:24 np0005464214 condescending_mayer[315948]:            "vg_name": "ceph_vg2"
Oct  1 10:19:24 np0005464214 condescending_mayer[315948]:        }
Oct  1 10:19:24 np0005464214 condescending_mayer[315948]:    ]
Oct  1 10:19:24 np0005464214 condescending_mayer[315948]: }
Oct  1 10:19:24 np0005464214 systemd[1]: libpod-3cab1e35d2f50c801e642cae5e1079eaf5f48573e8026e7188386581a261bc9e.scope: Deactivated successfully.
Oct  1 10:19:24 np0005464214 podman[315931]: 2025-10-01 14:19:24.385752591 +0000 UTC m=+0.965697093 container died 3cab1e35d2f50c801e642cae5e1079eaf5f48573e8026e7188386581a261bc9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_mayer, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct  1 10:19:24 np0005464214 systemd[1]: var-lib-containers-storage-overlay-ba828791877e396b941db3fe5a04b0bda76a34cd163925f4eb707cd57694c87c-merged.mount: Deactivated successfully.
Oct  1 10:19:24 np0005464214 podman[315931]: 2025-10-01 14:19:24.443563478 +0000 UTC m=+1.023507959 container remove 3cab1e35d2f50c801e642cae5e1079eaf5f48573e8026e7188386581a261bc9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_mayer, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 10:19:24 np0005464214 systemd[1]: libpod-conmon-3cab1e35d2f50c801e642cae5e1079eaf5f48573e8026e7188386581a261bc9e.scope: Deactivated successfully.
Oct  1 10:19:25 np0005464214 podman[316111]: 2025-10-01 14:19:25.239808607 +0000 UTC m=+0.069421256 container create 10b1bb8a7aa8174bf1d3d3e172747f3e6aa37c858aad389ebac37b777b4e1cd7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_khorana, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 10:19:25 np0005464214 systemd[1]: Started libpod-conmon-10b1bb8a7aa8174bf1d3d3e172747f3e6aa37c858aad389ebac37b777b4e1cd7.scope.
Oct  1 10:19:25 np0005464214 podman[316111]: 2025-10-01 14:19:25.214686529 +0000 UTC m=+0.044299248 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 10:19:25 np0005464214 systemd[1]: Started libcrun container.
Oct  1 10:19:25 np0005464214 podman[316111]: 2025-10-01 14:19:25.325854881 +0000 UTC m=+0.155467610 container init 10b1bb8a7aa8174bf1d3d3e172747f3e6aa37c858aad389ebac37b777b4e1cd7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_khorana, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Oct  1 10:19:25 np0005464214 podman[316111]: 2025-10-01 14:19:25.337805709 +0000 UTC m=+0.167418338 container start 10b1bb8a7aa8174bf1d3d3e172747f3e6aa37c858aad389ebac37b777b4e1cd7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_khorana, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 10:19:25 np0005464214 podman[316111]: 2025-10-01 14:19:25.342388636 +0000 UTC m=+0.172001315 container attach 10b1bb8a7aa8174bf1d3d3e172747f3e6aa37c858aad389ebac37b777b4e1cd7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_khorana, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Oct  1 10:19:25 np0005464214 distracted_khorana[316127]: 167 167
Oct  1 10:19:25 np0005464214 systemd[1]: libpod-10b1bb8a7aa8174bf1d3d3e172747f3e6aa37c858aad389ebac37b777b4e1cd7.scope: Deactivated successfully.
Oct  1 10:19:25 np0005464214 podman[316111]: 2025-10-01 14:19:25.344845434 +0000 UTC m=+0.174458113 container died 10b1bb8a7aa8174bf1d3d3e172747f3e6aa37c858aad389ebac37b777b4e1cd7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_khorana, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 10:19:25 np0005464214 systemd[1]: var-lib-containers-storage-overlay-9f610ea00ad54d50ae03def47842198f98c042a33cc1563103edd939d3f096b6-merged.mount: Deactivated successfully.
Oct  1 10:19:25 np0005464214 podman[316111]: 2025-10-01 14:19:25.387309762 +0000 UTC m=+0.216922421 container remove 10b1bb8a7aa8174bf1d3d3e172747f3e6aa37c858aad389ebac37b777b4e1cd7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_khorana, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 10:19:25 np0005464214 systemd[1]: libpod-conmon-10b1bb8a7aa8174bf1d3d3e172747f3e6aa37c858aad389ebac37b777b4e1cd7.scope: Deactivated successfully.
Oct  1 10:19:25 np0005464214 podman[316150]: 2025-10-01 14:19:25.638753648 +0000 UTC m=+0.059521082 container create 2c1a1206ba1904eb3fc165c0ec731da68f1fb99226e8bda8317ae9a25d4e4d51 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_cartwright, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct  1 10:19:25 np0005464214 systemd[1]: Started libpod-conmon-2c1a1206ba1904eb3fc165c0ec731da68f1fb99226e8bda8317ae9a25d4e4d51.scope.
Oct  1 10:19:25 np0005464214 podman[316150]: 2025-10-01 14:19:25.618926008 +0000 UTC m=+0.039693552 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 10:19:25 np0005464214 systemd[1]: Started libcrun container.
Oct  1 10:19:25 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e83e363c7853e518b497d21fec6970fa3ad1dce2d6041f40558b9c3cd2a69aa9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 10:19:25 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e83e363c7853e518b497d21fec6970fa3ad1dce2d6041f40558b9c3cd2a69aa9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 10:19:25 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e83e363c7853e518b497d21fec6970fa3ad1dce2d6041f40558b9c3cd2a69aa9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 10:19:25 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e83e363c7853e518b497d21fec6970fa3ad1dce2d6041f40558b9c3cd2a69aa9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 10:19:25 np0005464214 podman[316150]: 2025-10-01 14:19:25.737321349 +0000 UTC m=+0.158088873 container init 2c1a1206ba1904eb3fc165c0ec731da68f1fb99226e8bda8317ae9a25d4e4d51 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_cartwright, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 10:19:25 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2306: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:19:25 np0005464214 podman[316150]: 2025-10-01 14:19:25.753559654 +0000 UTC m=+0.174327148 container start 2c1a1206ba1904eb3fc165c0ec731da68f1fb99226e8bda8317ae9a25d4e4d51 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_cartwright, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 10:19:25 np0005464214 podman[316150]: 2025-10-01 14:19:25.75753975 +0000 UTC m=+0.178307234 container attach 2c1a1206ba1904eb3fc165c0ec731da68f1fb99226e8bda8317ae9a25d4e4d51 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_cartwright, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2)
Oct  1 10:19:26 np0005464214 elastic_cartwright[316166]: {
Oct  1 10:19:26 np0005464214 elastic_cartwright[316166]:    "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982": {
Oct  1 10:19:26 np0005464214 elastic_cartwright[316166]:        "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 10:19:26 np0005464214 elastic_cartwright[316166]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  1 10:19:26 np0005464214 elastic_cartwright[316166]:        "osd_id": 0,
Oct  1 10:19:26 np0005464214 elastic_cartwright[316166]:        "osd_uuid": "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982",
Oct  1 10:19:26 np0005464214 elastic_cartwright[316166]:        "type": "bluestore"
Oct  1 10:19:26 np0005464214 elastic_cartwright[316166]:    },
Oct  1 10:19:26 np0005464214 elastic_cartwright[316166]:    "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9": {
Oct  1 10:19:26 np0005464214 elastic_cartwright[316166]:        "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 10:19:26 np0005464214 elastic_cartwright[316166]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  1 10:19:26 np0005464214 elastic_cartwright[316166]:        "osd_id": 2,
Oct  1 10:19:26 np0005464214 elastic_cartwright[316166]:        "osd_uuid": "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9",
Oct  1 10:19:26 np0005464214 elastic_cartwright[316166]:        "type": "bluestore"
Oct  1 10:19:26 np0005464214 elastic_cartwright[316166]:    },
Oct  1 10:19:26 np0005464214 elastic_cartwright[316166]:    "f5852bc7-e830-489a-b8a9-42dfbbe71426": {
Oct  1 10:19:26 np0005464214 elastic_cartwright[316166]:        "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 10:19:26 np0005464214 elastic_cartwright[316166]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  1 10:19:26 np0005464214 elastic_cartwright[316166]:        "osd_id": 1,
Oct  1 10:19:26 np0005464214 elastic_cartwright[316166]:        "osd_uuid": "f5852bc7-e830-489a-b8a9-42dfbbe71426",
Oct  1 10:19:26 np0005464214 elastic_cartwright[316166]:        "type": "bluestore"
Oct  1 10:19:26 np0005464214 elastic_cartwright[316166]:    }
Oct  1 10:19:26 np0005464214 elastic_cartwright[316166]: }
Oct  1 10:19:26 np0005464214 systemd[1]: libpod-2c1a1206ba1904eb3fc165c0ec731da68f1fb99226e8bda8317ae9a25d4e4d51.scope: Deactivated successfully.
Oct  1 10:19:26 np0005464214 podman[316150]: 2025-10-01 14:19:26.849861723 +0000 UTC m=+1.270629237 container died 2c1a1206ba1904eb3fc165c0ec731da68f1fb99226e8bda8317ae9a25d4e4d51 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_cartwright, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct  1 10:19:26 np0005464214 systemd[1]: libpod-2c1a1206ba1904eb3fc165c0ec731da68f1fb99226e8bda8317ae9a25d4e4d51.scope: Consumed 1.103s CPU time.
Oct  1 10:19:26 np0005464214 systemd[1]: var-lib-containers-storage-overlay-e83e363c7853e518b497d21fec6970fa3ad1dce2d6041f40558b9c3cd2a69aa9-merged.mount: Deactivated successfully.
Oct  1 10:19:26 np0005464214 podman[316150]: 2025-10-01 14:19:26.928876993 +0000 UTC m=+1.349644477 container remove 2c1a1206ba1904eb3fc165c0ec731da68f1fb99226e8bda8317ae9a25d4e4d51 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_cartwright, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct  1 10:19:26 np0005464214 systemd[1]: libpod-conmon-2c1a1206ba1904eb3fc165c0ec731da68f1fb99226e8bda8317ae9a25d4e4d51.scope: Deactivated successfully.
Oct  1 10:19:26 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  1 10:19:26 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 10:19:26 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  1 10:19:26 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 10:19:26 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev 2f972b3c-ed96-48de-86f1-09b9760b37b9 does not exist
Oct  1 10:19:26 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev 95191127-b778-4538-b244-7fdfe0c91661 does not exist
Oct  1 10:19:27 np0005464214 nova_compute[260022]: 2025-10-01 14:19:27.344 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 10:19:27 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 10:19:27 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 10:19:27 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2307: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:19:28 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e197 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 10:19:28 np0005464214 ceph-mon[74802]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #108. Immutable memtables: 0.
Oct  1 10:19:28 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:19:28.467848) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  1 10:19:28 np0005464214 ceph-mon[74802]: rocksdb: [db/flush_job.cc:856] [default] [JOB 63] Flushing memtable with next log file: 108
Oct  1 10:19:28 np0005464214 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759328368467897, "job": 63, "event": "flush_started", "num_memtables": 1, "num_entries": 1772, "num_deletes": 251, "total_data_size": 2909117, "memory_usage": 2957648, "flush_reason": "Manual Compaction"}
Oct  1 10:19:28 np0005464214 ceph-mon[74802]: rocksdb: [db/flush_job.cc:885] [default] [JOB 63] Level-0 flush table #109: started
Oct  1 10:19:28 np0005464214 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759328368486688, "cf_name": "default", "job": 63, "event": "table_file_creation", "file_number": 109, "file_size": 2859320, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 44903, "largest_seqno": 46674, "table_properties": {"data_size": 2851079, "index_size": 5055, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2117, "raw_key_size": 16443, "raw_average_key_size": 19, "raw_value_size": 2834800, "raw_average_value_size": 3444, "num_data_blocks": 225, "num_entries": 823, "num_filter_entries": 823, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759328174, "oldest_key_time": 1759328174, "file_creation_time": 1759328368, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e1fb0bd7-c6bf-40f2-8bfb-ffd039289f42", "db_session_id": "NJZTWL88H5HSB4Q4NEC9", "orig_file_number": 109, "seqno_to_time_mapping": "N/A"}}
Oct  1 10:19:28 np0005464214 ceph-mon[74802]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 63] Flush lasted 18943 microseconds, and 10566 cpu microseconds.
Oct  1 10:19:28 np0005464214 ceph-mon[74802]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  1 10:19:28 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:19:28.486789) [db/flush_job.cc:967] [default] [JOB 63] Level-0 flush table #109: 2859320 bytes OK
Oct  1 10:19:28 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:19:28.486811) [db/memtable_list.cc:519] [default] Level-0 commit table #109 started
Oct  1 10:19:28 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:19:28.488769) [db/memtable_list.cc:722] [default] Level-0 commit table #109: memtable #1 done
Oct  1 10:19:28 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:19:28.488783) EVENT_LOG_v1 {"time_micros": 1759328368488779, "job": 63, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  1 10:19:28 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:19:28.488801) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  1 10:19:28 np0005464214 ceph-mon[74802]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 63] Try to delete WAL files size 2901574, prev total WAL file size 2901574, number of live WAL files 2.
Oct  1 10:19:28 np0005464214 ceph-mon[74802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000105.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  1 10:19:28 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:19:28.489687) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730034323637' seq:72057594037927935, type:22 .. '7061786F730034353139' seq:0, type:0; will stop at (end)
Oct  1 10:19:28 np0005464214 ceph-mon[74802]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 64] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  1 10:19:28 np0005464214 ceph-mon[74802]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 63 Base level 0, inputs: [109(2792KB)], [107(6963KB)]
Oct  1 10:19:28 np0005464214 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759328368489712, "job": 64, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [109], "files_L6": [107], "score": -1, "input_data_size": 9990000, "oldest_snapshot_seqno": -1}
Oct  1 10:19:28 np0005464214 ceph-mon[74802]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 64] Generated table #110: 6205 keys, 8225608 bytes, temperature: kUnknown
Oct  1 10:19:28 np0005464214 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759328368534272, "cf_name": "default", "job": 64, "event": "table_file_creation", "file_number": 110, "file_size": 8225608, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8185813, "index_size": 23173, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 15557, "raw_key_size": 161340, "raw_average_key_size": 26, "raw_value_size": 8075008, "raw_average_value_size": 1301, "num_data_blocks": 916, "num_entries": 6205, "num_filter_entries": 6205, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759324077, "oldest_key_time": 0, "file_creation_time": 1759328368, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e1fb0bd7-c6bf-40f2-8bfb-ffd039289f42", "db_session_id": "NJZTWL88H5HSB4Q4NEC9", "orig_file_number": 110, "seqno_to_time_mapping": "N/A"}}
Oct  1 10:19:28 np0005464214 ceph-mon[74802]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  1 10:19:28 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:19:28.534629) [db/compaction/compaction_job.cc:1663] [default] [JOB 64] Compacted 1@0 + 1@6 files to L6 => 8225608 bytes
Oct  1 10:19:28 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:19:28.536389) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 223.1 rd, 183.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.7, 6.8 +0.0 blob) out(7.8 +0.0 blob), read-write-amplify(6.4) write-amplify(2.9) OK, records in: 6719, records dropped: 514 output_compression: NoCompression
Oct  1 10:19:28 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:19:28.536405) EVENT_LOG_v1 {"time_micros": 1759328368536397, "job": 64, "event": "compaction_finished", "compaction_time_micros": 44771, "compaction_time_cpu_micros": 26260, "output_level": 6, "num_output_files": 1, "total_output_size": 8225608, "num_input_records": 6719, "num_output_records": 6205, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  1 10:19:28 np0005464214 ceph-mon[74802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000109.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  1 10:19:28 np0005464214 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759328368537348, "job": 64, "event": "table_file_deletion", "file_number": 109}
Oct  1 10:19:28 np0005464214 ceph-mon[74802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000107.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  1 10:19:28 np0005464214 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759328368539046, "job": 64, "event": "table_file_deletion", "file_number": 107}
Oct  1 10:19:28 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:19:28.489628) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 10:19:28 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:19:28.539152) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 10:19:28 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:19:28.539156) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 10:19:28 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:19:28.539158) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 10:19:28 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:19:28.539160) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 10:19:28 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:19:28.539162) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 10:19:29 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2308: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:19:31 np0005464214 nova_compute[260022]: 2025-10-01 14:19:31.345 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 10:19:31 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2309: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:19:31 np0005464214 nova_compute[260022]: 2025-10-01 14:19:31.845 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 10:19:31 np0005464214 nova_compute[260022]: 2025-10-01 14:19:31.845 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 10:19:31 np0005464214 nova_compute[260022]: 2025-10-01 14:19:31.846 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 10:19:31 np0005464214 nova_compute[260022]: 2025-10-01 14:19:31.846 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  1 10:19:31 np0005464214 nova_compute[260022]: 2025-10-01 14:19:31.846 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 10:19:32 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  1 10:19:32 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/557468855' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  1 10:19:32 np0005464214 nova_compute[260022]: 2025-10-01 14:19:32.307 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 10:19:32 np0005464214 nova_compute[260022]: 2025-10-01 14:19:32.503 2 WARNING nova.virt.libvirt.driver [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  1 10:19:32 np0005464214 nova_compute[260022]: 2025-10-01 14:19:32.505 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4978MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  1 10:19:32 np0005464214 nova_compute[260022]: 2025-10-01 14:19:32.505 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 10:19:32 np0005464214 nova_compute[260022]: 2025-10-01 14:19:32.505 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 10:19:32 np0005464214 nova_compute[260022]: 2025-10-01 14:19:32.643 2 INFO nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Instance 3d91db2b-812b-47ea-a0f8-0384b4c68597 has allocations against this compute host but is not found in the database.#033[00m
Oct  1 10:19:32 np0005464214 nova_compute[260022]: 2025-10-01 14:19:32.660 2 INFO nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Instance 84424f30-80d3-425e-b60f-86809ad3c076 has allocations against this compute host but is not found in the database.#033[00m
Oct  1 10:19:32 np0005464214 nova_compute[260022]: 2025-10-01 14:19:32.661 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  1 10:19:32 np0005464214 nova_compute[260022]: 2025-10-01 14:19:32.661 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  1 10:19:32 np0005464214 nova_compute[260022]: 2025-10-01 14:19:32.714 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 10:19:33 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e197 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 10:19:33 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  1 10:19:33 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3685988063' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  1 10:19:33 np0005464214 nova_compute[260022]: 2025-10-01 14:19:33.186 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.472s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 10:19:33 np0005464214 nova_compute[260022]: 2025-10-01 14:19:33.194 2 DEBUG nova.compute.provider_tree [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Inventory has not changed in ProviderTree for provider: c1b9017d-7e6f-44ea-9ee2-bc19313d736f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  1 10:19:33 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2310: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:19:34 np0005464214 nova_compute[260022]: 2025-10-01 14:19:34.628 2 DEBUG nova.scheduler.client.report [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Inventory has not changed for provider c1b9017d-7e6f-44ea-9ee2-bc19313d736f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  1 10:19:34 np0005464214 nova_compute[260022]: 2025-10-01 14:19:34.631 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  1 10:19:34 np0005464214 nova_compute[260022]: 2025-10-01 14:19:34.632 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.126s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 10:19:35 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2311: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:19:37 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2312: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:19:38 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e197 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 10:19:39 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2313: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:19:40 np0005464214 nova_compute[260022]: 2025-10-01 14:19:40.627 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 10:19:40 np0005464214 nova_compute[260022]: 2025-10-01 14:19:40.628 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 10:19:40 np0005464214 nova_compute[260022]: 2025-10-01 14:19:40.628 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 10:19:40 np0005464214 nova_compute[260022]: 2025-10-01 14:19:40.628 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 10:19:40 np0005464214 nova_compute[260022]: 2025-10-01 14:19:40.628 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  1 10:19:41 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2314: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:19:43 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e197 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 10:19:43 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2315: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:19:44 np0005464214 nova_compute[260022]: 2025-10-01 14:19:44.346 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 10:19:44 np0005464214 nova_compute[260022]: 2025-10-01 14:19:44.346 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  1 10:19:44 np0005464214 nova_compute[260022]: 2025-10-01 14:19:44.346 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  1 10:19:44 np0005464214 nova_compute[260022]: 2025-10-01 14:19:44.363 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  1 10:19:45 np0005464214 nova_compute[260022]: 2025-10-01 14:19:45.345 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 10:19:45 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2316: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:19:47 np0005464214 nova_compute[260022]: 2025-10-01 14:19:47.345 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 10:19:47 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2317: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:19:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 10:19:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 10:19:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 10:19:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 10:19:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 10:19:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 10:19:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] Optimize plan auto_2025-10-01_14:19:47
Oct  1 10:19:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  1 10:19:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] do_upmap
Oct  1 10:19:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] pools ['images', 'default.rgw.meta', '.mgr', '.rgw.root', 'vms', 'cephfs.cephfs.data', 'default.rgw.log', 'volumes', 'default.rgw.control', 'cephfs.cephfs.meta', 'backups']
Oct  1 10:19:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] prepared 0/10 changes
Oct  1 10:19:48 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e197 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 10:19:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  1 10:19:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  1 10:19:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  1 10:19:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  1 10:19:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  1 10:19:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  1 10:19:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  1 10:19:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  1 10:19:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  1 10:19:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  1 10:19:49 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2318: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:19:51 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2319: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:19:53 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e197 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 10:19:53 np0005464214 podman[316310]: 2025-10-01 14:19:53.548635134 +0000 UTC m=+0.084732031 container health_status a1dc94033b3cdaf0c1939938a446bc6911aa0e2bf0fa0c22c3e618390e8d96e1 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true, config_id=multipathd, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20250923, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=multipathd, managed_by=edpm_ansible)
Oct  1 10:19:53 np0005464214 podman[316317]: 2025-10-01 14:19:53.560621925 +0000 UTC m=+0.087424298 container health_status dfa509645741d50db256c392f2c7e50ef8a1653f296eb853aefbe3d2ba4746c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20250923, managed_by=edpm_ansible)
Oct  1 10:19:53 np0005464214 podman[316311]: 2025-10-01 14:19:53.566516433 +0000 UTC m=+0.101653770 container health_status c13c85560319b246b9406aed1b6b3015b2c424d3ffff37d0301b6634f5976b0d (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250923, org.label-schema.schema-version=1.0, tcib_build_tag=36bccb96575468ec919301205d8daa2c, io.buildah.version=1.41.3, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct  1 10:19:53 np0005464214 podman[316309]: 2025-10-01 14:19:53.566687748 +0000 UTC m=+0.114609271 container health_status 583cde33fa111e3f81ff9a7adf51b7bee43cd123d54d60383b2d474b3b5066ad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20250923, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true, config_id=ovn_controller, org.label-schema.vendor=CentOS)
Oct  1 10:19:53 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2320: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:19:55 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  1 10:19:55 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3165582044' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  1 10:19:55 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  1 10:19:55 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3165582044' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  1 10:19:55 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2321: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:19:55 np0005464214 ceph-osd[88455]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  1 10:19:55 np0005464214 ceph-osd[88455]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 4200.0 total, 600.0 interval#012Cumulative writes: 8418 writes, 30K keys, 8418 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s#012Cumulative WAL: 8418 writes, 2178 syncs, 3.87 writes per sync, written: 0.02 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 440 writes, 1113 keys, 440 commit groups, 1.0 writes per commit group, ingest: 0.53 MB, 0.00 MB/s#012Interval WAL: 440 writes, 206 syncs, 2.14 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct  1 10:19:57 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2322: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:19:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] _maybe_adjust
Oct  1 10:19:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 10:19:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  1 10:19:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 10:19:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 10:19:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 10:19:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 10:19:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 10:19:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 10:19:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 10:19:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Oct  1 10:19:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 10:19:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct  1 10:19:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 10:19:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 10:19:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 10:19:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  1 10:19:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 10:19:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  1 10:19:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 10:19:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 10:19:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 10:19:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  1 10:19:58 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e197 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 10:19:59 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2323: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:20:00 np0005464214 ceph-osd[89484]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  1 10:20:00 np0005464214 ceph-osd[89484]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 4200.1 total, 600.0 interval#012Cumulative writes: 9928 writes, 35K keys, 9928 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s#012Cumulative WAL: 9928 writes, 2633 syncs, 3.77 writes per sync, written: 0.02 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 502 writes, 1216 keys, 502 commit groups, 1.0 writes per commit group, ingest: 0.57 MB, 0.00 MB/s#012Interval WAL: 502 writes, 222 syncs, 2.26 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct  1 10:20:01 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2324: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:20:03 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e197 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 10:20:03 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2325: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:20:05 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2326: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:20:05 np0005464214 ceph-osd[90500]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  1 10:20:05 np0005464214 ceph-osd[90500]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 4200.1 total, 600.0 interval#012Cumulative writes: 8971 writes, 31K keys, 8971 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s#012Cumulative WAL: 8971 writes, 2398 syncs, 3.74 writes per sync, written: 0.02 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 554 writes, 1220 keys, 554 commit groups, 1.0 writes per commit group, ingest: 0.58 MB, 0.00 MB/s#012Interval WAL: 554 writes, 253 syncs, 2.19 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct  1 10:20:07 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2327: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:20:07 np0005464214 ceph-mgr[75103]: [devicehealth INFO root] Check health
Oct  1 10:20:08 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e197 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 10:20:09 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2328: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:20:11 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2329: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 14:20:12.347 161890 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 10:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 14:20:12.347 161890 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 10:20:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 14:20:12.347 161890 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 10:20:13 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e197 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 10:20:13 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2330: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:20:15 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2331: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:20:17 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2332: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:20:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 10:20:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 10:20:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 10:20:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 10:20:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 10:20:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 10:20:18 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e197 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 10:20:19 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2333: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:20:21 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2334: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:20:23 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e197 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 10:20:23 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2335: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:20:24 np0005464214 podman[316391]: 2025-10-01 14:20:24.530043059 +0000 UTC m=+0.077663158 container health_status c13c85560319b246b9406aed1b6b3015b2c424d3ffff37d0301b6634f5976b0d (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_id=iscsid, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20250923, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Oct  1 10:20:24 np0005464214 podman[316392]: 2025-10-01 14:20:24.537500365 +0000 UTC m=+0.081259922 container health_status dfa509645741d50db256c392f2c7e50ef8a1653f296eb853aefbe3d2ba4746c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20250923, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct  1 10:20:24 np0005464214 podman[316389]: 2025-10-01 14:20:24.558597386 +0000 UTC m=+0.117951638 container health_status 583cde33fa111e3f81ff9a7adf51b7bee43cd123d54d60383b2d474b3b5066ad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.build-date=20250923, org.label-schema.schema-version=1.0, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Oct  1 10:20:24 np0005464214 podman[316390]: 2025-10-01 14:20:24.570865955 +0000 UTC m=+0.127459199 container health_status a1dc94033b3cdaf0c1939938a446bc6911aa0e2bf0fa0c22c3e618390e8d96e1 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=multipathd, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20250923, org.label-schema.license=GPLv2, tcib_managed=true)
Oct  1 10:20:25 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2336: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:20:27 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e197 do_prune osdmap full prune enabled
Oct  1 10:20:27 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e198 e198: 3 total, 3 up, 3 in
Oct  1 10:20:27 np0005464214 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e198: 3 total, 3 up, 3 in
Oct  1 10:20:27 np0005464214 nova_compute[260022]: 2025-10-01 14:20:27.345 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 10:20:27 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2338: 305 pgs: 1 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 302 active+clean; 21 MiB data, 257 MiB used, 60 GiB / 60 GiB avail; 6.6 KiB/s rd, 511 B/s wr, 9 op/s
Oct  1 10:20:27 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  1 10:20:27 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  1 10:20:27 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  1 10:20:27 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  1 10:20:27 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  1 10:20:27 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 10:20:27 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev 068ea4d9-7946-48c2-9899-c4136cbee931 does not exist
Oct  1 10:20:27 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev 65c59bf9-32ec-479a-8ad9-ebd40a906f8f does not exist
Oct  1 10:20:27 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev 2ec48a62-fd3a-4f1b-b410-722801d3ae01 does not exist
Oct  1 10:20:27 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  1 10:20:27 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  1 10:20:27 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  1 10:20:27 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  1 10:20:27 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  1 10:20:27 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  1 10:20:28 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e198 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 10:20:28 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  1 10:20:28 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 10:20:28 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  1 10:20:28 np0005464214 podman[316741]: 2025-10-01 14:20:28.650973772 +0000 UTC m=+0.045702113 container create ee82972bdae48b261b0511e4f9467dfc438b4597da262b82e0f19a141177aa56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_turing, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct  1 10:20:28 np0005464214 podman[316741]: 2025-10-01 14:20:28.627412693 +0000 UTC m=+0.022141034 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 10:20:28 np0005464214 systemd[1]: Started libpod-conmon-ee82972bdae48b261b0511e4f9467dfc438b4597da262b82e0f19a141177aa56.scope.
Oct  1 10:20:28 np0005464214 systemd[1]: Started libcrun container.
Oct  1 10:20:28 np0005464214 podman[316741]: 2025-10-01 14:20:28.814107313 +0000 UTC m=+0.208835714 container init ee82972bdae48b261b0511e4f9467dfc438b4597da262b82e0f19a141177aa56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_turing, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 10:20:28 np0005464214 podman[316741]: 2025-10-01 14:20:28.82345995 +0000 UTC m=+0.218188261 container start ee82972bdae48b261b0511e4f9467dfc438b4597da262b82e0f19a141177aa56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_turing, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct  1 10:20:28 np0005464214 mystifying_turing[316757]: 167 167
Oct  1 10:20:28 np0005464214 systemd[1]: libpod-ee82972bdae48b261b0511e4f9467dfc438b4597da262b82e0f19a141177aa56.scope: Deactivated successfully.
Oct  1 10:20:28 np0005464214 podman[316741]: 2025-10-01 14:20:28.847897727 +0000 UTC m=+0.242626038 container attach ee82972bdae48b261b0511e4f9467dfc438b4597da262b82e0f19a141177aa56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_turing, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 10:20:28 np0005464214 podman[316741]: 2025-10-01 14:20:28.848874767 +0000 UTC m=+0.243603098 container died ee82972bdae48b261b0511e4f9467dfc438b4597da262b82e0f19a141177aa56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_turing, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 10:20:29 np0005464214 systemd[1]: var-lib-containers-storage-overlay-001acc981824d0ebd8d6e6e9b046284350bb075501c28076a055f073c2e157ed-merged.mount: Deactivated successfully.
Oct  1 10:20:29 np0005464214 podman[316741]: 2025-10-01 14:20:29.350053685 +0000 UTC m=+0.744782036 container remove ee82972bdae48b261b0511e4f9467dfc438b4597da262b82e0f19a141177aa56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_turing, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 10:20:29 np0005464214 systemd[1]: libpod-conmon-ee82972bdae48b261b0511e4f9467dfc438b4597da262b82e0f19a141177aa56.scope: Deactivated successfully.
Oct  1 10:20:29 np0005464214 podman[316783]: 2025-10-01 14:20:29.636869325 +0000 UTC m=+0.102436365 container create 83bbaabe0f35a127fbe8bad058fad240560c9be5eede4e80b1f1066a9276f9e4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_black, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Oct  1 10:20:29 np0005464214 podman[316783]: 2025-10-01 14:20:29.578548643 +0000 UTC m=+0.044115673 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 10:20:29 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2339: 305 pgs: 1 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 302 active+clean; 21 MiB data, 257 MiB used, 60 GiB / 60 GiB avail; 6.6 KiB/s rd, 511 B/s wr, 9 op/s
Oct  1 10:20:29 np0005464214 systemd[1]: Started libpod-conmon-83bbaabe0f35a127fbe8bad058fad240560c9be5eede4e80b1f1066a9276f9e4.scope.
Oct  1 10:20:29 np0005464214 systemd[1]: Started libcrun container.
Oct  1 10:20:29 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4bb3710c7a89411b233bd880c50f1e669530f257627c9062ab57d2f9f720f1c4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 10:20:29 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4bb3710c7a89411b233bd880c50f1e669530f257627c9062ab57d2f9f720f1c4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 10:20:29 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4bb3710c7a89411b233bd880c50f1e669530f257627c9062ab57d2f9f720f1c4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 10:20:29 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4bb3710c7a89411b233bd880c50f1e669530f257627c9062ab57d2f9f720f1c4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 10:20:29 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4bb3710c7a89411b233bd880c50f1e669530f257627c9062ab57d2f9f720f1c4/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  1 10:20:29 np0005464214 podman[316783]: 2025-10-01 14:20:29.965914055 +0000 UTC m=+0.431481125 container init 83bbaabe0f35a127fbe8bad058fad240560c9be5eede4e80b1f1066a9276f9e4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_black, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct  1 10:20:29 np0005464214 podman[316783]: 2025-10-01 14:20:29.97739375 +0000 UTC m=+0.442960790 container start 83bbaabe0f35a127fbe8bad058fad240560c9be5eede4e80b1f1066a9276f9e4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_black, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 10:20:29 np0005464214 podman[316783]: 2025-10-01 14:20:29.990167586 +0000 UTC m=+0.455734636 container attach 83bbaabe0f35a127fbe8bad058fad240560c9be5eede4e80b1f1066a9276f9e4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_black, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 10:20:30 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e198 do_prune osdmap full prune enabled
Oct  1 10:20:30 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e199 e199: 3 total, 3 up, 3 in
Oct  1 10:20:30 np0005464214 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e199: 3 total, 3 up, 3 in
Oct  1 10:20:31 np0005464214 compassionate_black[316799]: --> passed data devices: 0 physical, 3 LVM
Oct  1 10:20:31 np0005464214 compassionate_black[316799]: --> relative data size: 1.0
Oct  1 10:20:31 np0005464214 compassionate_black[316799]: --> All data devices are unavailable
Oct  1 10:20:31 np0005464214 systemd[1]: libpod-83bbaabe0f35a127fbe8bad058fad240560c9be5eede4e80b1f1066a9276f9e4.scope: Deactivated successfully.
Oct  1 10:20:31 np0005464214 podman[316783]: 2025-10-01 14:20:31.238201364 +0000 UTC m=+1.703768414 container died 83bbaabe0f35a127fbe8bad058fad240560c9be5eede4e80b1f1066a9276f9e4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_black, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct  1 10:20:31 np0005464214 systemd[1]: libpod-83bbaabe0f35a127fbe8bad058fad240560c9be5eede4e80b1f1066a9276f9e4.scope: Consumed 1.004s CPU time.
Oct  1 10:20:31 np0005464214 nova_compute[260022]: 2025-10-01 14:20:31.345 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 10:20:31 np0005464214 nova_compute[260022]: 2025-10-01 14:20:31.379 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 10:20:31 np0005464214 nova_compute[260022]: 2025-10-01 14:20:31.379 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 10:20:31 np0005464214 nova_compute[260022]: 2025-10-01 14:20:31.380 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 10:20:31 np0005464214 nova_compute[260022]: 2025-10-01 14:20:31.380 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  1 10:20:31 np0005464214 nova_compute[260022]: 2025-10-01 14:20:31.381 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 10:20:31 np0005464214 systemd[1]: var-lib-containers-storage-overlay-4bb3710c7a89411b233bd880c50f1e669530f257627c9062ab57d2f9f720f1c4-merged.mount: Deactivated successfully.
Oct  1 10:20:31 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2341: 305 pgs: 1 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 302 active+clean; 21 MiB data, 257 MiB used, 60 GiB / 60 GiB avail; 8.2 KiB/s rd, 639 B/s wr, 11 op/s
Oct  1 10:20:31 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  1 10:20:31 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3250697024' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  1 10:20:31 np0005464214 podman[316783]: 2025-10-01 14:20:31.893566229 +0000 UTC m=+2.359133269 container remove 83bbaabe0f35a127fbe8bad058fad240560c9be5eede4e80b1f1066a9276f9e4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_black, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct  1 10:20:31 np0005464214 nova_compute[260022]: 2025-10-01 14:20:31.901 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.520s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 10:20:31 np0005464214 systemd[1]: libpod-conmon-83bbaabe0f35a127fbe8bad058fad240560c9be5eede4e80b1f1066a9276f9e4.scope: Deactivated successfully.
Oct  1 10:20:32 np0005464214 nova_compute[260022]: 2025-10-01 14:20:32.070 2 WARNING nova.virt.libvirt.driver [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  1 10:20:32 np0005464214 nova_compute[260022]: 2025-10-01 14:20:32.072 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5016MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  1 10:20:32 np0005464214 nova_compute[260022]: 2025-10-01 14:20:32.072 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 10:20:32 np0005464214 nova_compute[260022]: 2025-10-01 14:20:32.073 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 10:20:32 np0005464214 nova_compute[260022]: 2025-10-01 14:20:32.176 2 INFO nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Instance 3d91db2b-812b-47ea-a0f8-0384b4c68597 has allocations against this compute host but is not found in the database.#033[00m
Oct  1 10:20:32 np0005464214 nova_compute[260022]: 2025-10-01 14:20:32.190 2 INFO nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Instance 84424f30-80d3-425e-b60f-86809ad3c076 has allocations against this compute host but is not found in the database.#033[00m
Oct  1 10:20:32 np0005464214 nova_compute[260022]: 2025-10-01 14:20:32.190 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  1 10:20:32 np0005464214 nova_compute[260022]: 2025-10-01 14:20:32.191 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  1 10:20:32 np0005464214 nova_compute[260022]: 2025-10-01 14:20:32.300 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 10:20:32 np0005464214 podman[317024]: 2025-10-01 14:20:32.587645194 +0000 UTC m=+0.024500669 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 10:20:32 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  1 10:20:32 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/665305703' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  1 10:20:32 np0005464214 podman[317024]: 2025-10-01 14:20:32.797896241 +0000 UTC m=+0.234751626 container create c9aff7fadf38fe6b6035e07a9823cef34b29a4d1c59672d473a785394be044e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_goldberg, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Oct  1 10:20:32 np0005464214 nova_compute[260022]: 2025-10-01 14:20:32.820 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.521s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 10:20:32 np0005464214 nova_compute[260022]: 2025-10-01 14:20:32.826 2 DEBUG nova.compute.provider_tree [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Inventory has not changed in ProviderTree for provider: c1b9017d-7e6f-44ea-9ee2-bc19313d736f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  1 10:20:32 np0005464214 nova_compute[260022]: 2025-10-01 14:20:32.841 2 DEBUG nova.scheduler.client.report [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Inventory has not changed for provider c1b9017d-7e6f-44ea-9ee2-bc19313d736f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  1 10:20:32 np0005464214 nova_compute[260022]: 2025-10-01 14:20:32.843 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  1 10:20:32 np0005464214 nova_compute[260022]: 2025-10-01 14:20:32.844 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.771s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 10:20:33 np0005464214 systemd[1]: Started libpod-conmon-c9aff7fadf38fe6b6035e07a9823cef34b29a4d1c59672d473a785394be044e0.scope.
Oct  1 10:20:33 np0005464214 systemd[1]: Started libcrun container.
Oct  1 10:20:33 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e199 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 10:20:33 np0005464214 podman[317024]: 2025-10-01 14:20:33.158487604 +0000 UTC m=+0.595343089 container init c9aff7fadf38fe6b6035e07a9823cef34b29a4d1c59672d473a785394be044e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_goldberg, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3)
Oct  1 10:20:33 np0005464214 podman[317024]: 2025-10-01 14:20:33.170571528 +0000 UTC m=+0.607426953 container start c9aff7fadf38fe6b6035e07a9823cef34b29a4d1c59672d473a785394be044e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_goldberg, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Oct  1 10:20:33 np0005464214 modest_goldberg[317042]: 167 167
Oct  1 10:20:33 np0005464214 systemd[1]: libpod-c9aff7fadf38fe6b6035e07a9823cef34b29a4d1c59672d473a785394be044e0.scope: Deactivated successfully.
Oct  1 10:20:33 np0005464214 podman[317024]: 2025-10-01 14:20:33.190845911 +0000 UTC m=+0.627701306 container attach c9aff7fadf38fe6b6035e07a9823cef34b29a4d1c59672d473a785394be044e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_goldberg, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Oct  1 10:20:33 np0005464214 podman[317024]: 2025-10-01 14:20:33.192426332 +0000 UTC m=+0.629281747 container died c9aff7fadf38fe6b6035e07a9823cef34b29a4d1c59672d473a785394be044e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_goldberg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct  1 10:20:33 np0005464214 nova_compute[260022]: 2025-10-01 14:20:33.345 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 10:20:33 np0005464214 systemd[1]: var-lib-containers-storage-overlay-f2f7bb72f0f87f91d28bbe8972218c2f14e9f6ce908fdf62d005724a8aedbd74-merged.mount: Deactivated successfully.
Oct  1 10:20:33 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2342: 305 pgs: 305 active+clean; 21 MiB data, 237 MiB used, 60 GiB / 60 GiB avail; 39 KiB/s rd, 2.6 KiB/s wr, 53 op/s
Oct  1 10:20:33 np0005464214 podman[317024]: 2025-10-01 14:20:33.955045812 +0000 UTC m=+1.391901237 container remove c9aff7fadf38fe6b6035e07a9823cef34b29a4d1c59672d473a785394be044e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_goldberg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct  1 10:20:33 np0005464214 systemd[1]: libpod-conmon-c9aff7fadf38fe6b6035e07a9823cef34b29a4d1c59672d473a785394be044e0.scope: Deactivated successfully.
Oct  1 10:20:34 np0005464214 podman[317066]: 2025-10-01 14:20:34.166010043 +0000 UTC m=+0.029063574 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 10:20:34 np0005464214 podman[317066]: 2025-10-01 14:20:34.375053142 +0000 UTC m=+0.238106663 container create 29ed2cfa253de44b021458826684c6cdcbd704d5befec1bc4805d15d7c57a97a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_lehmann, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Oct  1 10:20:34 np0005464214 systemd[1]: Started libpod-conmon-29ed2cfa253de44b021458826684c6cdcbd704d5befec1bc4805d15d7c57a97a.scope.
Oct  1 10:20:34 np0005464214 systemd[1]: Started libcrun container.
Oct  1 10:20:34 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/65abd5cfff8eeba302cf7e7534908486adaacc2dda6dc65e98b6942f570275c5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 10:20:34 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/65abd5cfff8eeba302cf7e7534908486adaacc2dda6dc65e98b6942f570275c5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 10:20:34 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/65abd5cfff8eeba302cf7e7534908486adaacc2dda6dc65e98b6942f570275c5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 10:20:34 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/65abd5cfff8eeba302cf7e7534908486adaacc2dda6dc65e98b6942f570275c5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 10:20:34 np0005464214 podman[317066]: 2025-10-01 14:20:34.816935758 +0000 UTC m=+0.679989319 container init 29ed2cfa253de44b021458826684c6cdcbd704d5befec1bc4805d15d7c57a97a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_lehmann, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Oct  1 10:20:34 np0005464214 podman[317066]: 2025-10-01 14:20:34.825827519 +0000 UTC m=+0.688881040 container start 29ed2cfa253de44b021458826684c6cdcbd704d5befec1bc4805d15d7c57a97a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_lehmann, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 10:20:34 np0005464214 podman[317066]: 2025-10-01 14:20:34.927085255 +0000 UTC m=+0.790138826 container attach 29ed2cfa253de44b021458826684c6cdcbd704d5befec1bc4805d15d7c57a97a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_lehmann, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 10:20:35 np0005464214 stupefied_lehmann[317082]: {
Oct  1 10:20:35 np0005464214 stupefied_lehmann[317082]:    "0": [
Oct  1 10:20:35 np0005464214 stupefied_lehmann[317082]:        {
Oct  1 10:20:35 np0005464214 stupefied_lehmann[317082]:            "devices": [
Oct  1 10:20:35 np0005464214 stupefied_lehmann[317082]:                "/dev/loop3"
Oct  1 10:20:35 np0005464214 stupefied_lehmann[317082]:            ],
Oct  1 10:20:35 np0005464214 stupefied_lehmann[317082]:            "lv_name": "ceph_lv0",
Oct  1 10:20:35 np0005464214 stupefied_lehmann[317082]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  1 10:20:35 np0005464214 stupefied_lehmann[317082]:            "lv_size": "21470642176",
Oct  1 10:20:35 np0005464214 stupefied_lehmann[317082]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 10:20:35 np0005464214 stupefied_lehmann[317082]:            "lv_uuid": "wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS",
Oct  1 10:20:35 np0005464214 stupefied_lehmann[317082]:            "name": "ceph_lv0",
Oct  1 10:20:35 np0005464214 stupefied_lehmann[317082]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  1 10:20:35 np0005464214 stupefied_lehmann[317082]:            "tags": {
Oct  1 10:20:35 np0005464214 stupefied_lehmann[317082]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  1 10:20:35 np0005464214 stupefied_lehmann[317082]:                "ceph.block_uuid": "wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS",
Oct  1 10:20:35 np0005464214 stupefied_lehmann[317082]:                "ceph.cephx_lockbox_secret": "",
Oct  1 10:20:35 np0005464214 stupefied_lehmann[317082]:                "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 10:20:35 np0005464214 stupefied_lehmann[317082]:                "ceph.cluster_name": "ceph",
Oct  1 10:20:35 np0005464214 stupefied_lehmann[317082]:                "ceph.crush_device_class": "",
Oct  1 10:20:35 np0005464214 stupefied_lehmann[317082]:                "ceph.encrypted": "0",
Oct  1 10:20:35 np0005464214 stupefied_lehmann[317082]:                "ceph.osd_fsid": "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982",
Oct  1 10:20:35 np0005464214 stupefied_lehmann[317082]:                "ceph.osd_id": "0",
Oct  1 10:20:35 np0005464214 stupefied_lehmann[317082]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 10:20:35 np0005464214 stupefied_lehmann[317082]:                "ceph.type": "block",
Oct  1 10:20:35 np0005464214 stupefied_lehmann[317082]:                "ceph.vdo": "0"
Oct  1 10:20:35 np0005464214 stupefied_lehmann[317082]:            },
Oct  1 10:20:35 np0005464214 stupefied_lehmann[317082]:            "type": "block",
Oct  1 10:20:35 np0005464214 stupefied_lehmann[317082]:            "vg_name": "ceph_vg0"
Oct  1 10:20:35 np0005464214 stupefied_lehmann[317082]:        }
Oct  1 10:20:35 np0005464214 stupefied_lehmann[317082]:    ],
Oct  1 10:20:35 np0005464214 stupefied_lehmann[317082]:    "1": [
Oct  1 10:20:35 np0005464214 stupefied_lehmann[317082]:        {
Oct  1 10:20:35 np0005464214 stupefied_lehmann[317082]:            "devices": [
Oct  1 10:20:35 np0005464214 stupefied_lehmann[317082]:                "/dev/loop4"
Oct  1 10:20:35 np0005464214 stupefied_lehmann[317082]:            ],
Oct  1 10:20:35 np0005464214 stupefied_lehmann[317082]:            "lv_name": "ceph_lv1",
Oct  1 10:20:35 np0005464214 stupefied_lehmann[317082]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  1 10:20:35 np0005464214 stupefied_lehmann[317082]:            "lv_size": "21470642176",
Oct  1 10:20:35 np0005464214 stupefied_lehmann[317082]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f5852bc7-e830-489a-b8a9-42dfbbe71426,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 10:20:35 np0005464214 stupefied_lehmann[317082]:            "lv_uuid": "BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY",
Oct  1 10:20:35 np0005464214 stupefied_lehmann[317082]:            "name": "ceph_lv1",
Oct  1 10:20:35 np0005464214 stupefied_lehmann[317082]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  1 10:20:35 np0005464214 stupefied_lehmann[317082]:            "tags": {
Oct  1 10:20:35 np0005464214 stupefied_lehmann[317082]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  1 10:20:35 np0005464214 stupefied_lehmann[317082]:                "ceph.block_uuid": "BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY",
Oct  1 10:20:35 np0005464214 stupefied_lehmann[317082]:                "ceph.cephx_lockbox_secret": "",
Oct  1 10:20:35 np0005464214 stupefied_lehmann[317082]:                "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 10:20:35 np0005464214 stupefied_lehmann[317082]:                "ceph.cluster_name": "ceph",
Oct  1 10:20:35 np0005464214 stupefied_lehmann[317082]:                "ceph.crush_device_class": "",
Oct  1 10:20:35 np0005464214 stupefied_lehmann[317082]:                "ceph.encrypted": "0",
Oct  1 10:20:35 np0005464214 stupefied_lehmann[317082]:                "ceph.osd_fsid": "f5852bc7-e830-489a-b8a9-42dfbbe71426",
Oct  1 10:20:35 np0005464214 stupefied_lehmann[317082]:                "ceph.osd_id": "1",
Oct  1 10:20:35 np0005464214 stupefied_lehmann[317082]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 10:20:35 np0005464214 stupefied_lehmann[317082]:                "ceph.type": "block",
Oct  1 10:20:35 np0005464214 stupefied_lehmann[317082]:                "ceph.vdo": "0"
Oct  1 10:20:35 np0005464214 stupefied_lehmann[317082]:            },
Oct  1 10:20:35 np0005464214 stupefied_lehmann[317082]:            "type": "block",
Oct  1 10:20:35 np0005464214 stupefied_lehmann[317082]:            "vg_name": "ceph_vg1"
Oct  1 10:20:35 np0005464214 stupefied_lehmann[317082]:        }
Oct  1 10:20:35 np0005464214 stupefied_lehmann[317082]:    ],
Oct  1 10:20:35 np0005464214 stupefied_lehmann[317082]:    "2": [
Oct  1 10:20:35 np0005464214 stupefied_lehmann[317082]:        {
Oct  1 10:20:35 np0005464214 stupefied_lehmann[317082]:            "devices": [
Oct  1 10:20:35 np0005464214 stupefied_lehmann[317082]:                "/dev/loop5"
Oct  1 10:20:35 np0005464214 stupefied_lehmann[317082]:            ],
Oct  1 10:20:35 np0005464214 stupefied_lehmann[317082]:            "lv_name": "ceph_lv2",
Oct  1 10:20:35 np0005464214 stupefied_lehmann[317082]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  1 10:20:35 np0005464214 stupefied_lehmann[317082]:            "lv_size": "21470642176",
Oct  1 10:20:35 np0005464214 stupefied_lehmann[317082]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c4c937e2-a8a8-47c3-af37-fdedb6fff1f9,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 10:20:35 np0005464214 stupefied_lehmann[317082]:            "lv_uuid": "mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK",
Oct  1 10:20:35 np0005464214 stupefied_lehmann[317082]:            "name": "ceph_lv2",
Oct  1 10:20:35 np0005464214 stupefied_lehmann[317082]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  1 10:20:35 np0005464214 stupefied_lehmann[317082]:            "tags": {
Oct  1 10:20:35 np0005464214 stupefied_lehmann[317082]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  1 10:20:35 np0005464214 stupefied_lehmann[317082]:                "ceph.block_uuid": "mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK",
Oct  1 10:20:35 np0005464214 stupefied_lehmann[317082]:                "ceph.cephx_lockbox_secret": "",
Oct  1 10:20:35 np0005464214 stupefied_lehmann[317082]:                "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 10:20:35 np0005464214 stupefied_lehmann[317082]:                "ceph.cluster_name": "ceph",
Oct  1 10:20:35 np0005464214 stupefied_lehmann[317082]:                "ceph.crush_device_class": "",
Oct  1 10:20:35 np0005464214 stupefied_lehmann[317082]:                "ceph.encrypted": "0",
Oct  1 10:20:35 np0005464214 stupefied_lehmann[317082]:                "ceph.osd_fsid": "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9",
Oct  1 10:20:35 np0005464214 stupefied_lehmann[317082]:                "ceph.osd_id": "2",
Oct  1 10:20:35 np0005464214 stupefied_lehmann[317082]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 10:20:35 np0005464214 stupefied_lehmann[317082]:                "ceph.type": "block",
Oct  1 10:20:35 np0005464214 stupefied_lehmann[317082]:                "ceph.vdo": "0"
Oct  1 10:20:35 np0005464214 stupefied_lehmann[317082]:            },
Oct  1 10:20:35 np0005464214 stupefied_lehmann[317082]:            "type": "block",
Oct  1 10:20:35 np0005464214 stupefied_lehmann[317082]:            "vg_name": "ceph_vg2"
Oct  1 10:20:35 np0005464214 stupefied_lehmann[317082]:        }
Oct  1 10:20:35 np0005464214 stupefied_lehmann[317082]:    ]
Oct  1 10:20:35 np0005464214 stupefied_lehmann[317082]: }
Oct  1 10:20:35 np0005464214 systemd[1]: libpod-29ed2cfa253de44b021458826684c6cdcbd704d5befec1bc4805d15d7c57a97a.scope: Deactivated successfully.
Oct  1 10:20:35 np0005464214 podman[317066]: 2025-10-01 14:20:35.642537459 +0000 UTC m=+1.505590960 container died 29ed2cfa253de44b021458826684c6cdcbd704d5befec1bc4805d15d7c57a97a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_lehmann, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Oct  1 10:20:35 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2343: 305 pgs: 305 active+clean; 21 MiB data, 237 MiB used, 60 GiB / 60 GiB avail; 30 KiB/s rd, 2.2 KiB/s wr, 42 op/s
Oct  1 10:20:35 np0005464214 systemd[1]: var-lib-containers-storage-overlay-65abd5cfff8eeba302cf7e7534908486adaacc2dda6dc65e98b6942f570275c5-merged.mount: Deactivated successfully.
Oct  1 10:20:36 np0005464214 podman[317066]: 2025-10-01 14:20:36.141249708 +0000 UTC m=+2.004303189 container remove 29ed2cfa253de44b021458826684c6cdcbd704d5befec1bc4805d15d7c57a97a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_lehmann, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct  1 10:20:36 np0005464214 systemd[1]: libpod-conmon-29ed2cfa253de44b021458826684c6cdcbd704d5befec1bc4805d15d7c57a97a.scope: Deactivated successfully.
Oct  1 10:20:36 np0005464214 podman[317246]: 2025-10-01 14:20:36.777089732 +0000 UTC m=+0.044292908 container create 5cd08952925859226a36f031ccadccc64074fd2ca957ca4a20a8aba0f2116a43 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_ishizaka, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 10:20:36 np0005464214 systemd[1]: Started libpod-conmon-5cd08952925859226a36f031ccadccc64074fd2ca957ca4a20a8aba0f2116a43.scope.
Oct  1 10:20:36 np0005464214 systemd[1]: Started libcrun container.
Oct  1 10:20:36 np0005464214 podman[317246]: 2025-10-01 14:20:36.756281482 +0000 UTC m=+0.023484688 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 10:20:36 np0005464214 podman[317246]: 2025-10-01 14:20:36.919255798 +0000 UTC m=+0.186458994 container init 5cd08952925859226a36f031ccadccc64074fd2ca957ca4a20a8aba0f2116a43 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_ishizaka, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct  1 10:20:36 np0005464214 podman[317246]: 2025-10-01 14:20:36.925356511 +0000 UTC m=+0.192559717 container start 5cd08952925859226a36f031ccadccc64074fd2ca957ca4a20a8aba0f2116a43 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_ishizaka, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 10:20:36 np0005464214 sleepy_ishizaka[317262]: 167 167
Oct  1 10:20:36 np0005464214 systemd[1]: libpod-5cd08952925859226a36f031ccadccc64074fd2ca957ca4a20a8aba0f2116a43.scope: Deactivated successfully.
Oct  1 10:20:36 np0005464214 podman[317246]: 2025-10-01 14:20:36.960163047 +0000 UTC m=+0.227366293 container attach 5cd08952925859226a36f031ccadccc64074fd2ca957ca4a20a8aba0f2116a43 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_ishizaka, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 10:20:36 np0005464214 podman[317246]: 2025-10-01 14:20:36.961139558 +0000 UTC m=+0.228342774 container died 5cd08952925859226a36f031ccadccc64074fd2ca957ca4a20a8aba0f2116a43 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_ishizaka, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 10:20:37 np0005464214 systemd[1]: var-lib-containers-storage-overlay-b16f77468f2f778b429aa8fbc18849475fcd6b45cd737a0fb7ad21f92aa088f7-merged.mount: Deactivated successfully.
Oct  1 10:20:37 np0005464214 podman[317246]: 2025-10-01 14:20:37.389446692 +0000 UTC m=+0.656649918 container remove 5cd08952925859226a36f031ccadccc64074fd2ca957ca4a20a8aba0f2116a43 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_ishizaka, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct  1 10:20:37 np0005464214 systemd[1]: libpod-conmon-5cd08952925859226a36f031ccadccc64074fd2ca957ca4a20a8aba0f2116a43.scope: Deactivated successfully.
Oct  1 10:20:37 np0005464214 podman[317286]: 2025-10-01 14:20:37.660936765 +0000 UTC m=+0.067498305 container create 818538c529d7add432ce676953c6226e3e8e67b7ed0d258d73a43bbd615a4f54 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_grothendieck, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 10:20:37 np0005464214 systemd[1]: Started libpod-conmon-818538c529d7add432ce676953c6226e3e8e67b7ed0d258d73a43bbd615a4f54.scope.
Oct  1 10:20:37 np0005464214 podman[317286]: 2025-10-01 14:20:37.634227056 +0000 UTC m=+0.040788646 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 10:20:37 np0005464214 systemd[1]: Started libcrun container.
Oct  1 10:20:37 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c0ea3a81922f02c4151c8bc4e4d92f2ce21a42654a2aadbda73e5ddc4b61a68e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 10:20:37 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c0ea3a81922f02c4151c8bc4e4d92f2ce21a42654a2aadbda73e5ddc4b61a68e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 10:20:37 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c0ea3a81922f02c4151c8bc4e4d92f2ce21a42654a2aadbda73e5ddc4b61a68e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 10:20:37 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c0ea3a81922f02c4151c8bc4e4d92f2ce21a42654a2aadbda73e5ddc4b61a68e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 10:20:37 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2344: 305 pgs: 305 active+clean; 456 KiB data, 216 MiB used, 60 GiB / 60 GiB avail; 30 KiB/s rd, 2.3 KiB/s wr, 41 op/s
Oct  1 10:20:37 np0005464214 podman[317286]: 2025-10-01 14:20:37.787176444 +0000 UTC m=+0.193738094 container init 818538c529d7add432ce676953c6226e3e8e67b7ed0d258d73a43bbd615a4f54 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_grothendieck, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 10:20:37 np0005464214 podman[317286]: 2025-10-01 14:20:37.802632895 +0000 UTC m=+0.209194475 container start 818538c529d7add432ce676953c6226e3e8e67b7ed0d258d73a43bbd615a4f54 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_grothendieck, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct  1 10:20:37 np0005464214 podman[317286]: 2025-10-01 14:20:37.807561451 +0000 UTC m=+0.214123001 container attach 818538c529d7add432ce676953c6226e3e8e67b7ed0d258d73a43bbd615a4f54 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_grothendieck, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 10:20:38 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e199 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 10:20:38 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e199 do_prune osdmap full prune enabled
Oct  1 10:20:38 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e200 e200: 3 total, 3 up, 3 in
Oct  1 10:20:38 np0005464214 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e200: 3 total, 3 up, 3 in
Oct  1 10:20:38 np0005464214 nova_compute[260022]: 2025-10-01 14:20:38.359 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 10:20:38 np0005464214 nova_compute[260022]: 2025-10-01 14:20:38.361 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 10:20:38 np0005464214 musing_grothendieck[317302]: {
Oct  1 10:20:38 np0005464214 musing_grothendieck[317302]:    "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982": {
Oct  1 10:20:38 np0005464214 musing_grothendieck[317302]:        "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 10:20:38 np0005464214 musing_grothendieck[317302]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  1 10:20:38 np0005464214 musing_grothendieck[317302]:        "osd_id": 0,
Oct  1 10:20:38 np0005464214 musing_grothendieck[317302]:        "osd_uuid": "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982",
Oct  1 10:20:38 np0005464214 musing_grothendieck[317302]:        "type": "bluestore"
Oct  1 10:20:38 np0005464214 musing_grothendieck[317302]:    },
Oct  1 10:20:38 np0005464214 musing_grothendieck[317302]:    "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9": {
Oct  1 10:20:38 np0005464214 musing_grothendieck[317302]:        "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 10:20:38 np0005464214 musing_grothendieck[317302]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  1 10:20:38 np0005464214 musing_grothendieck[317302]:        "osd_id": 2,
Oct  1 10:20:38 np0005464214 musing_grothendieck[317302]:        "osd_uuid": "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9",
Oct  1 10:20:38 np0005464214 musing_grothendieck[317302]:        "type": "bluestore"
Oct  1 10:20:38 np0005464214 musing_grothendieck[317302]:    },
Oct  1 10:20:38 np0005464214 musing_grothendieck[317302]:    "f5852bc7-e830-489a-b8a9-42dfbbe71426": {
Oct  1 10:20:38 np0005464214 musing_grothendieck[317302]:        "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 10:20:38 np0005464214 musing_grothendieck[317302]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  1 10:20:38 np0005464214 musing_grothendieck[317302]:        "osd_id": 1,
Oct  1 10:20:38 np0005464214 musing_grothendieck[317302]:        "osd_uuid": "f5852bc7-e830-489a-b8a9-42dfbbe71426",
Oct  1 10:20:38 np0005464214 musing_grothendieck[317302]:        "type": "bluestore"
Oct  1 10:20:38 np0005464214 musing_grothendieck[317302]:    }
Oct  1 10:20:38 np0005464214 musing_grothendieck[317302]: }
Oct  1 10:20:38 np0005464214 systemd[1]: libpod-818538c529d7add432ce676953c6226e3e8e67b7ed0d258d73a43bbd615a4f54.scope: Deactivated successfully.
Oct  1 10:20:38 np0005464214 podman[317286]: 2025-10-01 14:20:38.896573919 +0000 UTC m=+1.303135449 container died 818538c529d7add432ce676953c6226e3e8e67b7ed0d258d73a43bbd615a4f54 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_grothendieck, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct  1 10:20:38 np0005464214 systemd[1]: libpod-818538c529d7add432ce676953c6226e3e8e67b7ed0d258d73a43bbd615a4f54.scope: Consumed 1.098s CPU time.
Oct  1 10:20:38 np0005464214 systemd[1]: var-lib-containers-storage-overlay-c0ea3a81922f02c4151c8bc4e4d92f2ce21a42654a2aadbda73e5ddc4b61a68e-merged.mount: Deactivated successfully.
Oct  1 10:20:38 np0005464214 podman[317286]: 2025-10-01 14:20:38.967951296 +0000 UTC m=+1.374512876 container remove 818538c529d7add432ce676953c6226e3e8e67b7ed0d258d73a43bbd615a4f54 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_grothendieck, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 10:20:38 np0005464214 systemd[1]: libpod-conmon-818538c529d7add432ce676953c6226e3e8e67b7ed0d258d73a43bbd615a4f54.scope: Deactivated successfully.
Oct  1 10:20:39 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  1 10:20:39 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 10:20:39 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  1 10:20:39 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 10:20:39 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev 24b49fd7-4591-40bd-a9b7-75ed0ad77501 does not exist
Oct  1 10:20:39 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev bfb405d2-899e-4cd8-b290-737ea42c04ac does not exist
Oct  1 10:20:39 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 10:20:39 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 10:20:39 np0005464214 nova_compute[260022]: 2025-10-01 14:20:39.344 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 10:20:39 np0005464214 nova_compute[260022]: 2025-10-01 14:20:39.345 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  1 10:20:39 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2346: 305 pgs: 305 active+clean; 456 KiB data, 216 MiB used, 60 GiB / 60 GiB avail; 33 KiB/s rd, 2.5 KiB/s wr, 45 op/s
Oct  1 10:20:40 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e200 do_prune osdmap full prune enabled
Oct  1 10:20:40 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e201 e201: 3 total, 3 up, 3 in
Oct  1 10:20:40 np0005464214 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e201: 3 total, 3 up, 3 in
Oct  1 10:20:40 np0005464214 nova_compute[260022]: 2025-10-01 14:20:40.346 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 10:20:41 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2348: 305 pgs: 305 active+clean; 456 KiB data, 216 MiB used, 60 GiB / 60 GiB avail; 6.2 KiB/s rd, 895 B/s wr, 9 op/s
Oct  1 10:20:43 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e201 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 10:20:43 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2349: 305 pgs: 305 active+clean; 21 MiB data, 237 MiB used, 60 GiB / 60 GiB avail; 16 KiB/s rd, 2.6 MiB/s wr, 24 op/s
Oct  1 10:20:45 np0005464214 nova_compute[260022]: 2025-10-01 14:20:45.345 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 10:20:45 np0005464214 nova_compute[260022]: 2025-10-01 14:20:45.346 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  1 10:20:45 np0005464214 nova_compute[260022]: 2025-10-01 14:20:45.346 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  1 10:20:45 np0005464214 nova_compute[260022]: 2025-10-01 14:20:45.364 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  1 10:20:45 np0005464214 nova_compute[260022]: 2025-10-01 14:20:45.365 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 10:20:45 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2350: 305 pgs: 305 active+clean; 21 MiB data, 237 MiB used, 60 GiB / 60 GiB avail; 9.6 KiB/s rd, 2.6 MiB/s wr, 14 op/s
Oct  1 10:20:47 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2351: 305 pgs: 305 active+clean; 21 MiB data, 237 MiB used, 60 GiB / 60 GiB avail; 8.0 KiB/s rd, 2.1 MiB/s wr, 12 op/s
Oct  1 10:20:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 10:20:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 10:20:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 10:20:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 10:20:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 10:20:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 10:20:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] Optimize plan auto_2025-10-01_14:20:47
Oct  1 10:20:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  1 10:20:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] do_upmap
Oct  1 10:20:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] pools ['backups', 'cephfs.cephfs.meta', 'volumes', '.rgw.root', 'images', 'cephfs.cephfs.data', 'default.rgw.meta', '.mgr', 'default.rgw.log', 'default.rgw.control', 'vms']
Oct  1 10:20:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] prepared 0/10 changes
Oct  1 10:20:48 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e201 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 10:20:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  1 10:20:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  1 10:20:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  1 10:20:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  1 10:20:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  1 10:20:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  1 10:20:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  1 10:20:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  1 10:20:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  1 10:20:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  1 10:20:49 np0005464214 nova_compute[260022]: 2025-10-01 14:20:49.345 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 10:20:49 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2352: 305 pgs: 305 active+clean; 21 MiB data, 237 MiB used, 60 GiB / 60 GiB avail; 7.7 KiB/s rd, 2.0 MiB/s wr, 11 op/s
Oct  1 10:20:50 np0005464214 nova_compute[260022]: 2025-10-01 14:20:50.345 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 10:20:50 np0005464214 nova_compute[260022]: 2025-10-01 14:20:50.345 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Oct  1 10:20:50 np0005464214 nova_compute[260022]: 2025-10-01 14:20:50.385 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Oct  1 10:20:51 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2353: 305 pgs: 305 active+clean; 21 MiB data, 237 MiB used, 60 GiB / 60 GiB avail; 6.6 KiB/s rd, 1.8 MiB/s wr, 10 op/s
Oct  1 10:20:53 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e201 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 10:20:53 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2354: 305 pgs: 305 active+clean; 21 MiB data, 237 MiB used, 60 GiB / 60 GiB avail; 6.4 KiB/s rd, 1.7 MiB/s wr, 9 op/s
Oct  1 10:20:55 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  1 10:20:55 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1674213025' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  1 10:20:55 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  1 10:20:55 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1674213025' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  1 10:20:55 np0005464214 podman[317402]: 2025-10-01 14:20:55.508527108 +0000 UTC m=+0.060151152 container health_status dfa509645741d50db256c392f2c7e50ef8a1653f296eb853aefbe3d2ba4746c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20250923, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct  1 10:20:55 np0005464214 podman[317401]: 2025-10-01 14:20:55.510341725 +0000 UTC m=+0.065223053 container health_status c13c85560319b246b9406aed1b6b3015b2c424d3ffff37d0301b6634f5976b0d (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=36bccb96575468ec919301205d8daa2c, container_name=iscsid, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.build-date=20250923, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct  1 10:20:55 np0005464214 podman[317400]: 2025-10-01 14:20:55.511693157 +0000 UTC m=+0.067157884 container health_status a1dc94033b3cdaf0c1939938a446bc6911aa0e2bf0fa0c22c3e618390e8d96e1 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20250923, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible)
Oct  1 10:20:55 np0005464214 podman[317399]: 2025-10-01 14:20:55.54295041 +0000 UTC m=+0.096365651 container health_status 583cde33fa111e3f81ff9a7adf51b7bee43cd123d54d60383b2d474b3b5066ad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250923, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  1 10:20:55 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2355: 305 pgs: 305 active+clean; 21 MiB data, 237 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:20:57 np0005464214 nova_compute[260022]: 2025-10-01 14:20:57.344 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 10:20:57 np0005464214 nova_compute[260022]: 2025-10-01 14:20:57.345 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Oct  1 10:20:57 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2356: 305 pgs: 305 active+clean; 21 MiB data, 237 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:20:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] _maybe_adjust
Oct  1 10:20:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 10:20:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  1 10:20:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 10:20:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 10:20:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 10:20:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 10:20:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 10:20:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 10:20:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 10:20:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00033296094614833626 of space, bias 1.0, pg target 0.09988828384450088 quantized to 32 (current 32)
Oct  1 10:20:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 10:20:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct  1 10:20:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 10:20:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 10:20:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 10:20:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  1 10:20:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 10:20:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  1 10:20:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 10:20:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 10:20:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 10:20:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  1 10:20:58 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e201 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 10:20:59 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2357: 305 pgs: 305 active+clean; 21 MiB data, 237 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:21:01 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2358: 305 pgs: 305 active+clean; 21 MiB data, 237 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:21:03 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e201 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 10:21:03 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2359: 305 pgs: 305 active+clean; 21 MiB data, 237 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:21:05 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2360: 305 pgs: 305 active+clean; 21 MiB data, 237 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:21:07 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2361: 305 pgs: 305 active+clean; 21 MiB data, 237 MiB used, 60 GiB / 60 GiB avail; 14 KiB/s rd, 0 B/s wr, 23 op/s
Oct  1 10:21:08 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e201 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 10:21:09 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2362: 305 pgs: 305 active+clean; 21 MiB data, 237 MiB used, 60 GiB / 60 GiB avail; 14 KiB/s rd, 0 B/s wr, 23 op/s
Oct  1 10:21:11 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2363: 305 pgs: 305 active+clean; 21 MiB data, 237 MiB used, 60 GiB / 60 GiB avail; 14 KiB/s rd, 0 B/s wr, 23 op/s
Oct  1 10:21:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 14:21:12.348 161890 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 10:21:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 14:21:12.348 161890 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 10:21:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 14:21:12.348 161890 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 10:21:12 np0005464214 nova_compute[260022]: 2025-10-01 14:21:12.370 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 10:21:13 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e201 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 10:21:13 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2364: 305 pgs: 305 active+clean; 21 MiB data, 241 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Oct  1 10:21:15 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2365: 305 pgs: 305 active+clean; 21 MiB data, 241 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Oct  1 10:21:17 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2366: 305 pgs: 305 active+clean; 21 MiB data, 241 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Oct  1 10:21:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 10:21:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 10:21:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 10:21:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 10:21:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 10:21:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 10:21:18 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e201 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 10:21:19 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2367: 305 pgs: 305 active+clean; 21 MiB data, 241 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 0 B/s wr, 35 op/s
Oct  1 10:21:21 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2368: 305 pgs: 305 active+clean; 21 MiB data, 241 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 0 B/s wr, 35 op/s
Oct  1 10:21:23 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e201 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 10:21:23 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2369: 305 pgs: 305 active+clean; 21 MiB data, 241 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 0 B/s wr, 35 op/s
Oct  1 10:21:25 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2370: 305 pgs: 305 active+clean; 21 MiB data, 241 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:21:26 np0005464214 podman[317486]: 2025-10-01 14:21:26.544029013 +0000 UTC m=+0.078132042 container health_status dfa509645741d50db256c392f2c7e50ef8a1653f296eb853aefbe3d2ba4746c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20250923, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct  1 10:21:26 np0005464214 podman[317484]: 2025-10-01 14:21:26.546723409 +0000 UTC m=+0.092161648 container health_status a1dc94033b3cdaf0c1939938a446bc6911aa0e2bf0fa0c22c3e618390e8d96e1 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250923, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0)
Oct  1 10:21:26 np0005464214 podman[317485]: 2025-10-01 14:21:26.554524796 +0000 UTC m=+0.091994592 container health_status c13c85560319b246b9406aed1b6b3015b2c424d3ffff37d0301b6634f5976b0d (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20250923, config_id=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, container_name=iscsid)
Oct  1 10:21:26 np0005464214 podman[317483]: 2025-10-01 14:21:26.613627303 +0000 UTC m=+0.165724604 container health_status 583cde33fa111e3f81ff9a7adf51b7bee43cd123d54d60383b2d474b3b5066ad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250923, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct  1 10:21:27 np0005464214 nova_compute[260022]: 2025-10-01 14:21:27.344 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 10:21:27 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2371: 305 pgs: 305 active+clean; 21 MiB data, 241 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:21:28 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e201 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 10:21:29 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2372: 305 pgs: 305 active+clean; 21 MiB data, 241 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:21:30 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e201 do_prune osdmap full prune enabled
Oct  1 10:21:30 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e202 e202: 3 total, 3 up, 3 in
Oct  1 10:21:30 np0005464214 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e202: 3 total, 3 up, 3 in
Oct  1 10:21:31 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2374: 305 pgs: 305 active+clean; 21 MiB data, 241 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:21:33 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e202 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 10:21:33 np0005464214 nova_compute[260022]: 2025-10-01 14:21:33.346 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 10:21:33 np0005464214 nova_compute[260022]: 2025-10-01 14:21:33.451 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 10:21:33 np0005464214 nova_compute[260022]: 2025-10-01 14:21:33.451 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 10:21:33 np0005464214 nova_compute[260022]: 2025-10-01 14:21:33.452 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 10:21:33 np0005464214 nova_compute[260022]: 2025-10-01 14:21:33.452 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  1 10:21:33 np0005464214 nova_compute[260022]: 2025-10-01 14:21:33.452 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 10:21:33 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2375: 305 pgs: 305 active+clean; 456 KiB data, 220 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 25 op/s
Oct  1 10:21:33 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  1 10:21:33 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3858678211' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  1 10:21:33 np0005464214 nova_compute[260022]: 2025-10-01 14:21:33.923 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.471s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 10:21:34 np0005464214 nova_compute[260022]: 2025-10-01 14:21:34.084 2 WARNING nova.virt.libvirt.driver [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  1 10:21:34 np0005464214 nova_compute[260022]: 2025-10-01 14:21:34.085 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5037MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  1 10:21:34 np0005464214 nova_compute[260022]: 2025-10-01 14:21:34.086 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 10:21:34 np0005464214 nova_compute[260022]: 2025-10-01 14:21:34.086 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 10:21:34 np0005464214 nova_compute[260022]: 2025-10-01 14:21:34.419 2 INFO nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Instance 3d91db2b-812b-47ea-a0f8-0384b4c68597 has allocations against this compute host but is not found in the database.#033[00m
Oct  1 10:21:34 np0005464214 nova_compute[260022]: 2025-10-01 14:21:34.432 2 INFO nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Instance 84424f30-80d3-425e-b60f-86809ad3c076 has allocations against this compute host but is not found in the database.#033[00m
Oct  1 10:21:34 np0005464214 nova_compute[260022]: 2025-10-01 14:21:34.433 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  1 10:21:34 np0005464214 nova_compute[260022]: 2025-10-01 14:21:34.433 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  1 10:21:34 np0005464214 nova_compute[260022]: 2025-10-01 14:21:34.483 2 DEBUG nova.scheduler.client.report [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Refreshing inventories for resource provider c1b9017d-7e6f-44ea-9ee2-bc19313d736f _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Oct  1 10:21:34 np0005464214 nova_compute[260022]: 2025-10-01 14:21:34.499 2 DEBUG nova.scheduler.client.report [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Updating ProviderTree inventory for provider c1b9017d-7e6f-44ea-9ee2-bc19313d736f from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Oct  1 10:21:34 np0005464214 nova_compute[260022]: 2025-10-01 14:21:34.499 2 DEBUG nova.compute.provider_tree [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Updating inventory in ProviderTree for provider c1b9017d-7e6f-44ea-9ee2-bc19313d736f with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Oct  1 10:21:34 np0005464214 nova_compute[260022]: 2025-10-01 14:21:34.513 2 DEBUG nova.scheduler.client.report [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Refreshing aggregate associations for resource provider c1b9017d-7e6f-44ea-9ee2-bc19313d736f, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Oct  1 10:21:34 np0005464214 nova_compute[260022]: 2025-10-01 14:21:34.533 2 DEBUG nova.scheduler.client.report [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Refreshing trait associations for resource provider c1b9017d-7e6f-44ea-9ee2-bc19313d736f, traits: HW_CPU_X86_SSE42,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_BMI,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_SSE2,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_BMI2,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_RESCUE_BFV,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NODE,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_F16C,HW_CPU_X86_MMX,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_SSE41,COMPUTE_DEVICE_TAGGING,COMPUTE_VOLUME_EXTEND,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_AVX,HW_CPU_X86_ABM,HW_CPU_X86_CLMUL,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_SVM,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_SECURITY_TPM_2_0,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_STORAGE_BUS_FDC,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_AESNI,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_ACCELERATORS,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_AMD_SVM,HW_CPU_X86_AVX2,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_FMA3,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_SSE,HW_CPU_X86_SHA,HW_CPU_X86_SSSE3,HW_CPU_X86_SSE4A _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Oct  1 10:21:34 np0005464214 nova_compute[260022]: 2025-10-01 14:21:34.580 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 10:21:34 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  1 10:21:34 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1429763371' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  1 10:21:35 np0005464214 nova_compute[260022]: 2025-10-01 14:21:35.010 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.430s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 10:21:35 np0005464214 nova_compute[260022]: 2025-10-01 14:21:35.017 2 DEBUG nova.compute.provider_tree [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Inventory has not changed in ProviderTree for provider: c1b9017d-7e6f-44ea-9ee2-bc19313d736f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  1 10:21:35 np0005464214 nova_compute[260022]: 2025-10-01 14:21:35.035 2 DEBUG nova.scheduler.client.report [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Inventory has not changed for provider c1b9017d-7e6f-44ea-9ee2-bc19313d736f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  1 10:21:35 np0005464214 nova_compute[260022]: 2025-10-01 14:21:35.038 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  1 10:21:35 np0005464214 nova_compute[260022]: 2025-10-01 14:21:35.038 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.953s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 10:21:35 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2376: 305 pgs: 305 active+clean; 456 KiB data, 220 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 25 op/s
Oct  1 10:21:37 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2377: 305 pgs: 305 active+clean; 456 KiB data, 220 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 25 op/s
Oct  1 10:21:38 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e202 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 10:21:38 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e202 do_prune osdmap full prune enabled
Oct  1 10:21:38 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e203 e203: 3 total, 3 up, 3 in
Oct  1 10:21:38 np0005464214 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e203: 3 total, 3 up, 3 in
Oct  1 10:21:39 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2379: 305 pgs: 305 active+clean; 456 KiB data, 220 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.5 KiB/s wr, 26 op/s
Oct  1 10:21:39 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  1 10:21:39 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  1 10:21:39 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  1 10:21:39 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  1 10:21:39 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  1 10:21:40 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 10:21:40 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev 9029208a-dee8-4081-99b1-21351f93bea9 does not exist
Oct  1 10:21:40 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev 41b36492-6732-4eb0-adbf-9a19b9e81f27 does not exist
Oct  1 10:21:40 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev 43047987-5d22-4629-973f-a2b492c745f2 does not exist
Oct  1 10:21:40 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  1 10:21:40 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  1 10:21:40 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  1 10:21:40 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  1 10:21:40 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  1 10:21:40 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  1 10:21:40 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  1 10:21:40 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 10:21:40 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  1 10:21:40 np0005464214 podman[317885]: 2025-10-01 14:21:40.692672991 +0000 UTC m=+0.060521562 container create 50281c53abcd58ba10877ff21f1888cdd3df004be7ef6114c0103d82baa88738 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_lederberg, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0)
Oct  1 10:21:40 np0005464214 podman[317885]: 2025-10-01 14:21:40.651882666 +0000 UTC m=+0.019731247 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 10:21:40 np0005464214 systemd[1]: Started libpod-conmon-50281c53abcd58ba10877ff21f1888cdd3df004be7ef6114c0103d82baa88738.scope.
Oct  1 10:21:40 np0005464214 systemd[1]: Started libcrun container.
Oct  1 10:21:40 np0005464214 podman[317885]: 2025-10-01 14:21:40.810766152 +0000 UTC m=+0.178614743 container init 50281c53abcd58ba10877ff21f1888cdd3df004be7ef6114c0103d82baa88738 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_lederberg, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct  1 10:21:40 np0005464214 podman[317885]: 2025-10-01 14:21:40.819797589 +0000 UTC m=+0.187646200 container start 50281c53abcd58ba10877ff21f1888cdd3df004be7ef6114c0103d82baa88738 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_lederberg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  1 10:21:40 np0005464214 pedantic_lederberg[317901]: 167 167
Oct  1 10:21:40 np0005464214 systemd[1]: libpod-50281c53abcd58ba10877ff21f1888cdd3df004be7ef6114c0103d82baa88738.scope: Deactivated successfully.
Oct  1 10:21:40 np0005464214 conmon[317901]: conmon 50281c53abcd58ba1087 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-50281c53abcd58ba10877ff21f1888cdd3df004be7ef6114c0103d82baa88738.scope/container/memory.events
Oct  1 10:21:40 np0005464214 podman[317885]: 2025-10-01 14:21:40.885533687 +0000 UTC m=+0.253382298 container attach 50281c53abcd58ba10877ff21f1888cdd3df004be7ef6114c0103d82baa88738 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_lederberg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct  1 10:21:40 np0005464214 podman[317885]: 2025-10-01 14:21:40.886303702 +0000 UTC m=+0.254152343 container died 50281c53abcd58ba10877ff21f1888cdd3df004be7ef6114c0103d82baa88738 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_lederberg, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct  1 10:21:40 np0005464214 systemd[1]: var-lib-containers-storage-overlay-8921d78b4a09f3b523ff22f2ec2b42046f150808713db59e908070c2fe207687-merged.mount: Deactivated successfully.
Oct  1 10:21:41 np0005464214 podman[317885]: 2025-10-01 14:21:41.016367622 +0000 UTC m=+0.384216193 container remove 50281c53abcd58ba10877ff21f1888cdd3df004be7ef6114c0103d82baa88738 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_lederberg, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 10:21:41 np0005464214 systemd[1]: libpod-conmon-50281c53abcd58ba10877ff21f1888cdd3df004be7ef6114c0103d82baa88738.scope: Deactivated successfully.
Oct  1 10:21:41 np0005464214 nova_compute[260022]: 2025-10-01 14:21:41.034 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 10:21:41 np0005464214 nova_compute[260022]: 2025-10-01 14:21:41.036 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 10:21:41 np0005464214 nova_compute[260022]: 2025-10-01 14:21:41.037 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 10:21:41 np0005464214 nova_compute[260022]: 2025-10-01 14:21:41.037 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  1 10:21:41 np0005464214 podman[317927]: 2025-10-01 14:21:41.20648785 +0000 UTC m=+0.052647633 container create 8a47ebfb2010f7db830dacf36b33c0f8b9b6a7dd9ee7d8eb0f2902b0d957a95e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_cartwright, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True)
Oct  1 10:21:41 np0005464214 systemd[1]: Started libpod-conmon-8a47ebfb2010f7db830dacf36b33c0f8b9b6a7dd9ee7d8eb0f2902b0d957a95e.scope.
Oct  1 10:21:41 np0005464214 podman[317927]: 2025-10-01 14:21:41.183603504 +0000 UTC m=+0.029763337 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 10:21:41 np0005464214 systemd[1]: Started libcrun container.
Oct  1 10:21:41 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0520122b087dc0fb8789d2701505a06f38cf2939a13bea03824f5c183166441e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 10:21:41 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0520122b087dc0fb8789d2701505a06f38cf2939a13bea03824f5c183166441e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 10:21:41 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0520122b087dc0fb8789d2701505a06f38cf2939a13bea03824f5c183166441e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 10:21:41 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0520122b087dc0fb8789d2701505a06f38cf2939a13bea03824f5c183166441e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 10:21:41 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0520122b087dc0fb8789d2701505a06f38cf2939a13bea03824f5c183166441e/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  1 10:21:41 np0005464214 podman[317927]: 2025-10-01 14:21:41.317140105 +0000 UTC m=+0.163299958 container init 8a47ebfb2010f7db830dacf36b33c0f8b9b6a7dd9ee7d8eb0f2902b0d957a95e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_cartwright, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True)
Oct  1 10:21:41 np0005464214 podman[317927]: 2025-10-01 14:21:41.330274492 +0000 UTC m=+0.176434295 container start 8a47ebfb2010f7db830dacf36b33c0f8b9b6a7dd9ee7d8eb0f2902b0d957a95e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_cartwright, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True)
Oct  1 10:21:41 np0005464214 podman[317927]: 2025-10-01 14:21:41.336764748 +0000 UTC m=+0.182924581 container attach 8a47ebfb2010f7db830dacf36b33c0f8b9b6a7dd9ee7d8eb0f2902b0d957a95e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_cartwright, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Oct  1 10:21:41 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2380: 305 pgs: 305 active+clean; 456 KiB data, 220 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 25 op/s
Oct  1 10:21:42 np0005464214 nova_compute[260022]: 2025-10-01 14:21:42.345 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 10:21:42 np0005464214 admiring_cartwright[317943]: --> passed data devices: 0 physical, 3 LVM
Oct  1 10:21:42 np0005464214 admiring_cartwright[317943]: --> relative data size: 1.0
Oct  1 10:21:42 np0005464214 admiring_cartwright[317943]: --> All data devices are unavailable
Oct  1 10:21:42 np0005464214 systemd[1]: libpod-8a47ebfb2010f7db830dacf36b33c0f8b9b6a7dd9ee7d8eb0f2902b0d957a95e.scope: Deactivated successfully.
Oct  1 10:21:42 np0005464214 systemd[1]: libpod-8a47ebfb2010f7db830dacf36b33c0f8b9b6a7dd9ee7d8eb0f2902b0d957a95e.scope: Consumed 1.061s CPU time.
Oct  1 10:21:42 np0005464214 podman[317972]: 2025-10-01 14:21:42.496969456 +0000 UTC m=+0.039010820 container died 8a47ebfb2010f7db830dacf36b33c0f8b9b6a7dd9ee7d8eb0f2902b0d957a95e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_cartwright, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3)
Oct  1 10:21:42 np0005464214 systemd[1]: var-lib-containers-storage-overlay-0520122b087dc0fb8789d2701505a06f38cf2939a13bea03824f5c183166441e-merged.mount: Deactivated successfully.
Oct  1 10:21:42 np0005464214 podman[317972]: 2025-10-01 14:21:42.56477168 +0000 UTC m=+0.106813034 container remove 8a47ebfb2010f7db830dacf36b33c0f8b9b6a7dd9ee7d8eb0f2902b0d957a95e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_cartwright, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 10:21:42 np0005464214 systemd[1]: libpod-conmon-8a47ebfb2010f7db830dacf36b33c0f8b9b6a7dd9ee7d8eb0f2902b0d957a95e.scope: Deactivated successfully.
Oct  1 10:21:43 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e203 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 10:21:43 np0005464214 podman[318129]: 2025-10-01 14:21:43.243294451 +0000 UTC m=+0.045270099 container create ace20cc15548dcf08dcc63768cd166bcfa1860b5d8d1bc9f5eee7004c1912ef9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_haslett, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 10:21:43 np0005464214 systemd[1]: Started libpod-conmon-ace20cc15548dcf08dcc63768cd166bcfa1860b5d8d1bc9f5eee7004c1912ef9.scope.
Oct  1 10:21:43 np0005464214 systemd[1]: Started libcrun container.
Oct  1 10:21:43 np0005464214 podman[318129]: 2025-10-01 14:21:43.21618467 +0000 UTC m=+0.018160338 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 10:21:43 np0005464214 podman[318129]: 2025-10-01 14:21:43.345801486 +0000 UTC m=+0.147777144 container init ace20cc15548dcf08dcc63768cd166bcfa1860b5d8d1bc9f5eee7004c1912ef9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_haslett, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct  1 10:21:43 np0005464214 podman[318129]: 2025-10-01 14:21:43.356780045 +0000 UTC m=+0.158755733 container start ace20cc15548dcf08dcc63768cd166bcfa1860b5d8d1bc9f5eee7004c1912ef9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_haslett, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Oct  1 10:21:43 np0005464214 cranky_haslett[318145]: 167 167
Oct  1 10:21:43 np0005464214 systemd[1]: libpod-ace20cc15548dcf08dcc63768cd166bcfa1860b5d8d1bc9f5eee7004c1912ef9.scope: Deactivated successfully.
Oct  1 10:21:43 np0005464214 podman[318129]: 2025-10-01 14:21:43.371059418 +0000 UTC m=+0.173035066 container attach ace20cc15548dcf08dcc63768cd166bcfa1860b5d8d1bc9f5eee7004c1912ef9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_haslett, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct  1 10:21:43 np0005464214 podman[318129]: 2025-10-01 14:21:43.371641377 +0000 UTC m=+0.173617075 container died ace20cc15548dcf08dcc63768cd166bcfa1860b5d8d1bc9f5eee7004c1912ef9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_haslett, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Oct  1 10:21:43 np0005464214 systemd[1]: var-lib-containers-storage-overlay-948fe2240f30d4efa527d546569ae693c8f392130033dab0c3c7b197708247b3-merged.mount: Deactivated successfully.
Oct  1 10:21:43 np0005464214 podman[318129]: 2025-10-01 14:21:43.488585581 +0000 UTC m=+0.290561269 container remove ace20cc15548dcf08dcc63768cd166bcfa1860b5d8d1bc9f5eee7004c1912ef9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_haslett, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 10:21:43 np0005464214 systemd[1]: libpod-conmon-ace20cc15548dcf08dcc63768cd166bcfa1860b5d8d1bc9f5eee7004c1912ef9.scope: Deactivated successfully.
Oct  1 10:21:43 np0005464214 nova_compute[260022]: 2025-10-01 14:21:43.732 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 10:21:43 np0005464214 podman[318173]: 2025-10-01 14:21:43.764622008 +0000 UTC m=+0.084544696 container create 0f58d4901191088884218cd870481bac63c65c58bb21538201f9a273d8ff33f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_goodall, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct  1 10:21:43 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2381: 305 pgs: 305 active+clean; 456 KiB data, 220 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:21:43 np0005464214 podman[318173]: 2025-10-01 14:21:43.725093283 +0000 UTC m=+0.045016011 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 10:21:43 np0005464214 systemd[1]: Started libpod-conmon-0f58d4901191088884218cd870481bac63c65c58bb21538201f9a273d8ff33f1.scope.
Oct  1 10:21:43 np0005464214 systemd[1]: Started libcrun container.
Oct  1 10:21:43 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b2bc87d47ae21f28e1cc6edf9f6589a93713adc79164c37c61bdb0afb3823b4e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 10:21:43 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b2bc87d47ae21f28e1cc6edf9f6589a93713adc79164c37c61bdb0afb3823b4e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 10:21:43 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b2bc87d47ae21f28e1cc6edf9f6589a93713adc79164c37c61bdb0afb3823b4e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 10:21:43 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b2bc87d47ae21f28e1cc6edf9f6589a93713adc79164c37c61bdb0afb3823b4e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 10:21:43 np0005464214 podman[318173]: 2025-10-01 14:21:43.931379034 +0000 UTC m=+0.251301742 container init 0f58d4901191088884218cd870481bac63c65c58bb21538201f9a273d8ff33f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_goodall, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default)
Oct  1 10:21:43 np0005464214 podman[318173]: 2025-10-01 14:21:43.942994283 +0000 UTC m=+0.262916971 container start 0f58d4901191088884218cd870481bac63c65c58bb21538201f9a273d8ff33f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_goodall, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 10:21:43 np0005464214 podman[318173]: 2025-10-01 14:21:43.964279719 +0000 UTC m=+0.284202437 container attach 0f58d4901191088884218cd870481bac63c65c58bb21538201f9a273d8ff33f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_goodall, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Oct  1 10:21:44 np0005464214 angry_goodall[318190]: {
Oct  1 10:21:44 np0005464214 angry_goodall[318190]:    "0": [
Oct  1 10:21:44 np0005464214 angry_goodall[318190]:        {
Oct  1 10:21:44 np0005464214 angry_goodall[318190]:            "devices": [
Oct  1 10:21:44 np0005464214 angry_goodall[318190]:                "/dev/loop3"
Oct  1 10:21:44 np0005464214 angry_goodall[318190]:            ],
Oct  1 10:21:44 np0005464214 angry_goodall[318190]:            "lv_name": "ceph_lv0",
Oct  1 10:21:44 np0005464214 angry_goodall[318190]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  1 10:21:44 np0005464214 angry_goodall[318190]:            "lv_size": "21470642176",
Oct  1 10:21:44 np0005464214 angry_goodall[318190]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 10:21:44 np0005464214 angry_goodall[318190]:            "lv_uuid": "wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS",
Oct  1 10:21:44 np0005464214 angry_goodall[318190]:            "name": "ceph_lv0",
Oct  1 10:21:44 np0005464214 angry_goodall[318190]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  1 10:21:44 np0005464214 angry_goodall[318190]:            "tags": {
Oct  1 10:21:44 np0005464214 angry_goodall[318190]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  1 10:21:44 np0005464214 angry_goodall[318190]:                "ceph.block_uuid": "wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS",
Oct  1 10:21:44 np0005464214 angry_goodall[318190]:                "ceph.cephx_lockbox_secret": "",
Oct  1 10:21:44 np0005464214 angry_goodall[318190]:                "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 10:21:44 np0005464214 angry_goodall[318190]:                "ceph.cluster_name": "ceph",
Oct  1 10:21:44 np0005464214 angry_goodall[318190]:                "ceph.crush_device_class": "",
Oct  1 10:21:44 np0005464214 angry_goodall[318190]:                "ceph.encrypted": "0",
Oct  1 10:21:44 np0005464214 angry_goodall[318190]:                "ceph.osd_fsid": "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982",
Oct  1 10:21:44 np0005464214 angry_goodall[318190]:                "ceph.osd_id": "0",
Oct  1 10:21:44 np0005464214 angry_goodall[318190]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 10:21:44 np0005464214 angry_goodall[318190]:                "ceph.type": "block",
Oct  1 10:21:44 np0005464214 angry_goodall[318190]:                "ceph.vdo": "0"
Oct  1 10:21:44 np0005464214 angry_goodall[318190]:            },
Oct  1 10:21:44 np0005464214 angry_goodall[318190]:            "type": "block",
Oct  1 10:21:44 np0005464214 angry_goodall[318190]:            "vg_name": "ceph_vg0"
Oct  1 10:21:44 np0005464214 angry_goodall[318190]:        }
Oct  1 10:21:44 np0005464214 angry_goodall[318190]:    ],
Oct  1 10:21:44 np0005464214 angry_goodall[318190]:    "1": [
Oct  1 10:21:44 np0005464214 angry_goodall[318190]:        {
Oct  1 10:21:44 np0005464214 angry_goodall[318190]:            "devices": [
Oct  1 10:21:44 np0005464214 angry_goodall[318190]:                "/dev/loop4"
Oct  1 10:21:44 np0005464214 angry_goodall[318190]:            ],
Oct  1 10:21:44 np0005464214 angry_goodall[318190]:            "lv_name": "ceph_lv1",
Oct  1 10:21:44 np0005464214 angry_goodall[318190]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  1 10:21:44 np0005464214 angry_goodall[318190]:            "lv_size": "21470642176",
Oct  1 10:21:44 np0005464214 angry_goodall[318190]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f5852bc7-e830-489a-b8a9-42dfbbe71426,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 10:21:44 np0005464214 angry_goodall[318190]:            "lv_uuid": "BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY",
Oct  1 10:21:44 np0005464214 angry_goodall[318190]:            "name": "ceph_lv1",
Oct  1 10:21:44 np0005464214 angry_goodall[318190]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  1 10:21:44 np0005464214 angry_goodall[318190]:            "tags": {
Oct  1 10:21:44 np0005464214 angry_goodall[318190]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  1 10:21:44 np0005464214 angry_goodall[318190]:                "ceph.block_uuid": "BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY",
Oct  1 10:21:44 np0005464214 angry_goodall[318190]:                "ceph.cephx_lockbox_secret": "",
Oct  1 10:21:44 np0005464214 angry_goodall[318190]:                "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 10:21:44 np0005464214 angry_goodall[318190]:                "ceph.cluster_name": "ceph",
Oct  1 10:21:44 np0005464214 angry_goodall[318190]:                "ceph.crush_device_class": "",
Oct  1 10:21:44 np0005464214 angry_goodall[318190]:                "ceph.encrypted": "0",
Oct  1 10:21:44 np0005464214 angry_goodall[318190]:                "ceph.osd_fsid": "f5852bc7-e830-489a-b8a9-42dfbbe71426",
Oct  1 10:21:44 np0005464214 angry_goodall[318190]:                "ceph.osd_id": "1",
Oct  1 10:21:44 np0005464214 angry_goodall[318190]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 10:21:44 np0005464214 angry_goodall[318190]:                "ceph.type": "block",
Oct  1 10:21:44 np0005464214 angry_goodall[318190]:                "ceph.vdo": "0"
Oct  1 10:21:44 np0005464214 angry_goodall[318190]:            },
Oct  1 10:21:44 np0005464214 angry_goodall[318190]:            "type": "block",
Oct  1 10:21:44 np0005464214 angry_goodall[318190]:            "vg_name": "ceph_vg1"
Oct  1 10:21:44 np0005464214 angry_goodall[318190]:        }
Oct  1 10:21:44 np0005464214 angry_goodall[318190]:    ],
Oct  1 10:21:44 np0005464214 angry_goodall[318190]:    "2": [
Oct  1 10:21:44 np0005464214 angry_goodall[318190]:        {
Oct  1 10:21:44 np0005464214 angry_goodall[318190]:            "devices": [
Oct  1 10:21:44 np0005464214 angry_goodall[318190]:                "/dev/loop5"
Oct  1 10:21:44 np0005464214 angry_goodall[318190]:            ],
Oct  1 10:21:44 np0005464214 angry_goodall[318190]:            "lv_name": "ceph_lv2",
Oct  1 10:21:44 np0005464214 angry_goodall[318190]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  1 10:21:44 np0005464214 angry_goodall[318190]:            "lv_size": "21470642176",
Oct  1 10:21:44 np0005464214 angry_goodall[318190]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c4c937e2-a8a8-47c3-af37-fdedb6fff1f9,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 10:21:44 np0005464214 angry_goodall[318190]:            "lv_uuid": "mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK",
Oct  1 10:21:44 np0005464214 angry_goodall[318190]:            "name": "ceph_lv2",
Oct  1 10:21:44 np0005464214 angry_goodall[318190]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  1 10:21:44 np0005464214 angry_goodall[318190]:            "tags": {
Oct  1 10:21:44 np0005464214 angry_goodall[318190]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  1 10:21:44 np0005464214 angry_goodall[318190]:                "ceph.block_uuid": "mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK",
Oct  1 10:21:44 np0005464214 angry_goodall[318190]:                "ceph.cephx_lockbox_secret": "",
Oct  1 10:21:44 np0005464214 angry_goodall[318190]:                "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 10:21:44 np0005464214 angry_goodall[318190]:                "ceph.cluster_name": "ceph",
Oct  1 10:21:44 np0005464214 angry_goodall[318190]:                "ceph.crush_device_class": "",
Oct  1 10:21:44 np0005464214 angry_goodall[318190]:                "ceph.encrypted": "0",
Oct  1 10:21:44 np0005464214 angry_goodall[318190]:                "ceph.osd_fsid": "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9",
Oct  1 10:21:44 np0005464214 angry_goodall[318190]:                "ceph.osd_id": "2",
Oct  1 10:21:44 np0005464214 angry_goodall[318190]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 10:21:44 np0005464214 angry_goodall[318190]:                "ceph.type": "block",
Oct  1 10:21:44 np0005464214 angry_goodall[318190]:                "ceph.vdo": "0"
Oct  1 10:21:44 np0005464214 angry_goodall[318190]:            },
Oct  1 10:21:44 np0005464214 angry_goodall[318190]:            "type": "block",
Oct  1 10:21:44 np0005464214 angry_goodall[318190]:            "vg_name": "ceph_vg2"
Oct  1 10:21:44 np0005464214 angry_goodall[318190]:        }
Oct  1 10:21:44 np0005464214 angry_goodall[318190]:    ]
Oct  1 10:21:44 np0005464214 angry_goodall[318190]: }
Oct  1 10:21:44 np0005464214 systemd[1]: libpod-0f58d4901191088884218cd870481bac63c65c58bb21538201f9a273d8ff33f1.scope: Deactivated successfully.
Oct  1 10:21:44 np0005464214 podman[318173]: 2025-10-01 14:21:44.720790456 +0000 UTC m=+1.040713174 container died 0f58d4901191088884218cd870481bac63c65c58bb21538201f9a273d8ff33f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_goodall, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3)
Oct  1 10:21:44 np0005464214 systemd[1]: var-lib-containers-storage-overlay-b2bc87d47ae21f28e1cc6edf9f6589a93713adc79164c37c61bdb0afb3823b4e-merged.mount: Deactivated successfully.
Oct  1 10:21:44 np0005464214 podman[318173]: 2025-10-01 14:21:44.778717546 +0000 UTC m=+1.098640234 container remove 0f58d4901191088884218cd870481bac63c65c58bb21538201f9a273d8ff33f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_goodall, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 10:21:44 np0005464214 systemd[1]: libpod-conmon-0f58d4901191088884218cd870481bac63c65c58bb21538201f9a273d8ff33f1.scope: Deactivated successfully.
Oct  1 10:21:45 np0005464214 nova_compute[260022]: 2025-10-01 14:21:45.362 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 10:21:45 np0005464214 nova_compute[260022]: 2025-10-01 14:21:45.363 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  1 10:21:45 np0005464214 nova_compute[260022]: 2025-10-01 14:21:45.364 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  1 10:21:45 np0005464214 nova_compute[260022]: 2025-10-01 14:21:45.381 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  1 10:21:45 np0005464214 nova_compute[260022]: 2025-10-01 14:21:45.382 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 10:21:45 np0005464214 podman[318351]: 2025-10-01 14:21:45.611134944 +0000 UTC m=+0.083722590 container create 7ebd7d755da5578772a0eae97da81afbbc0dbf1a89c7f9bcf4c510b4df6de910 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_goodall, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Oct  1 10:21:45 np0005464214 systemd[1]: Started libpod-conmon-7ebd7d755da5578772a0eae97da81afbbc0dbf1a89c7f9bcf4c510b4df6de910.scope.
Oct  1 10:21:45 np0005464214 podman[318351]: 2025-10-01 14:21:45.567999894 +0000 UTC m=+0.040587630 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 10:21:45 np0005464214 systemd[1]: Started libcrun container.
Oct  1 10:21:45 np0005464214 podman[318351]: 2025-10-01 14:21:45.6928704 +0000 UTC m=+0.165458076 container init 7ebd7d755da5578772a0eae97da81afbbc0dbf1a89c7f9bcf4c510b4df6de910 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_goodall, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct  1 10:21:45 np0005464214 podman[318351]: 2025-10-01 14:21:45.703744856 +0000 UTC m=+0.176332492 container start 7ebd7d755da5578772a0eae97da81afbbc0dbf1a89c7f9bcf4c510b4df6de910 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_goodall, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 10:21:45 np0005464214 upbeat_goodall[318367]: 167 167
Oct  1 10:21:45 np0005464214 systemd[1]: libpod-7ebd7d755da5578772a0eae97da81afbbc0dbf1a89c7f9bcf4c510b4df6de910.scope: Deactivated successfully.
Oct  1 10:21:45 np0005464214 podman[318351]: 2025-10-01 14:21:45.707307069 +0000 UTC m=+0.179894795 container attach 7ebd7d755da5578772a0eae97da81afbbc0dbf1a89c7f9bcf4c510b4df6de910 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_goodall, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 10:21:45 np0005464214 podman[318351]: 2025-10-01 14:21:45.711106879 +0000 UTC m=+0.183694605 container died 7ebd7d755da5578772a0eae97da81afbbc0dbf1a89c7f9bcf4c510b4df6de910 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_goodall, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct  1 10:21:45 np0005464214 systemd[1]: var-lib-containers-storage-overlay-229a64058ece69e19958f83daf55e906807f891585f2b01390634532cd62af81-merged.mount: Deactivated successfully.
Oct  1 10:21:45 np0005464214 podman[318351]: 2025-10-01 14:21:45.760427426 +0000 UTC m=+0.233015072 container remove 7ebd7d755da5578772a0eae97da81afbbc0dbf1a89c7f9bcf4c510b4df6de910 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_goodall, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 10:21:45 np0005464214 systemd[1]: libpod-conmon-7ebd7d755da5578772a0eae97da81afbbc0dbf1a89c7f9bcf4c510b4df6de910.scope: Deactivated successfully.
Oct  1 10:21:45 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2382: 305 pgs: 305 active+clean; 456 KiB data, 220 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:21:45 np0005464214 podman[318390]: 2025-10-01 14:21:45.97530107 +0000 UTC m=+0.037895044 container create 1645f8b4ab0657ce78f8b3522a49ba042bf2725723efe1a473296692e2a8769d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_nobel, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct  1 10:21:46 np0005464214 systemd[1]: Started libpod-conmon-1645f8b4ab0657ce78f8b3522a49ba042bf2725723efe1a473296692e2a8769d.scope.
Oct  1 10:21:46 np0005464214 systemd[1]: Started libcrun container.
Oct  1 10:21:46 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c519cc7702f7940df9cfe62c00551c03f9382f4a994896f93b0a6823b0fecc15/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 10:21:46 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c519cc7702f7940df9cfe62c00551c03f9382f4a994896f93b0a6823b0fecc15/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 10:21:46 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c519cc7702f7940df9cfe62c00551c03f9382f4a994896f93b0a6823b0fecc15/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 10:21:46 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c519cc7702f7940df9cfe62c00551c03f9382f4a994896f93b0a6823b0fecc15/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 10:21:46 np0005464214 podman[318390]: 2025-10-01 14:21:45.959762727 +0000 UTC m=+0.022356721 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 10:21:46 np0005464214 podman[318390]: 2025-10-01 14:21:46.058947207 +0000 UTC m=+0.121541231 container init 1645f8b4ab0657ce78f8b3522a49ba042bf2725723efe1a473296692e2a8769d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_nobel, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct  1 10:21:46 np0005464214 podman[318390]: 2025-10-01 14:21:46.072195137 +0000 UTC m=+0.134789151 container start 1645f8b4ab0657ce78f8b3522a49ba042bf2725723efe1a473296692e2a8769d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_nobel, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 10:21:46 np0005464214 podman[318390]: 2025-10-01 14:21:46.076637619 +0000 UTC m=+0.139231633 container attach 1645f8b4ab0657ce78f8b3522a49ba042bf2725723efe1a473296692e2a8769d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_nobel, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Oct  1 10:21:47 np0005464214 cool_nobel[318407]: {
Oct  1 10:21:47 np0005464214 cool_nobel[318407]:    "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982": {
Oct  1 10:21:47 np0005464214 cool_nobel[318407]:        "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 10:21:47 np0005464214 cool_nobel[318407]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  1 10:21:47 np0005464214 cool_nobel[318407]:        "osd_id": 0,
Oct  1 10:21:47 np0005464214 cool_nobel[318407]:        "osd_uuid": "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982",
Oct  1 10:21:47 np0005464214 cool_nobel[318407]:        "type": "bluestore"
Oct  1 10:21:47 np0005464214 cool_nobel[318407]:    },
Oct  1 10:21:47 np0005464214 cool_nobel[318407]:    "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9": {
Oct  1 10:21:47 np0005464214 cool_nobel[318407]:        "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 10:21:47 np0005464214 cool_nobel[318407]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  1 10:21:47 np0005464214 cool_nobel[318407]:        "osd_id": 2,
Oct  1 10:21:47 np0005464214 cool_nobel[318407]:        "osd_uuid": "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9",
Oct  1 10:21:47 np0005464214 cool_nobel[318407]:        "type": "bluestore"
Oct  1 10:21:47 np0005464214 cool_nobel[318407]:    },
Oct  1 10:21:47 np0005464214 cool_nobel[318407]:    "f5852bc7-e830-489a-b8a9-42dfbbe71426": {
Oct  1 10:21:47 np0005464214 cool_nobel[318407]:        "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 10:21:47 np0005464214 cool_nobel[318407]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  1 10:21:47 np0005464214 cool_nobel[318407]:        "osd_id": 1,
Oct  1 10:21:47 np0005464214 cool_nobel[318407]:        "osd_uuid": "f5852bc7-e830-489a-b8a9-42dfbbe71426",
Oct  1 10:21:47 np0005464214 cool_nobel[318407]:        "type": "bluestore"
Oct  1 10:21:47 np0005464214 cool_nobel[318407]:    }
Oct  1 10:21:47 np0005464214 cool_nobel[318407]: }
Oct  1 10:21:47 np0005464214 systemd[1]: libpod-1645f8b4ab0657ce78f8b3522a49ba042bf2725723efe1a473296692e2a8769d.scope: Deactivated successfully.
Oct  1 10:21:47 np0005464214 podman[318390]: 2025-10-01 14:21:47.021663073 +0000 UTC m=+1.084257047 container died 1645f8b4ab0657ce78f8b3522a49ba042bf2725723efe1a473296692e2a8769d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_nobel, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Oct  1 10:21:47 np0005464214 systemd[1]: var-lib-containers-storage-overlay-c519cc7702f7940df9cfe62c00551c03f9382f4a994896f93b0a6823b0fecc15-merged.mount: Deactivated successfully.
Oct  1 10:21:47 np0005464214 podman[318390]: 2025-10-01 14:21:47.073772339 +0000 UTC m=+1.136366353 container remove 1645f8b4ab0657ce78f8b3522a49ba042bf2725723efe1a473296692e2a8769d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_nobel, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 10:21:47 np0005464214 systemd[1]: libpod-conmon-1645f8b4ab0657ce78f8b3522a49ba042bf2725723efe1a473296692e2a8769d.scope: Deactivated successfully.
Oct  1 10:21:47 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  1 10:21:47 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 10:21:47 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  1 10:21:47 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 10:21:47 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev 9849c233-6ed0-4ae1-9295-db46be10ea84 does not exist
Oct  1 10:21:47 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev f5104160-af60-4ab3-8971-e3f64f76b835 does not exist
Oct  1 10:21:47 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 10:21:47 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 10:21:47 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2383: 305 pgs: 305 active+clean; 456 KiB data, 220 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:21:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 10:21:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 10:21:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 10:21:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 10:21:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 10:21:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 10:21:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] Optimize plan auto_2025-10-01_14:21:47
Oct  1 10:21:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  1 10:21:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] do_upmap
Oct  1 10:21:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] pools ['vms', 'default.rgw.meta', 'volumes', 'default.rgw.log', 'cephfs.cephfs.data', 'images', 'backups', 'cephfs.cephfs.meta', '.rgw.root', 'default.rgw.control', '.mgr']
Oct  1 10:21:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] prepared 0/10 changes
Oct  1 10:21:48 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e203 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 10:21:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  1 10:21:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  1 10:21:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  1 10:21:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  1 10:21:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  1 10:21:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  1 10:21:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  1 10:21:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  1 10:21:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  1 10:21:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  1 10:21:49 np0005464214 nova_compute[260022]: 2025-10-01 14:21:49.345 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 10:21:49 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2384: 305 pgs: 305 active+clean; 456 KiB data, 220 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:21:51 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2385: 305 pgs: 305 active+clean; 456 KiB data, 220 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:21:53 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e203 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 10:21:53 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2386: 305 pgs: 305 active+clean; 456 KiB data, 220 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:21:55 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  1 10:21:55 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1455288149' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  1 10:21:55 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  1 10:21:55 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1455288149' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  1 10:21:55 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2387: 305 pgs: 305 active+clean; 456 KiB data, 220 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:21:57 np0005464214 podman[318505]: 2025-10-01 14:21:57.504676801 +0000 UTC m=+0.060312997 container health_status c13c85560319b246b9406aed1b6b3015b2c424d3ffff37d0301b6634f5976b0d (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20250923, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c, container_name=iscsid, managed_by=edpm_ansible, config_id=iscsid, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct  1 10:21:57 np0005464214 podman[318504]: 2025-10-01 14:21:57.504786085 +0000 UTC m=+0.063846690 container health_status a1dc94033b3cdaf0c1939938a446bc6911aa0e2bf0fa0c22c3e618390e8d96e1 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, container_name=multipathd, managed_by=edpm_ansible, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20250923)
Oct  1 10:21:57 np0005464214 podman[318506]: 2025-10-01 14:21:57.504833606 +0000 UTC m=+0.056027930 container health_status dfa509645741d50db256c392f2c7e50ef8a1653f296eb853aefbe3d2ba4746c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250923, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=36bccb96575468ec919301205d8daa2c, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  1 10:21:57 np0005464214 podman[318503]: 2025-10-01 14:21:57.529527171 +0000 UTC m=+0.088485532 container health_status 583cde33fa111e3f81ff9a7adf51b7bee43cd123d54d60383b2d474b3b5066ad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20250923, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct  1 10:21:57 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2388: 305 pgs: 305 active+clean; 456 KiB data, 220 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:21:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] _maybe_adjust
Oct  1 10:21:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 10:21:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  1 10:21:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 10:21:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 10:21:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 10:21:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 10:21:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 10:21:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 10:21:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 10:21:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct  1 10:21:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 10:21:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct  1 10:21:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 10:21:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 10:21:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 10:21:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  1 10:21:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 10:21:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  1 10:21:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 10:21:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  1 10:21:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  1 10:21:57 np0005464214 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  1 10:21:58 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e203 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 10:21:59 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2389: 305 pgs: 305 active+clean; 456 KiB data, 220 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:22:01 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2390: 305 pgs: 305 active+clean; 456 KiB data, 220 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:22:03 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e203 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 10:22:03 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2391: 305 pgs: 305 active+clean; 456 KiB data, 220 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:22:05 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2392: 305 pgs: 305 active+clean; 456 KiB data, 220 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:22:07 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2393: 305 pgs: 305 active+clean; 456 KiB data, 220 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:22:08 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e203 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 10:22:09 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2394: 305 pgs: 305 active+clean; 456 KiB data, 220 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:22:11 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2395: 305 pgs: 305 active+clean; 456 KiB data, 220 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:22:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 14:22:12.348 161890 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 10:22:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 14:22:12.349 161890 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 10:22:12 np0005464214 ovn_metadata_agent[161885]: 2025-10-01 14:22:12.349 161890 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 10:22:13 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e203 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 10:22:13 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2396: 305 pgs: 305 active+clean; 456 KiB data, 220 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:22:15 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2397: 305 pgs: 305 active+clean; 456 KiB data, 220 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:22:17 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2398: 305 pgs: 305 active+clean; 456 KiB data, 220 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:22:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 10:22:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 10:22:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 10:22:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 10:22:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 10:22:17 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 10:22:18 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e203 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 10:22:19 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2399: 305 pgs: 305 active+clean; 456 KiB data, 220 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:22:21 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2400: 305 pgs: 305 active+clean; 456 KiB data, 220 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:22:23 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e203 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 10:22:23 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2401: 305 pgs: 305 active+clean; 456 KiB data, 220 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:22:25 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2402: 305 pgs: 305 active+clean; 456 KiB data, 220 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:22:27 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2403: 305 pgs: 305 active+clean; 456 KiB data, 220 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:22:28 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e203 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 10:22:28 np0005464214 nova_compute[260022]: 2025-10-01 14:22:28.345 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 10:22:28 np0005464214 podman[318581]: 2025-10-01 14:22:28.509510644 +0000 UTC m=+0.061044699 container health_status a1dc94033b3cdaf0c1939938a446bc6911aa0e2bf0fa0c22c3e618390e8d96e1 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20250923, container_name=multipathd, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c)
Oct  1 10:22:28 np0005464214 podman[318583]: 2025-10-01 14:22:28.529853461 +0000 UTC m=+0.067044351 container health_status dfa509645741d50db256c392f2c7e50ef8a1653f296eb853aefbe3d2ba4746c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20250923, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=36bccb96575468ec919301205d8daa2c)
Oct  1 10:22:28 np0005464214 podman[318582]: 2025-10-01 14:22:28.542489332 +0000 UTC m=+0.081166709 container health_status c13c85560319b246b9406aed1b6b3015b2c424d3ffff37d0301b6634f5976b0d (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_managed=true, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250923, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Oct  1 10:22:28 np0005464214 podman[318580]: 2025-10-01 14:22:28.574671974 +0000 UTC m=+0.124142324 container health_status 583cde33fa111e3f81ff9a7adf51b7bee43cd123d54d60383b2d474b3b5066ad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, container_name=ovn_controller, org.label-schema.build-date=20250923, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Oct  1 10:22:29 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2404: 305 pgs: 305 active+clean; 456 KiB data, 220 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:22:31 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2405: 305 pgs: 305 active+clean; 456 KiB data, 220 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:22:33 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e203 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 10:22:33 np0005464214 nova_compute[260022]: 2025-10-01 14:22:33.345 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 10:22:33 np0005464214 nova_compute[260022]: 2025-10-01 14:22:33.380 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 10:22:33 np0005464214 nova_compute[260022]: 2025-10-01 14:22:33.380 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 10:22:33 np0005464214 nova_compute[260022]: 2025-10-01 14:22:33.380 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 10:22:33 np0005464214 nova_compute[260022]: 2025-10-01 14:22:33.381 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  1 10:22:33 np0005464214 nova_compute[260022]: 2025-10-01 14:22:33.381 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 10:22:33 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  1 10:22:33 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1300388876' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  1 10:22:33 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2406: 305 pgs: 305 active+clean; 456 KiB data, 220 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:22:33 np0005464214 nova_compute[260022]: 2025-10-01 14:22:33.846 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 10:22:34 np0005464214 nova_compute[260022]: 2025-10-01 14:22:34.030 2 WARNING nova.virt.libvirt.driver [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  1 10:22:34 np0005464214 nova_compute[260022]: 2025-10-01 14:22:34.032 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5012MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  1 10:22:34 np0005464214 nova_compute[260022]: 2025-10-01 14:22:34.032 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  1 10:22:34 np0005464214 nova_compute[260022]: 2025-10-01 14:22:34.033 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  1 10:22:34 np0005464214 nova_compute[260022]: 2025-10-01 14:22:34.115 2 INFO nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Instance 3d91db2b-812b-47ea-a0f8-0384b4c68597 has allocations against this compute host but is not found in the database.#033[00m
Oct  1 10:22:34 np0005464214 nova_compute[260022]: 2025-10-01 14:22:34.130 2 INFO nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Instance 84424f30-80d3-425e-b60f-86809ad3c076 has allocations against this compute host but is not found in the database.#033[00m
Oct  1 10:22:34 np0005464214 nova_compute[260022]: 2025-10-01 14:22:34.131 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  1 10:22:34 np0005464214 nova_compute[260022]: 2025-10-01 14:22:34.131 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  1 10:22:34 np0005464214 nova_compute[260022]: 2025-10-01 14:22:34.434 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  1 10:22:34 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  1 10:22:34 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2946375056' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  1 10:22:34 np0005464214 nova_compute[260022]: 2025-10-01 14:22:34.850 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.416s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  1 10:22:34 np0005464214 nova_compute[260022]: 2025-10-01 14:22:34.855 2 DEBUG nova.compute.provider_tree [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Inventory has not changed in ProviderTree for provider: c1b9017d-7e6f-44ea-9ee2-bc19313d736f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  1 10:22:34 np0005464214 nova_compute[260022]: 2025-10-01 14:22:34.874 2 DEBUG nova.scheduler.client.report [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Inventory has not changed for provider c1b9017d-7e6f-44ea-9ee2-bc19313d736f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  1 10:22:34 np0005464214 nova_compute[260022]: 2025-10-01 14:22:34.876 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  1 10:22:34 np0005464214 nova_compute[260022]: 2025-10-01 14:22:34.876 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.844s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  1 10:22:34 np0005464214 systemd-logind[818]: New session 56 of user zuul.
Oct  1 10:22:34 np0005464214 systemd[1]: Started Session 56 of User zuul.
Oct  1 10:22:35 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2407: 305 pgs: 305 active+clean; 456 KiB data, 220 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:22:37 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2408: 305 pgs: 305 active+clean; 456 KiB data, 220 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:22:38 np0005464214 ceph-mgr[75103]: log_channel(audit) log [DBG] : from='client.15119 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Oct  1 10:22:38 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e203 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 10:22:38 np0005464214 ceph-mon[74802]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #111. Immutable memtables: 0.
Oct  1 10:22:38 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:22:38.218249) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  1 10:22:38 np0005464214 ceph-mon[74802]: rocksdb: [db/flush_job.cc:856] [default] [JOB 65] Flushing memtable with next log file: 111
Oct  1 10:22:38 np0005464214 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759328558218311, "job": 65, "event": "flush_started", "num_memtables": 1, "num_entries": 1786, "num_deletes": 260, "total_data_size": 2875522, "memory_usage": 2910992, "flush_reason": "Manual Compaction"}
Oct  1 10:22:38 np0005464214 ceph-mon[74802]: rocksdb: [db/flush_job.cc:885] [default] [JOB 65] Level-0 flush table #112: started
Oct  1 10:22:38 np0005464214 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759328558265313, "cf_name": "default", "job": 65, "event": "table_file_creation", "file_number": 112, "file_size": 2836157, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 46675, "largest_seqno": 48460, "table_properties": {"data_size": 2827808, "index_size": 5163, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2117, "raw_key_size": 16713, "raw_average_key_size": 19, "raw_value_size": 2811159, "raw_average_value_size": 3358, "num_data_blocks": 229, "num_entries": 837, "num_filter_entries": 837, "num_deletions": 260, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759328369, "oldest_key_time": 1759328369, "file_creation_time": 1759328558, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e1fb0bd7-c6bf-40f2-8bfb-ffd039289f42", "db_session_id": "NJZTWL88H5HSB4Q4NEC9", "orig_file_number": 112, "seqno_to_time_mapping": "N/A"}}
Oct  1 10:22:38 np0005464214 ceph-mon[74802]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 65] Flush lasted 47147 microseconds, and 7402 cpu microseconds.
Oct  1 10:22:38 np0005464214 ceph-mon[74802]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  1 10:22:38 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:22:38.265401) [db/flush_job.cc:967] [default] [JOB 65] Level-0 flush table #112: 2836157 bytes OK
Oct  1 10:22:38 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:22:38.265474) [db/memtable_list.cc:519] [default] Level-0 commit table #112 started
Oct  1 10:22:38 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:22:38.268611) [db/memtable_list.cc:722] [default] Level-0 commit table #112: memtable #1 done
Oct  1 10:22:38 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:22:38.268637) EVENT_LOG_v1 {"time_micros": 1759328558268629, "job": 65, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  1 10:22:38 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:22:38.268661) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  1 10:22:38 np0005464214 ceph-mon[74802]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 65] Try to delete WAL files size 2867872, prev total WAL file size 2867872, number of live WAL files 2.
Oct  1 10:22:38 np0005464214 ceph-mon[74802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000108.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  1 10:22:38 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:22:38.270067) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0031373631' seq:72057594037927935, type:22 .. '6C6F676D0032303133' seq:0, type:0; will stop at (end)
Oct  1 10:22:38 np0005464214 ceph-mon[74802]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 66] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  1 10:22:38 np0005464214 ceph-mon[74802]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 65 Base level 0, inputs: [112(2769KB)], [110(8032KB)]
Oct  1 10:22:38 np0005464214 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759328558270138, "job": 66, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [112], "files_L6": [110], "score": -1, "input_data_size": 11061765, "oldest_snapshot_seqno": -1}
Oct  1 10:22:38 np0005464214 ceph-mon[74802]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 66] Generated table #113: 6508 keys, 10960929 bytes, temperature: kUnknown
Oct  1 10:22:38 np0005464214 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759328558340518, "cf_name": "default", "job": 66, "event": "table_file_creation", "file_number": 113, "file_size": 10960929, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10915572, "index_size": 27967, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16325, "raw_key_size": 168519, "raw_average_key_size": 25, "raw_value_size": 10795860, "raw_average_value_size": 1658, "num_data_blocks": 1120, "num_entries": 6508, "num_filter_entries": 6508, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759324077, "oldest_key_time": 0, "file_creation_time": 1759328558, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e1fb0bd7-c6bf-40f2-8bfb-ffd039289f42", "db_session_id": "NJZTWL88H5HSB4Q4NEC9", "orig_file_number": 113, "seqno_to_time_mapping": "N/A"}}
Oct  1 10:22:38 np0005464214 ceph-mon[74802]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  1 10:22:38 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:22:38.340931) [db/compaction/compaction_job.cc:1663] [default] [JOB 66] Compacted 1@0 + 1@6 files to L6 => 10960929 bytes
Oct  1 10:22:38 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:22:38.342810) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 156.9 rd, 155.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.7, 7.8 +0.0 blob) out(10.5 +0.0 blob), read-write-amplify(7.8) write-amplify(3.9) OK, records in: 7042, records dropped: 534 output_compression: NoCompression
Oct  1 10:22:38 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:22:38.342848) EVENT_LOG_v1 {"time_micros": 1759328558342831, "job": 66, "event": "compaction_finished", "compaction_time_micros": 70484, "compaction_time_cpu_micros": 26552, "output_level": 6, "num_output_files": 1, "total_output_size": 10960929, "num_input_records": 7042, "num_output_records": 6508, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  1 10:22:38 np0005464214 ceph-mon[74802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000112.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  1 10:22:38 np0005464214 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759328558344173, "job": 66, "event": "table_file_deletion", "file_number": 112}
Oct  1 10:22:38 np0005464214 ceph-mon[74802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000110.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  1 10:22:38 np0005464214 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759328558347417, "job": 66, "event": "table_file_deletion", "file_number": 110}
Oct  1 10:22:38 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:22:38.269980) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 10:22:38 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:22:38.347561) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 10:22:38 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:22:38.347568) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 10:22:38 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:22:38.347570) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 10:22:38 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:22:38.347572) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 10:22:38 np0005464214 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:22:38.347574) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  1 10:22:38 np0005464214 ceph-mgr[75103]: log_channel(audit) log [DBG] : from='client.15121 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Oct  1 10:22:39 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0) v1
Oct  1 10:22:39 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1762045296' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Oct  1 10:22:39 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2409: 305 pgs: 305 active+clean; 456 KiB data, 220 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:22:40 np0005464214 nova_compute[260022]: 2025-10-01 14:22:40.877 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 10:22:40 np0005464214 nova_compute[260022]: 2025-10-01 14:22:40.878 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 10:22:40 np0005464214 nova_compute[260022]: 2025-10-01 14:22:40.878 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 10:22:40 np0005464214 nova_compute[260022]: 2025-10-01 14:22:40.878 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  1 10:22:41 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2410: 305 pgs: 305 active+clean; 456 KiB data, 220 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:22:42 np0005464214 ovs-vsctl[318988]: ovs|00001|db_ctl_base|ERR|no key "dpdk-init" in Open_vSwitch record "." column other_config
Oct  1 10:22:43 np0005464214 virtqemud[260323]: libvirt version: 10.10.0, package: 15.el9 (builder@centos.org, 2025-08-18-13:22:20, )
Oct  1 10:22:43 np0005464214 virtqemud[260323]: hostname: compute-0
Oct  1 10:22:43 np0005464214 virtqemud[260323]: Failed to connect socket to '/var/run/libvirt/virtnetworkd-sock-ro': No such file or directory
Oct  1 10:22:43 np0005464214 virtqemud[260323]: Failed to connect socket to '/var/run/libvirt/virtnwfilterd-sock-ro': No such file or directory
Oct  1 10:22:43 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e203 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 10:22:43 np0005464214 virtqemud[260323]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Oct  1 10:22:43 np0005464214 nova_compute[260022]: 2025-10-01 14:22:43.345 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 10:22:43 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2411: 305 pgs: 305 active+clean; 456 KiB data, 220 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:22:43 np0005464214 ceph-mds[100898]: mds.cephfs.compute-0.vhkcbm asok_command: cache status {prefix=cache status} (starting...)
Oct  1 10:22:43 np0005464214 ceph-mds[100898]: mds.cephfs.compute-0.vhkcbm asok_command: client ls {prefix=client ls} (starting...)
Oct  1 10:22:44 np0005464214 lvm[319329]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Oct  1 10:22:44 np0005464214 lvm[319329]: VG ceph_vg2 finished
Oct  1 10:22:44 np0005464214 lvm[319353]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct  1 10:22:44 np0005464214 lvm[319353]: VG ceph_vg0 finished
Oct  1 10:22:44 np0005464214 lvm[319359]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Oct  1 10:22:44 np0005464214 lvm[319359]: VG ceph_vg1 finished
Oct  1 10:22:44 np0005464214 ceph-mgr[75103]: log_channel(audit) log [DBG] : from='client.15125 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Oct  1 10:22:44 np0005464214 kernel: block loop5: the capability attribute has been deprecated.
Oct  1 10:22:44 np0005464214 ceph-mds[100898]: mds.cephfs.compute-0.vhkcbm asok_command: damage ls {prefix=damage ls} (starting...)
Oct  1 10:22:44 np0005464214 ceph-mgr[75103]: log_channel(audit) log [DBG] : from='client.15127 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Oct  1 10:22:44 np0005464214 ceph-mds[100898]: mds.cephfs.compute-0.vhkcbm asok_command: dump loads {prefix=dump loads} (starting...)
Oct  1 10:22:44 np0005464214 ceph-mds[100898]: mds.cephfs.compute-0.vhkcbm asok_command: dump tree {prefix=dump tree,root=/} (starting...)
Oct  1 10:22:45 np0005464214 ceph-mds[100898]: mds.cephfs.compute-0.vhkcbm asok_command: dump_blocked_ops {prefix=dump_blocked_ops} (starting...)
Oct  1 10:22:45 np0005464214 ceph-mds[100898]: mds.cephfs.compute-0.vhkcbm asok_command: dump_historic_ops {prefix=dump_historic_ops} (starting...)
Oct  1 10:22:45 np0005464214 ceph-mds[100898]: mds.cephfs.compute-0.vhkcbm asok_command: dump_historic_ops_by_duration {prefix=dump_historic_ops_by_duration} (starting...)
Oct  1 10:22:45 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "report"} v 0) v1
Oct  1 10:22:45 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/203451135' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Oct  1 10:22:45 np0005464214 ceph-mgr[75103]: log_channel(audit) log [DBG] : from='client.15133 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Oct  1 10:22:45 np0005464214 ceph-mgr[75103]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Oct  1 10:22:45 np0005464214 ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-mgr-compute-0-puxjpb[75099]: 2025-10-01T14:22:45.519+0000 7f13b53e1640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Oct  1 10:22:45 np0005464214 ceph-mds[100898]: mds.cephfs.compute-0.vhkcbm asok_command: dump_ops_in_flight {prefix=dump_ops_in_flight} (starting...)
Oct  1 10:22:45 np0005464214 ceph-mds[100898]: mds.cephfs.compute-0.vhkcbm asok_command: get subtrees {prefix=get subtrees} (starting...)
Oct  1 10:22:45 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2412: 305 pgs: 305 active+clean; 456 KiB data, 220 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:22:45 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  1 10:22:45 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3268758393' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  1 10:22:45 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "channel": "cephadm"} v 0) v1
Oct  1 10:22:45 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2561900908' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Oct  1 10:22:46 np0005464214 ceph-mds[100898]: mds.cephfs.compute-0.vhkcbm asok_command: ops {prefix=ops} (starting...)
Oct  1 10:22:46 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config log"} v 0) v1
Oct  1 10:22:46 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/239454090' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Oct  1 10:22:46 np0005464214 nova_compute[260022]: 2025-10-01 14:22:46.344 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 10:22:46 np0005464214 nova_compute[260022]: 2025-10-01 14:22:46.345 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  1 10:22:46 np0005464214 nova_compute[260022]: 2025-10-01 14:22:46.345 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  1 10:22:46 np0005464214 nova_compute[260022]: 2025-10-01 14:22:46.365 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  1 10:22:46 np0005464214 nova_compute[260022]: 2025-10-01 14:22:46.365 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 10:22:46 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump"} v 0) v1
Oct  1 10:22:46 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1716181558' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Oct  1 10:22:46 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config-key dump"} v 0) v1
Oct  1 10:22:46 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4288882502' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Oct  1 10:22:46 np0005464214 ceph-mds[100898]: mds.cephfs.compute-0.vhkcbm asok_command: session ls {prefix=session ls} (starting...)
Oct  1 10:22:46 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata"} v 0) v1
Oct  1 10:22:46 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/80145924' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Oct  1 10:22:46 np0005464214 ceph-mds[100898]: mds.cephfs.compute-0.vhkcbm asok_command: status {prefix=status} (starting...)
Oct  1 10:22:47 np0005464214 ceph-mgr[75103]: log_channel(audit) log [DBG] : from='client.15147 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Oct  1 10:22:47 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls"} v 0) v1
Oct  1 10:22:47 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1443807167' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Oct  1 10:22:47 np0005464214 ceph-mgr[75103]: log_channel(audit) log [DBG] : from='client.15151 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Oct  1 10:22:47 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Oct  1 10:22:47 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1954319479' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Oct  1 10:22:47 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2413: 305 pgs: 305 active+clean; 456 KiB data, 220 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:22:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 10:22:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 10:22:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 10:22:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 10:22:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct  1 10:22:47 np0005464214 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct  1 10:22:47 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "features"} v 0) v1
Oct  1 10:22:47 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3285600281' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Oct  1 10:22:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] Optimize plan auto_2025-10-01_14:22:47
Oct  1 10:22:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  1 10:22:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] do_upmap
Oct  1 10:22:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] pools ['vms', '.rgw.root', 'images', 'default.rgw.meta', 'default.rgw.log', 'volumes', 'cephfs.cephfs.data', 'backups', 'cephfs.cephfs.meta', 'default.rgw.control', '.mgr']
Oct  1 10:22:47 np0005464214 ceph-mgr[75103]: [balancer INFO root] prepared 0/10 changes
Oct  1 10:22:48 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr stat"} v 0) v1
Oct  1 10:22:48 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2994494705' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Oct  1 10:22:48 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  1 10:22:48 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  1 10:22:48 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  1 10:22:48 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  1 10:22:48 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  1 10:22:48 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 10:22:48 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev 1df42182-d274-41f1-a5cc-eefc373f49eb does not exist
Oct  1 10:22:48 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev e2512331-12f2-4704-b1bc-b660b8ee3eac does not exist
Oct  1 10:22:48 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev 3d452249-2531-46fc-940d-ea17360d0de5 does not exist
Oct  1 10:22:48 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  1 10:22:48 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  1 10:22:48 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  1 10:22:48 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  1 10:22:48 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  1 10:22:48 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  1 10:22:48 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e203 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 10:22:48 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  1 10:22:48 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 10:22:48 np0005464214 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  1 10:22:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  1 10:22:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  1 10:22:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  1 10:22:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  1 10:22:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  1 10:22:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  1 10:22:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  1 10:22:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  1 10:22:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  1 10:22:48 np0005464214 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  1 10:22:48 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "health", "detail": "detail"} v 0) v1
Oct  1 10:22:48 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1540409583' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Oct  1 10:22:48 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr versions"} v 0) v1
Oct  1 10:22:48 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2409442186' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Oct  1 10:22:48 np0005464214 podman[320230]: 2025-10-01 14:22:48.761856861 +0000 UTC m=+0.046834469 container create f4f3f21461112af91aa821a0e35e6a6edb2946ec681c8d5e6ee56be144a7dcc0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_leakey, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  1 10:22:48 np0005464214 systemd[1]: Started libpod-conmon-f4f3f21461112af91aa821a0e35e6a6edb2946ec681c8d5e6ee56be144a7dcc0.scope.
Oct  1 10:22:48 np0005464214 ceph-mgr[75103]: log_channel(audit) log [DBG] : from='client.15163 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Oct  1 10:22:48 np0005464214 ceph-mgr[75103]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Oct  1 10:22:48 np0005464214 ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-mgr-compute-0-puxjpb[75099]: 2025-10-01T14:22:48.830+0000 7f13b53e1640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Oct  1 10:22:48 np0005464214 podman[320230]: 2025-10-01 14:22:48.737184548 +0000 UTC m=+0.022162196 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 10:22:48 np0005464214 systemd[1]: Started libcrun container.
Oct  1 10:22:48 np0005464214 podman[320230]: 2025-10-01 14:22:48.866557966 +0000 UTC m=+0.151535584 container init f4f3f21461112af91aa821a0e35e6a6edb2946ec681c8d5e6ee56be144a7dcc0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_leakey, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 10:22:48 np0005464214 podman[320230]: 2025-10-01 14:22:48.875475149 +0000 UTC m=+0.160452747 container start f4f3f21461112af91aa821a0e35e6a6edb2946ec681c8d5e6ee56be144a7dcc0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_leakey, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct  1 10:22:48 np0005464214 podman[320230]: 2025-10-01 14:22:48.879692354 +0000 UTC m=+0.164669972 container attach f4f3f21461112af91aa821a0e35e6a6edb2946ec681c8d5e6ee56be144a7dcc0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_leakey, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct  1 10:22:48 np0005464214 systemd[1]: libpod-f4f3f21461112af91aa821a0e35e6a6edb2946ec681c8d5e6ee56be144a7dcc0.scope: Deactivated successfully.
Oct  1 10:22:48 np0005464214 optimistic_leakey[320245]: 167 167
Oct  1 10:22:48 np0005464214 conmon[320245]: conmon f4f3f21461112af91aa8 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-f4f3f21461112af91aa821a0e35e6a6edb2946ec681c8d5e6ee56be144a7dcc0.scope/container/memory.events
Oct  1 10:22:48 np0005464214 podman[320230]: 2025-10-01 14:22:48.884044481 +0000 UTC m=+0.169022129 container died f4f3f21461112af91aa821a0e35e6a6edb2946ec681c8d5e6ee56be144a7dcc0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_leakey, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct  1 10:22:48 np0005464214 ceph-mgr[75103]: log_channel(audit) log [DBG] : from='client.15165 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Oct  1 10:22:48 np0005464214 systemd[1]: var-lib-containers-storage-overlay-a40b4b25b986483a24120b746ae56144c35c47eeebfd88ba8d80f76978c3fb7b-merged.mount: Deactivated successfully.
Oct  1 10:22:48 np0005464214 podman[320230]: 2025-10-01 14:22:48.935720053 +0000 UTC m=+0.220697661 container remove f4f3f21461112af91aa821a0e35e6a6edb2946ec681c8d5e6ee56be144a7dcc0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_leakey, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct  1 10:22:48 np0005464214 systemd[1]: libpod-conmon-f4f3f21461112af91aa821a0e35e6a6edb2946ec681c8d5e6ee56be144a7dcc0.scope: Deactivated successfully.
Oct  1 10:22:49 np0005464214 podman[320317]: 2025-10-01 14:22:49.121197613 +0000 UTC m=+0.049771961 container create 2f566a592e0382ad2f01367db3955dc13528fd306831846301a8e4b4001bdc7a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_sammet, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Oct  1 10:22:49 np0005464214 systemd[1]: Started libpod-conmon-2f566a592e0382ad2f01367db3955dc13528fd306831846301a8e4b4001bdc7a.scope.
Oct  1 10:22:49 np0005464214 systemd[1]: Started libcrun container.
Oct  1 10:22:49 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bca1144401dd9fc389ba9c5fc1871751fd3dae308fc723f709e6bfbf2986246e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 10:22:49 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bca1144401dd9fc389ba9c5fc1871751fd3dae308fc723f709e6bfbf2986246e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 10:22:49 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bca1144401dd9fc389ba9c5fc1871751fd3dae308fc723f709e6bfbf2986246e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 10:22:49 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bca1144401dd9fc389ba9c5fc1871751fd3dae308fc723f709e6bfbf2986246e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 10:22:49 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bca1144401dd9fc389ba9c5fc1871751fd3dae308fc723f709e6bfbf2986246e/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  1 10:22:49 np0005464214 podman[320317]: 2025-10-01 14:22:49.104016498 +0000 UTC m=+0.032590866 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 10:22:49 np0005464214 podman[320317]: 2025-10-01 14:22:49.209510059 +0000 UTC m=+0.138084417 container init 2f566a592e0382ad2f01367db3955dc13528fd306831846301a8e4b4001bdc7a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_sammet, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default)
Oct  1 10:22:49 np0005464214 podman[320317]: 2025-10-01 14:22:49.215142067 +0000 UTC m=+0.143716415 container start 2f566a592e0382ad2f01367db3955dc13528fd306831846301a8e4b4001bdc7a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_sammet, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef)
Oct  1 10:22:49 np0005464214 podman[320317]: 2025-10-01 14:22:49.218084191 +0000 UTC m=+0.146658539 container attach 2f566a592e0382ad2f01367db3955dc13528fd306831846301a8e4b4001bdc7a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_sammet, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0)
Oct  1 10:22:49 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} v 0) v1
Oct  1 10:22:49 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1935810156' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Oct  1 10:22:49 np0005464214 ceph-mgr[75103]: log_channel(audit) log [DBG] : from='client.15169 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Oct  1 10:22:49 np0005464214 nova_compute[260022]: 2025-10-01 14:22:49.345 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  1 10:22:49 np0005464214 ceph-mgr[75103]: log_channel(audit) log [DBG] : from='client.15173 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Oct  1 10:22:49 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} v 0) v1
Oct  1 10:22:49 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/996099488' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Oct  1 10:22:49 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2414: 305 pgs: 305 active+clean; 456 KiB data, 220 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:22:50 np0005464214 ceph-mgr[75103]: log_channel(audit) log [DBG] : from='client.15175 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Oct  1 10:22:50 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump"} v 0) v1
Oct  1 10:22:50 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1305062025' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Oct  1 10:22:50 np0005464214 stoic_sammet[320334]: --> passed data devices: 0 physical, 3 LVM
Oct  1 10:22:50 np0005464214 stoic_sammet[320334]: --> relative data size: 1.0
Oct  1 10:22:50 np0005464214 stoic_sammet[320334]: --> All data devices are unavailable
Oct  1 10:22:50 np0005464214 systemd[1]: libpod-2f566a592e0382ad2f01367db3955dc13528fd306831846301a8e4b4001bdc7a.scope: Deactivated successfully.
Oct  1 10:22:50 np0005464214 podman[320317]: 2025-10-01 14:22:50.27789307 +0000 UTC m=+1.206467439 container died 2f566a592e0382ad2f01367db3955dc13528fd306831846301a8e4b4001bdc7a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_sammet, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 10:22:50 np0005464214 systemd[1]: var-lib-containers-storage-overlay-bca1144401dd9fc389ba9c5fc1871751fd3dae308fc723f709e6bfbf2986246e-merged.mount: Deactivated successfully.
Oct  1 10:22:50 np0005464214 podman[320317]: 2025-10-01 14:22:50.339381864 +0000 UTC m=+1.267956212 container remove 2f566a592e0382ad2f01367db3955dc13528fd306831846301a8e4b4001bdc7a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_sammet, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2)
Oct  1 10:22:50 np0005464214 systemd[1]: libpod-conmon-2f566a592e0382ad2f01367db3955dc13528fd306831846301a8e4b4001bdc7a.scope: Deactivated successfully.
Oct  1 10:22:50 np0005464214 ceph-mgr[75103]: log_channel(audit) log [DBG] : from='client.15179 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fbdcb000/0x0/0x4ffc00000, data 0xd67ccd/0xe52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1038906 data_alloc: 218103808 data_used: 311296
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 77897728 unmapped: 26927104 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 77897728 unmapped: 26927104 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 77897728 unmapped: 26927104 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 77897728 unmapped: 26927104 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fbdcb000/0x0/0x4ffc00000, data 0xd67ccd/0xe52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 77897728 unmapped: 26927104 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1038906 data_alloc: 218103808 data_used: 311296
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 77897728 unmapped: 26927104 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 77897728 unmapped: 26927104 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 77897728 unmapped: 26927104 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fbdcb000/0x0/0x4ffc00000, data 0xd67ccd/0xe52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 77897728 unmapped: 26927104 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 77897728 unmapped: 26927104 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1038906 data_alloc: 218103808 data_used: 311296
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 77897728 unmapped: 26927104 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fbdcb000/0x0/0x4ffc00000, data 0xd67ccd/0xe52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 77897728 unmapped: 26927104 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 77897728 unmapped: 26927104 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 77889536 unmapped: 26935296 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 77889536 unmapped: 26935296 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1038906 data_alloc: 218103808 data_used: 311296
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 77889536 unmapped: 26935296 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fbdcb000/0x0/0x4ffc00000, data 0xd67ccd/0xe52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 77889536 unmapped: 26935296 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 77889536 unmapped: 26935296 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 77889536 unmapped: 26935296 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 77889536 unmapped: 26935296 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fbdcb000/0x0/0x4ffc00000, data 0xd67ccd/0xe52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1038906 data_alloc: 218103808 data_used: 311296
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 77889536 unmapped: 26935296 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fbdcb000/0x0/0x4ffc00000, data 0xd67ccd/0xe52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 77889536 unmapped: 26935296 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 77889536 unmapped: 26935296 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 77889536 unmapped: 26935296 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 77889536 unmapped: 26935296 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1038906 data_alloc: 218103808 data_used: 311296
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 77889536 unmapped: 26935296 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fbdcb000/0x0/0x4ffc00000, data 0xd67ccd/0xe52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 77889536 unmapped: 26935296 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 77889536 unmapped: 26935296 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 77889536 unmapped: 26935296 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 77889536 unmapped: 26935296 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1038906 data_alloc: 218103808 data_used: 311296
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 77889536 unmapped: 26935296 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 2400.1 total, 600.0 interval#012Cumulative writes: 6875 writes, 27K keys, 6875 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s#012Cumulative WAL: 6875 writes, 1441 syncs, 4.77 writes per sync, written: 0.02 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 826 writes, 1986 keys, 826 commit groups, 1.0 writes per commit group, ingest: 1.10 MB, 0.00 MB/s#012Interval WAL: 826 writes, 369 syncs, 2.24 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fbdcb000/0x0/0x4ffc00000, data 0xd67ccd/0xe52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 77889536 unmapped: 26935296 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 77889536 unmapped: 26935296 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 77889536 unmapped: 26935296 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fbdcb000/0x0/0x4ffc00000, data 0xd67ccd/0xe52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 77889536 unmapped: 26935296 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1038906 data_alloc: 218103808 data_used: 311296
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 77889536 unmapped: 26935296 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 77889536 unmapped: 26935296 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 77889536 unmapped: 26935296 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 77889536 unmapped: 26935296 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fbdcb000/0x0/0x4ffc00000, data 0xd67ccd/0xe52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 77889536 unmapped: 26935296 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1038906 data_alloc: 218103808 data_used: 311296
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 77889536 unmapped: 26935296 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 77889536 unmapped: 26935296 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 77889536 unmapped: 26935296 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 77889536 unmapped: 26935296 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fbdcb000/0x0/0x4ffc00000, data 0xd67ccd/0xe52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 77889536 unmapped: 26935296 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1038906 data_alloc: 218103808 data_used: 311296
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 77889536 unmapped: 26935296 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fbdcb000/0x0/0x4ffc00000, data 0xd67ccd/0xe52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 77889536 unmapped: 26935296 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 77889536 unmapped: 26935296 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 77889536 unmapped: 26935296 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 77889536 unmapped: 26935296 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1038906 data_alloc: 218103808 data_used: 311296
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 77889536 unmapped: 26935296 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 77889536 unmapped: 26935296 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fbdcb000/0x0/0x4ffc00000, data 0xd67ccd/0xe52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 77889536 unmapped: 26935296 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 77889536 unmapped: 26935296 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 77889536 unmapped: 26935296 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1038906 data_alloc: 218103808 data_used: 311296
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 77889536 unmapped: 26935296 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fbdcb000/0x0/0x4ffc00000, data 0xd67ccd/0xe52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fbdcb000/0x0/0x4ffc00000, data 0xd67ccd/0xe52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 77889536 unmapped: 26935296 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 77889536 unmapped: 26935296 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 77889536 unmapped: 26935296 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fbdcb000/0x0/0x4ffc00000, data 0xd67ccd/0xe52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 77889536 unmapped: 26935296 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1038906 data_alloc: 218103808 data_used: 311296
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 77889536 unmapped: 26935296 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fbdcb000/0x0/0x4ffc00000, data 0xd67ccd/0xe52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 77889536 unmapped: 26935296 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 77889536 unmapped: 26935296 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fbdcb000/0x0/0x4ffc00000, data 0xd67ccd/0xe52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 77889536 unmapped: 26935296 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fbdcb000/0x0/0x4ffc00000, data 0xd67ccd/0xe52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 77889536 unmapped: 26935296 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1038906 data_alloc: 218103808 data_used: 311296
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fbdcb000/0x0/0x4ffc00000, data 0xd67ccd/0xe52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 77889536 unmapped: 26935296 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fbdcb000/0x0/0x4ffc00000, data 0xd67ccd/0xe52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 77897728 unmapped: 26927104 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 77897728 unmapped: 26927104 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 77897728 unmapped: 26927104 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fbdcb000/0x0/0x4ffc00000, data 0xd67ccd/0xe52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 77897728 unmapped: 26927104 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1038906 data_alloc: 218103808 data_used: 311296
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 77897728 unmapped: 26927104 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 77897728 unmapped: 26927104 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fbdcb000/0x0/0x4ffc00000, data 0xd67ccd/0xe52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 77897728 unmapped: 26927104 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 77897728 unmapped: 26927104 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 77897728 unmapped: 26927104 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fbdcb000/0x0/0x4ffc00000, data 0xd67ccd/0xe52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1038906 data_alloc: 218103808 data_used: 311296
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 77897728 unmapped: 26927104 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 77897728 unmapped: 26927104 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 77897728 unmapped: 26927104 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fbdcb000/0x0/0x4ffc00000, data 0xd67ccd/0xe52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 77897728 unmapped: 26927104 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fbdcb000/0x0/0x4ffc00000, data 0xd67ccd/0xe52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 77897728 unmapped: 26927104 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1038906 data_alloc: 218103808 data_used: 311296
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 77897728 unmapped: 26927104 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 77897728 unmapped: 26927104 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fbdcb000/0x0/0x4ffc00000, data 0xd67ccd/0xe52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 77897728 unmapped: 26927104 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 77897728 unmapped: 26927104 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 77897728 unmapped: 26927104 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1038906 data_alloc: 218103808 data_used: 311296
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 77897728 unmapped: 26927104 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 77897728 unmapped: 26927104 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fbdcb000/0x0/0x4ffc00000, data 0xd67ccd/0xe52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 77897728 unmapped: 26927104 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 77897728 unmapped: 26927104 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 77897728 unmapped: 26927104 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1038906 data_alloc: 218103808 data_used: 311296
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 77897728 unmapped: 26927104 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 216.638732910s of 216.647628784s, submitted: 13
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fbdcc000/0x0/0x4ffc00000, data 0xd67ccd/0xe52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [0,0,0,1])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 77938688 unmapped: 26886144 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 77996032 unmapped: 26828800 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 77996032 unmapped: 26828800 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 77996032 unmapped: 26828800 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1038026 data_alloc: 218103808 data_used: 311296
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 77996032 unmapped: 26828800 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 77996032 unmapped: 26828800 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fbdcc000/0x0/0x4ffc00000, data 0xd67ccd/0xe52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 77996032 unmapped: 26828800 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 77996032 unmapped: 26828800 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 77996032 unmapped: 26828800 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1038026 data_alloc: 218103808 data_used: 311296
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 77996032 unmapped: 26828800 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 77996032 unmapped: 26828800 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fbdcc000/0x0/0x4ffc00000, data 0xd67ccd/0xe52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 77996032 unmapped: 26828800 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 77996032 unmapped: 26828800 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 77996032 unmapped: 26828800 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1038026 data_alloc: 218103808 data_used: 311296
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fbdcc000/0x0/0x4ffc00000, data 0xd67ccd/0xe52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 77996032 unmapped: 26828800 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fbdcc000/0x0/0x4ffc00000, data 0xd67ccd/0xe52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 78004224 unmapped: 26820608 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 78004224 unmapped: 26820608 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 78004224 unmapped: 26820608 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fbdcc000/0x0/0x4ffc00000, data 0xd67ccd/0xe52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fbdcc000/0x0/0x4ffc00000, data 0xd67ccd/0xe52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 78004224 unmapped: 26820608 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1038026 data_alloc: 218103808 data_used: 311296
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 78004224 unmapped: 26820608 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 78004224 unmapped: 26820608 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 78004224 unmapped: 26820608 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 78004224 unmapped: 26820608 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 78004224 unmapped: 26820608 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1038026 data_alloc: 218103808 data_used: 311296
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fbdcc000/0x0/0x4ffc00000, data 0xd67ccd/0xe52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 78004224 unmapped: 26820608 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 78004224 unmapped: 26820608 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 78004224 unmapped: 26820608 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 78004224 unmapped: 26820608 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 78004224 unmapped: 26820608 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1038026 data_alloc: 218103808 data_used: 311296
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fbdcc000/0x0/0x4ffc00000, data 0xd67ccd/0xe52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 78004224 unmapped: 26820608 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 78004224 unmapped: 26820608 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 78004224 unmapped: 26820608 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fbdcc000/0x0/0x4ffc00000, data 0xd67ccd/0xe52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 78004224 unmapped: 26820608 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 78004224 unmapped: 26820608 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1038026 data_alloc: 218103808 data_used: 311296
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 78004224 unmapped: 26820608 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 78004224 unmapped: 26820608 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 78004224 unmapped: 26820608 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 78004224 unmapped: 26820608 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fbdcc000/0x0/0x4ffc00000, data 0xd67ccd/0xe52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 78004224 unmapped: 26820608 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1038026 data_alloc: 218103808 data_used: 311296
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 78004224 unmapped: 26820608 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 78004224 unmapped: 26820608 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 78004224 unmapped: 26820608 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 78012416 unmapped: 26812416 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 78012416 unmapped: 26812416 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fbdcc000/0x0/0x4ffc00000, data 0xd67ccd/0xe52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1038026 data_alloc: 218103808 data_used: 311296
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 78012416 unmapped: 26812416 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 78012416 unmapped: 26812416 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 78012416 unmapped: 26812416 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fbdcc000/0x0/0x4ffc00000, data 0xd67ccd/0xe52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 78012416 unmapped: 26812416 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 78012416 unmapped: 26812416 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1038026 data_alloc: 218103808 data_used: 311296
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 78012416 unmapped: 26812416 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 78012416 unmapped: 26812416 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 78012416 unmapped: 26812416 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 78012416 unmapped: 26812416 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fbdcc000/0x0/0x4ffc00000, data 0xd67ccd/0xe52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 78012416 unmapped: 26812416 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1038026 data_alloc: 218103808 data_used: 311296
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 78012416 unmapped: 26812416 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 78012416 unmapped: 26812416 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 78012416 unmapped: 26812416 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fbdcc000/0x0/0x4ffc00000, data 0xd67ccd/0xe52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 78012416 unmapped: 26812416 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 78012416 unmapped: 26812416 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fbdcc000/0x0/0x4ffc00000, data 0xd67ccd/0xe52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1038026 data_alloc: 218103808 data_used: 311296
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 78012416 unmapped: 26812416 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 78012416 unmapped: 26812416 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 78012416 unmapped: 26812416 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 78012416 unmapped: 26812416 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fbdcc000/0x0/0x4ffc00000, data 0xd67ccd/0xe52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 78012416 unmapped: 26812416 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1038026 data_alloc: 218103808 data_used: 311296
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 78012416 unmapped: 26812416 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 78012416 unmapped: 26812416 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 78012416 unmapped: 26812416 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 78012416 unmapped: 26812416 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fbdcc000/0x0/0x4ffc00000, data 0xd67ccd/0xe52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 78012416 unmapped: 26812416 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1038026 data_alloc: 218103808 data_used: 311296
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 78012416 unmapped: 26812416 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 78012416 unmapped: 26812416 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fbdcc000/0x0/0x4ffc00000, data 0xd67ccd/0xe52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 78012416 unmapped: 26812416 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 78012416 unmapped: 26812416 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 78012416 unmapped: 26812416 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1038026 data_alloc: 218103808 data_used: 311296
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 78012416 unmapped: 26812416 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 78012416 unmapped: 26812416 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fbdcc000/0x0/0x4ffc00000, data 0xd67ccd/0xe52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 78012416 unmapped: 26812416 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 78012416 unmapped: 26812416 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 78012416 unmapped: 26812416 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fbdcc000/0x0/0x4ffc00000, data 0xd67ccd/0xe52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1038026 data_alloc: 218103808 data_used: 311296
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 78012416 unmapped: 26812416 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 78012416 unmapped: 26812416 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 78012416 unmapped: 26812416 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fbdcc000/0x0/0x4ffc00000, data 0xd67ccd/0xe52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 78012416 unmapped: 26812416 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 78012416 unmapped: 26812416 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1038026 data_alloc: 218103808 data_used: 311296
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 78012416 unmapped: 26812416 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 78012416 unmapped: 26812416 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fbdcc000/0x0/0x4ffc00000, data 0xd67ccd/0xe52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 78012416 unmapped: 26812416 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 78012416 unmapped: 26812416 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 78012416 unmapped: 26812416 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fbdcc000/0x0/0x4ffc00000, data 0xd67ccd/0xe52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1038026 data_alloc: 218103808 data_used: 311296
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 78012416 unmapped: 26812416 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 78012416 unmapped: 26812416 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 78012416 unmapped: 26812416 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fbdcc000/0x0/0x4ffc00000, data 0xd67ccd/0xe52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 78012416 unmapped: 26812416 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 78012416 unmapped: 26812416 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1038026 data_alloc: 218103808 data_used: 311296
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 78012416 unmapped: 26812416 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 78012416 unmapped: 26812416 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 78012416 unmapped: 26812416 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fbdcc000/0x0/0x4ffc00000, data 0xd67ccd/0xe52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 78012416 unmapped: 26812416 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 78012416 unmapped: 26812416 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1038026 data_alloc: 218103808 data_used: 311296
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 99.153511047s of 99.443305969s, submitted: 90
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fbdcc000/0x0/0x4ffc00000, data 0xd67ccd/0xe52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 78012416 unmapped: 26812416 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 160 handle_osd_map epochs [161,161], i have 160, src has [1,161]
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 161 ms_handle_reset con 0x55b1b2762c00 session 0x55b1b1ce9680
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 78020608 unmapped: 26804224 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 161 handle_osd_map epochs [161,162], i have 161, src has [1,162]
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 162 ms_handle_reset con 0x55b1b0261800 session 0x55b1b1ce9860
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 78127104 unmapped: 26697728 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 162 handle_osd_map epochs [162,163], i have 162, src has [1,163]
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 163 ms_handle_reset con 0x55b1b2762000 session 0x55b1b1ce9e00
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 78176256 unmapped: 26648576 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 163 handle_osd_map epochs [164,164], i have 163, src has [1,164]
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 164 ms_handle_reset con 0x55b1b2762400 session 0x55b1b1cc9a40
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 78225408 unmapped: 26599424 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 164 heartbeat osd_stat(store_statfs(0x4fbdbd000/0x0/0x4ffc00000, data 0xd6cff0/0xe61000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 164 handle_osd_map epochs [164,165], i have 164, src has [1,165]
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 165 ms_handle_reset con 0x55b1b2762800 session 0x55b1b267f2c0
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 165 heartbeat osd_stat(store_statfs(0x4fbdb8000/0x0/0x4ffc00000, data 0xd6f8c6/0xe65000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1068358 data_alloc: 218103808 data_used: 327680
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 78225408 unmapped: 26599424 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 165 handle_osd_map epochs [166,166], i have 165, src has [1,166]
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 166 ms_handle_reset con 0x55b1b1475000 session 0x55b1b267f4a0
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 78258176 unmapped: 26566656 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 78258176 unmapped: 26566656 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 166 handle_osd_map epochs [167,167], i have 166, src has [1,167]
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 167 ms_handle_reset con 0x55b1b0261800 session 0x55b1b2692b40
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 78274560 unmapped: 26550272 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 167 heartbeat osd_stat(store_statfs(0x4fbdab000/0x0/0x4ffc00000, data 0xd75f83/0xe72000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 78282752 unmapped: 26542080 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1083078 data_alloc: 218103808 data_used: 344064
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 167 handle_osd_map epochs [167,168], i have 167, src has [1,168]
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.824281693s of 10.095853806s, submitted: 78
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 78331904 unmapped: 26492928 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 168 ms_handle_reset con 0x55b1b2762000 session 0x55b1b26932c0
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 168 handle_osd_map epochs [168,169], i have 168, src has [1,169]
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 78471168 unmapped: 26353664 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 169 ms_handle_reset con 0x55b1b2762800 session 0x55b1b26a5680
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 169 handle_osd_map epochs [170,170], i have 169, src has [1,170]
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 170 ms_handle_reset con 0x55b1b2762400 session 0x55b1b1c51860
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 78536704 unmapped: 26288128 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 170 ms_handle_reset con 0x55b1b1474000 session 0x55b1b26a5e00
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 170 handle_osd_map epochs [170,171], i have 170, src has [1,171]
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 78602240 unmapped: 26222592 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 171 ms_handle_reset con 0x55b1b1474000 session 0x55b1b1bc8000
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 171 ms_handle_reset con 0x55b1b0261800 session 0x55b1b1e0b2c0
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 171 handle_osd_map epochs [171,172], i have 171, src has [1,172]
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 78684160 unmapped: 26140672 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 172 ms_handle_reset con 0x55b1b2762000 session 0x55b1afe63a40
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 172 heartbeat osd_stat(store_statfs(0x4fb191000/0x0/0x4ffc00000, data 0xd7ccd3/0xe79000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1097478 data_alloc: 218103808 data_used: 368640
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 172 handle_osd_map epochs [173,173], i have 172, src has [1,173]
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 78741504 unmapped: 26083328 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 173 ms_handle_reset con 0x55b1b2762800 session 0x55b1b27061e0
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 173 ms_handle_reset con 0x55b1b2762400 session 0x55b1b1c06000
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 78782464 unmapped: 26042368 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 173 handle_osd_map epochs [173,174], i have 173, src has [1,174]
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 174 heartbeat osd_stat(store_statfs(0x4fb994000/0x0/0x4ffc00000, data 0xd7e48e/0xe79000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 79863808 unmapped: 24961024 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 174 heartbeat osd_stat(store_statfs(0x4fb994000/0x0/0x4ffc00000, data 0xd7e48e/0xe79000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 79863808 unmapped: 24961024 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 79863808 unmapped: 24961024 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1101998 data_alloc: 218103808 data_used: 397312
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.224699020s of 10.097949028s, submitted: 242
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 80912384 unmapped: 23912448 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 174 handle_osd_map epochs [175,175], i have 174, src has [1,175]
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 175 ms_handle_reset con 0x55b1b0261800 session 0x55b1b27065a0
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 175 heartbeat osd_stat(store_statfs(0x4fa7f0000/0x0/0x4ffc00000, data 0xd7ff2d/0xe7c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 80928768 unmapped: 23896064 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 80928768 unmapped: 23896064 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 175 handle_osd_map epochs [176,176], i have 175, src has [1,176]
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 80928768 unmapped: 23896064 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 176 heartbeat osd_stat(store_statfs(0x4fa7ea000/0x0/0x4ffc00000, data 0xd835ac/0xe83000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 80928768 unmapped: 23896064 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1109709 data_alloc: 218103808 data_used: 405504
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 80928768 unmapped: 23896064 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 176 heartbeat osd_stat(store_statfs(0x4fa7ea000/0x0/0x4ffc00000, data 0xd835ac/0xe83000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 80928768 unmapped: 23896064 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 176 handle_osd_map epochs [176,177], i have 176, src has [1,177]
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 177 ms_handle_reset con 0x55b1b1474000 session 0x55b1b2706b40
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 80945152 unmapped: 23879680 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 80945152 unmapped: 23879680 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 177 heartbeat osd_stat(store_statfs(0x4fa7e7000/0x0/0x4ffc00000, data 0xd8515a/0xe85000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 80945152 unmapped: 23879680 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1112484 data_alloc: 218103808 data_used: 413696
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 80945152 unmapped: 23879680 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 80945152 unmapped: 23879680 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 80945152 unmapped: 23879680 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 80945152 unmapped: 23879680 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 80945152 unmapped: 23879680 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 177 heartbeat osd_stat(store_statfs(0x4fa7e7000/0x0/0x4ffc00000, data 0xd8515a/0xe85000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1112484 data_alloc: 218103808 data_used: 413696
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 80945152 unmapped: 23879680 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 80945152 unmapped: 23879680 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 80945152 unmapped: 23879680 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 177 heartbeat osd_stat(store_statfs(0x4fa7e7000/0x0/0x4ffc00000, data 0xd8515a/0xe85000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 177 handle_osd_map epochs [178,178], i have 177, src has [1,178]
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 17.135969162s of 17.270618439s, submitted: 89
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 80953344 unmapped: 23871488 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 80953344 unmapped: 23871488 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1115266 data_alloc: 218103808 data_used: 413696
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 80953344 unmapped: 23871488 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 80953344 unmapped: 23871488 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 80953344 unmapped: 23871488 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 178 heartbeat osd_stat(store_statfs(0x4fa7e5000/0x0/0x4ffc00000, data 0xd86bbd/0xe88000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 80953344 unmapped: 23871488 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 80953344 unmapped: 23871488 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1115266 data_alloc: 218103808 data_used: 413696
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 80953344 unmapped: 23871488 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 80953344 unmapped: 23871488 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 80953344 unmapped: 23871488 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 80953344 unmapped: 23871488 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 178 heartbeat osd_stat(store_statfs(0x4fa7e5000/0x0/0x4ffc00000, data 0xd86bbd/0xe88000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 80953344 unmapped: 23871488 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1115266 data_alloc: 218103808 data_used: 413696
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 80953344 unmapped: 23871488 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 80953344 unmapped: 23871488 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 80953344 unmapped: 23871488 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 178 heartbeat osd_stat(store_statfs(0x4fa7e5000/0x0/0x4ffc00000, data 0xd86bbd/0xe88000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 80953344 unmapped: 23871488 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 178 heartbeat osd_stat(store_statfs(0x4fa7e5000/0x0/0x4ffc00000, data 0xd86bbd/0xe88000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 80953344 unmapped: 23871488 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1115266 data_alloc: 218103808 data_used: 413696
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 80953344 unmapped: 23871488 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 80953344 unmapped: 23871488 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 80953344 unmapped: 23871488 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 80953344 unmapped: 23871488 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 178 heartbeat osd_stat(store_statfs(0x4fa7e5000/0x0/0x4ffc00000, data 0xd86bbd/0xe88000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 80953344 unmapped: 23871488 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1115266 data_alloc: 218103808 data_used: 413696
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 80953344 unmapped: 23871488 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 80953344 unmapped: 23871488 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 178 heartbeat osd_stat(store_statfs(0x4fa7e5000/0x0/0x4ffc00000, data 0xd86bbd/0xe88000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 80953344 unmapped: 23871488 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 80953344 unmapped: 23871488 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 80953344 unmapped: 23871488 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1115266 data_alloc: 218103808 data_used: 413696
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 80953344 unmapped: 23871488 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 80953344 unmapped: 23871488 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 80953344 unmapped: 23871488 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 178 heartbeat osd_stat(store_statfs(0x4fa7e5000/0x0/0x4ffc00000, data 0xd86bbd/0xe88000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 80953344 unmapped: 23871488 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 80953344 unmapped: 23871488 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1115266 data_alloc: 218103808 data_used: 413696
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 80953344 unmapped: 23871488 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 178 heartbeat osd_stat(store_statfs(0x4fa7e5000/0x0/0x4ffc00000, data 0xd86bbd/0xe88000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 80953344 unmapped: 23871488 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 80953344 unmapped: 23871488 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 178 heartbeat osd_stat(store_statfs(0x4fa7e5000/0x0/0x4ffc00000, data 0xd86bbd/0xe88000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 80953344 unmapped: 23871488 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 178 heartbeat osd_stat(store_statfs(0x4fa7e5000/0x0/0x4ffc00000, data 0xd86bbd/0xe88000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 80953344 unmapped: 23871488 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1115266 data_alloc: 218103808 data_used: 413696
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 80953344 unmapped: 23871488 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 178 heartbeat osd_stat(store_statfs(0x4fa7e5000/0x0/0x4ffc00000, data 0xd86bbd/0xe88000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 80953344 unmapped: 23871488 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 80953344 unmapped: 23871488 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 80953344 unmapped: 23871488 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 80953344 unmapped: 23871488 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 178 heartbeat osd_stat(store_statfs(0x4fa7e5000/0x0/0x4ffc00000, data 0xd86bbd/0xe88000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1115266 data_alloc: 218103808 data_used: 413696
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 80953344 unmapped: 23871488 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 80953344 unmapped: 23871488 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 80953344 unmapped: 23871488 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 178 heartbeat osd_stat(store_statfs(0x4fa7e5000/0x0/0x4ffc00000, data 0xd86bbd/0xe88000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 80953344 unmapped: 23871488 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 80953344 unmapped: 23871488 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1115266 data_alloc: 218103808 data_used: 413696
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 80953344 unmapped: 23871488 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 178 heartbeat osd_stat(store_statfs(0x4fa7e5000/0x0/0x4ffc00000, data 0xd86bbd/0xe88000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 80953344 unmapped: 23871488 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 178 heartbeat osd_stat(store_statfs(0x4fa7e5000/0x0/0x4ffc00000, data 0xd86bbd/0xe88000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 80953344 unmapped: 23871488 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 80953344 unmapped: 23871488 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 80953344 unmapped: 23871488 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1115266 data_alloc: 218103808 data_used: 413696
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 80953344 unmapped: 23871488 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 178 heartbeat osd_stat(store_statfs(0x4fa7e5000/0x0/0x4ffc00000, data 0xd86bbd/0xe88000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 80953344 unmapped: 23871488 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 80953344 unmapped: 23871488 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 80953344 unmapped: 23871488 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 178 heartbeat osd_stat(store_statfs(0x4fa7e5000/0x0/0x4ffc00000, data 0xd86bbd/0xe88000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 80953344 unmapped: 23871488 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1115266 data_alloc: 218103808 data_used: 413696
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 80953344 unmapped: 23871488 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 80953344 unmapped: 23871488 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 178 heartbeat osd_stat(store_statfs(0x4fa7e5000/0x0/0x4ffc00000, data 0xd86bbd/0xe88000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 80953344 unmapped: 23871488 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 80953344 unmapped: 23871488 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 80953344 unmapped: 23871488 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1115266 data_alloc: 218103808 data_used: 413696
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 80961536 unmapped: 23863296 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 178 heartbeat osd_stat(store_statfs(0x4fa7e5000/0x0/0x4ffc00000, data 0xd86bbd/0xe88000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 80961536 unmapped: 23863296 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 178 heartbeat osd_stat(store_statfs(0x4fa7e5000/0x0/0x4ffc00000, data 0xd86bbd/0xe88000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 80961536 unmapped: 23863296 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 80961536 unmapped: 23863296 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 80961536 unmapped: 23863296 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1115266 data_alloc: 218103808 data_used: 413696
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 80961536 unmapped: 23863296 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 178 heartbeat osd_stat(store_statfs(0x4fa7e5000/0x0/0x4ffc00000, data 0xd86bbd/0xe88000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 80961536 unmapped: 23863296 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 80961536 unmapped: 23863296 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 80961536 unmapped: 23863296 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 178 heartbeat osd_stat(store_statfs(0x4fa7e5000/0x0/0x4ffc00000, data 0xd86bbd/0xe88000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 80961536 unmapped: 23863296 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1115266 data_alloc: 218103808 data_used: 413696
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 80961536 unmapped: 23863296 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 80961536 unmapped: 23863296 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 178 heartbeat osd_stat(store_statfs(0x4fa7e5000/0x0/0x4ffc00000, data 0xd86bbd/0xe88000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 80961536 unmapped: 23863296 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 80961536 unmapped: 23863296 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 80961536 unmapped: 23863296 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1115266 data_alloc: 218103808 data_used: 413696
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 80969728 unmapped: 23855104 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 178 heartbeat osd_stat(store_statfs(0x4fa7e5000/0x0/0x4ffc00000, data 0xd86bbd/0xe88000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 80969728 unmapped: 23855104 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 80969728 unmapped: 23855104 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 80969728 unmapped: 23855104 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 80969728 unmapped: 23855104 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1115266 data_alloc: 218103808 data_used: 413696
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 80969728 unmapped: 23855104 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 178 heartbeat osd_stat(store_statfs(0x4fa7e5000/0x0/0x4ffc00000, data 0xd86bbd/0xe88000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 80969728 unmapped: 23855104 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 80969728 unmapped: 23855104 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 178 heartbeat osd_stat(store_statfs(0x4fa7e5000/0x0/0x4ffc00000, data 0xd86bbd/0xe88000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 80969728 unmapped: 23855104 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 80969728 unmapped: 23855104 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1115266 data_alloc: 218103808 data_used: 413696
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 80969728 unmapped: 23855104 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 80969728 unmapped: 23855104 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 178 heartbeat osd_stat(store_statfs(0x4fa7e5000/0x0/0x4ffc00000, data 0xd86bbd/0xe88000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 80969728 unmapped: 23855104 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 80969728 unmapped: 23855104 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 80969728 unmapped: 23855104 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1115266 data_alloc: 218103808 data_used: 413696
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 80969728 unmapped: 23855104 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 80969728 unmapped: 23855104 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 178 heartbeat osd_stat(store_statfs(0x4fa7e5000/0x0/0x4ffc00000, data 0xd86bbd/0xe88000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 80969728 unmapped: 23855104 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 178 heartbeat osd_stat(store_statfs(0x4fa7e5000/0x0/0x4ffc00000, data 0xd86bbd/0xe88000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 80969728 unmapped: 23855104 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 80969728 unmapped: 23855104 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1115266 data_alloc: 218103808 data_used: 413696
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 80969728 unmapped: 23855104 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 178 heartbeat osd_stat(store_statfs(0x4fa7e5000/0x0/0x4ffc00000, data 0xd86bbd/0xe88000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 178 heartbeat osd_stat(store_statfs(0x4fa7e5000/0x0/0x4ffc00000, data 0xd86bbd/0xe88000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 80969728 unmapped: 23855104 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 80969728 unmapped: 23855104 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 80969728 unmapped: 23855104 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 80969728 unmapped: 23855104 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1115266 data_alloc: 218103808 data_used: 413696
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 80969728 unmapped: 23855104 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 80977920 unmapped: 23846912 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 178 heartbeat osd_stat(store_statfs(0x4fa7e5000/0x0/0x4ffc00000, data 0xd86bbd/0xe88000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 80977920 unmapped: 23846912 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 178 heartbeat osd_stat(store_statfs(0x4fa7e5000/0x0/0x4ffc00000, data 0xd86bbd/0xe88000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 80977920 unmapped: 23846912 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 107.036346436s of 107.046646118s, submitted: 13
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 80994304 unmapped: 23830528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 178 handle_osd_map epochs [179,179], i have 178, src has [1,179]
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 179 ms_handle_reset con 0x55b1b2762000 session 0x55b1b2707860
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1121695 data_alloc: 218103808 data_used: 421888
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81002496 unmapped: 23822336 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 179 ms_handle_reset con 0x55b1b2762800 session 0x55b1b163b2c0
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 179 ms_handle_reset con 0x55b1b1c81400 session 0x55b1af650d20
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 179 heartbeat osd_stat(store_statfs(0x4fa7e1000/0x0/0x4ffc00000, data 0xd8874a/0xe8c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: mgrc ms_handle_reset ms_handle_reset con 0x55b1b1628000
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/2102413293
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/2102413293,v1:192.168.122.100:6801/2102413293]
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: mgrc handle_mgr_configure stats_period=5
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81182720 unmapped: 23642112 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 179 handle_osd_map epochs [180,180], i have 179, src has [1,180]
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 180 ms_handle_reset con 0x55b1b0261800 session 0x55b1b1c07c20
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 180 ms_handle_reset con 0x55b1b1474000 session 0x55b1b1afcf00
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 180 ms_handle_reset con 0x55b1b273f800 session 0x55b1b1e0a1e0
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81149952 unmapped: 23674880 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 180 heartbeat osd_stat(store_statfs(0x4fa7de000/0x0/0x4ffc00000, data 0xd8a6c7/0xe90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81076224 unmapped: 23748608 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 180 handle_osd_map epochs [180,181], i have 180, src has [1,181]
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 181 ms_handle_reset con 0x55b1b1c9e000 session 0x55b1af1e2000
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81100800 unmapped: 23724032 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 181 handle_osd_map epochs [181,182], i have 181, src has [1,182]
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 182 ms_handle_reset con 0x55b1b1c9e400 session 0x55b1b030da40
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1132129 data_alloc: 218103808 data_used: 430080
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81133568 unmapped: 23691264 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81133568 unmapped: 23691264 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81133568 unmapped: 23691264 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 182 heartbeat osd_stat(store_statfs(0x4fa7d8000/0x0/0x4ffc00000, data 0xd8da59/0xe94000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81133568 unmapped: 23691264 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 182 heartbeat osd_stat(store_statfs(0x4fa7d8000/0x0/0x4ffc00000, data 0xd8da59/0xe94000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.600206375s of 10.002857208s, submitted: 112
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81158144 unmapped: 23666688 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1134112 data_alloc: 218103808 data_used: 430080
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 182 handle_osd_map epochs [183,183], i have 182, src has [1,183]
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 183 ms_handle_reset con 0x55b1b0261800 session 0x55b1b2706d20
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81182720 unmapped: 23642112 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 183 handle_osd_map epochs [184,184], i have 183, src has [1,184]
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 184 ms_handle_reset con 0x55b1b1474000 session 0x55b1b1c934a0
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81199104 unmapped: 23625728 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 184 handle_osd_map epochs [184,185], i have 184, src has [1,185]
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81199104 unmapped: 23625728 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81199104 unmapped: 23625728 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 185 heartbeat osd_stat(store_statfs(0x4fa7d0000/0x0/0x4ffc00000, data 0xd92c52/0xe9d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81199104 unmapped: 23625728 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1141604 data_alloc: 218103808 data_used: 438272
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 185 heartbeat osd_stat(store_statfs(0x4fa7d0000/0x0/0x4ffc00000, data 0xd92c52/0xe9d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 185 handle_osd_map epochs [186,186], i have 185, src has [1,186]
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1144738 data_alloc: 218103808 data_used: 442368
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1144738 data_alloc: 218103808 data_used: 442368
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1144738 data_alloc: 218103808 data_used: 442368
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1144738 data_alloc: 218103808 data_used: 442368
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1145058 data_alloc: 218103808 data_used: 450560
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1145058 data_alloc: 218103808 data_used: 450560
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1145058 data_alloc: 218103808 data_used: 450560
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1145058 data_alloc: 218103808 data_used: 450560
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1145058 data_alloc: 218103808 data_used: 450560
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1145058 data_alloc: 218103808 data_used: 450560
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1145058 data_alloc: 218103808 data_used: 450560
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1145058 data_alloc: 218103808 data_used: 450560
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1145058 data_alloc: 218103808 data_used: 450560
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1145058 data_alloc: 218103808 data_used: 450560
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1145058 data_alloc: 218103808 data_used: 450560
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1145058 data_alloc: 218103808 data_used: 450560
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1145058 data_alloc: 218103808 data_used: 450560
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1145058 data_alloc: 218103808 data_used: 450560
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1145058 data_alloc: 218103808 data_used: 450560
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1145058 data_alloc: 218103808 data_used: 450560
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1145058 data_alloc: 218103808 data_used: 450560
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1145058 data_alloc: 218103808 data_used: 450560
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1145058 data_alloc: 218103808 data_used: 450560
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1145058 data_alloc: 218103808 data_used: 450560
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1145058 data_alloc: 218103808 data_used: 450560
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1145058 data_alloc: 218103808 data_used: 450560
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1145058 data_alloc: 218103808 data_used: 450560
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1145058 data_alloc: 218103808 data_used: 450560
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1145058 data_alloc: 218103808 data_used: 450560
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1145058 data_alloc: 218103808 data_used: 450560
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1145058 data_alloc: 218103808 data_used: 450560
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1145058 data_alloc: 218103808 data_used: 450560
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1145058 data_alloc: 218103808 data_used: 450560
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1145058 data_alloc: 218103808 data_used: 450560
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1145058 data_alloc: 218103808 data_used: 450560
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1145058 data_alloc: 218103808 data_used: 450560
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1145058 data_alloc: 218103808 data_used: 450560
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1145058 data_alloc: 218103808 data_used: 450560
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1145058 data_alloc: 218103808 data_used: 450560
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 23609344 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 23609344 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1145058 data_alloc: 218103808 data_used: 450560
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 23609344 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 23609344 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 23609344 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 23609344 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 23609344 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1145058 data_alloc: 218103808 data_used: 450560
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 23609344 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 23609344 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 23609344 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 23609344 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 23609344 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1145058 data_alloc: 218103808 data_used: 450560
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 23609344 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 23609344 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 23609344 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 23609344 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 23609344 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1145058 data_alloc: 218103808 data_used: 450560
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 23609344 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 23609344 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 23609344 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 23609344 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 23609344 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1145058 data_alloc: 218103808 data_used: 450560
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81223680 unmapped: 23601152 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81223680 unmapped: 23601152 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81223680 unmapped: 23601152 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81223680 unmapped: 23601152 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81223680 unmapped: 23601152 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1145058 data_alloc: 218103808 data_used: 450560
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81223680 unmapped: 23601152 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81223680 unmapped: 23601152 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81223680 unmapped: 23601152 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81223680 unmapped: 23601152 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81223680 unmapped: 23601152 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1145058 data_alloc: 218103808 data_used: 450560
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81223680 unmapped: 23601152 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81223680 unmapped: 23601152 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81223680 unmapped: 23601152 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81223680 unmapped: 23601152 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81223680 unmapped: 23601152 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1145058 data_alloc: 218103808 data_used: 450560
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81223680 unmapped: 23601152 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81223680 unmapped: 23601152 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81223680 unmapped: 23601152 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81223680 unmapped: 23601152 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81223680 unmapped: 23601152 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1145058 data_alloc: 218103808 data_used: 450560
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81223680 unmapped: 23601152 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81223680 unmapped: 23601152 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81223680 unmapped: 23601152 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81223680 unmapped: 23601152 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81223680 unmapped: 23601152 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1145058 data_alloc: 218103808 data_used: 450560
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81223680 unmapped: 23601152 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81223680 unmapped: 23601152 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81223680 unmapped: 23601152 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81223680 unmapped: 23601152 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81223680 unmapped: 23601152 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1145058 data_alloc: 218103808 data_used: 450560
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81223680 unmapped: 23601152 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81223680 unmapped: 23601152 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81223680 unmapped: 23601152 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81223680 unmapped: 23601152 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81223680 unmapped: 23601152 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1145058 data_alloc: 218103808 data_used: 450560
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81223680 unmapped: 23601152 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81223680 unmapped: 23601152 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81223680 unmapped: 23601152 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81223680 unmapped: 23601152 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81223680 unmapped: 23601152 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1145058 data_alloc: 218103808 data_used: 450560
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81223680 unmapped: 23601152 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81223680 unmapped: 23601152 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81223680 unmapped: 23601152 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81223680 unmapped: 23601152 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81223680 unmapped: 23601152 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1145058 data_alloc: 218103808 data_used: 450560
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81223680 unmapped: 23601152 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81223680 unmapped: 23601152 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81223680 unmapped: 23601152 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81223680 unmapped: 23601152 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81223680 unmapped: 23601152 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1145058 data_alloc: 218103808 data_used: 450560
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81223680 unmapped: 23601152 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81223680 unmapped: 23601152 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81223680 unmapped: 23601152 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81223680 unmapped: 23601152 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81223680 unmapped: 23601152 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1145058 data_alloc: 218103808 data_used: 450560
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81223680 unmapped: 23601152 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81223680 unmapped: 23601152 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81223680 unmapped: 23601152 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81231872 unmapped: 23592960 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 3000.1 total, 600.0 interval
Cumulative writes: 8168 writes, 30K keys, 8168 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
Cumulative WAL: 8168 writes, 2028 syncs, 4.03 writes per sync, written: 0.02 GB, 0.01 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 1293 writes, 3208 keys, 1293 commit groups, 1.0 writes per commit group, ingest: 1.64 MB, 0.00 MB/s
Interval WAL: 1293 writes, 587 syncs, 2.20 writes per sync, written: 0.00 GB, 0.00 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81231872 unmapped: 23592960 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1145058 data_alloc: 218103808 data_used: 450560
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81231872 unmapped: 23592960 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81231872 unmapped: 23592960 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81231872 unmapped: 23592960 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81231872 unmapped: 23592960 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81231872 unmapped: 23592960 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1145058 data_alloc: 218103808 data_used: 450560
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81231872 unmapped: 23592960 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81231872 unmapped: 23592960 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81231872 unmapped: 23592960 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81231872 unmapped: 23592960 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81231872 unmapped: 23592960 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1145058 data_alloc: 218103808 data_used: 450560
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81231872 unmapped: 23592960 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81231872 unmapped: 23592960 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81231872 unmapped: 23592960 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81231872 unmapped: 23592960 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81231872 unmapped: 23592960 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1145058 data_alloc: 218103808 data_used: 450560
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81231872 unmapped: 23592960 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81231872 unmapped: 23592960 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81231872 unmapped: 23592960 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81231872 unmapped: 23592960 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81240064 unmapped: 23584768 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1145058 data_alloc: 218103808 data_used: 450560
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81240064 unmapped: 23584768 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81240064 unmapped: 23584768 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81240064 unmapped: 23584768 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81240064 unmapped: 23584768 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81240064 unmapped: 23584768 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1145058 data_alloc: 218103808 data_used: 450560
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81240064 unmapped: 23584768 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81240064 unmapped: 23584768 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81240064 unmapped: 23584768 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81240064 unmapped: 23584768 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81240064 unmapped: 23584768 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1145058 data_alloc: 218103808 data_used: 450560
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81240064 unmapped: 23584768 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81240064 unmapped: 23584768 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81240064 unmapped: 23584768 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81240064 unmapped: 23584768 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81240064 unmapped: 23584768 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1145058 data_alloc: 218103808 data_used: 450560
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81240064 unmapped: 23584768 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81240064 unmapped: 23584768 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81240064 unmapped: 23584768 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81248256 unmapped: 23576576 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81248256 unmapped: 23576576 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1145058 data_alloc: 218103808 data_used: 450560
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81248256 unmapped: 23576576 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81248256 unmapped: 23576576 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81248256 unmapped: 23576576 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81248256 unmapped: 23576576 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81248256 unmapped: 23576576 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1145058 data_alloc: 218103808 data_used: 450560
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81248256 unmapped: 23576576 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81248256 unmapped: 23576576 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81248256 unmapped: 23576576 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81248256 unmapped: 23576576 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81248256 unmapped: 23576576 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1145058 data_alloc: 218103808 data_used: 450560
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81248256 unmapped: 23576576 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81248256 unmapped: 23576576 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81248256 unmapped: 23576576 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81248256 unmapped: 23576576 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81248256 unmapped: 23576576 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1145058 data_alloc: 218103808 data_used: 450560
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81248256 unmapped: 23576576 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81248256 unmapped: 23576576 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81248256 unmapped: 23576576 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81248256 unmapped: 23576576 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 346.016723633s of 346.200988770s, submitted: 82
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81272832 unmapped: 23552000 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1144178 data_alloc: 218103808 data_used: 450560
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81321984 unmapped: 23502848 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81330176 unmapped: 23494656 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81330176 unmapped: 23494656 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81330176 unmapped: 23494656 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7ce000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81330176 unmapped: 23494656 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1144178 data_alloc: 218103808 data_used: 450560
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81330176 unmapped: 23494656 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7ce000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81330176 unmapped: 23494656 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81330176 unmapped: 23494656 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81330176 unmapped: 23494656 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7ce000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81330176 unmapped: 23494656 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1144178 data_alloc: 218103808 data_used: 450560
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81330176 unmapped: 23494656 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7ce000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81330176 unmapped: 23494656 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81330176 unmapped: 23494656 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7ce000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81330176 unmapped: 23494656 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81330176 unmapped: 23494656 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1144178 data_alloc: 218103808 data_used: 450560
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81330176 unmapped: 23494656 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81330176 unmapped: 23494656 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81330176 unmapped: 23494656 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7ce000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81330176 unmapped: 23494656 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81330176 unmapped: 23494656 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7ce000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1144178 data_alloc: 218103808 data_used: 450560
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81330176 unmapped: 23494656 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81330176 unmapped: 23494656 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81330176 unmapped: 23494656 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81330176 unmapped: 23494656 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7ce000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81330176 unmapped: 23494656 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1144178 data_alloc: 218103808 data_used: 450560
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81330176 unmapped: 23494656 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81330176 unmapped: 23494656 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81330176 unmapped: 23494656 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7ce000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81330176 unmapped: 23494656 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81330176 unmapped: 23494656 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1144178 data_alloc: 218103808 data_used: 450560
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81330176 unmapped: 23494656 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81330176 unmapped: 23494656 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81330176 unmapped: 23494656 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7ce000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81330176 unmapped: 23494656 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81330176 unmapped: 23494656 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1144178 data_alloc: 218103808 data_used: 450560
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81330176 unmapped: 23494656 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81330176 unmapped: 23494656 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81330176 unmapped: 23494656 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81330176 unmapped: 23494656 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7ce000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81330176 unmapped: 23494656 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7ce000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1144178 data_alloc: 218103808 data_used: 450560
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81330176 unmapped: 23494656 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81330176 unmapped: 23494656 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81330176 unmapped: 23494656 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81330176 unmapped: 23494656 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81330176 unmapped: 23494656 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7ce000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1144178 data_alloc: 218103808 data_used: 450560
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81330176 unmapped: 23494656 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81330176 unmapped: 23494656 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81330176 unmapped: 23494656 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81330176 unmapped: 23494656 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81330176 unmapped: 23494656 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1144178 data_alloc: 218103808 data_used: 450560
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81330176 unmapped: 23494656 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7ce000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81330176 unmapped: 23494656 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81330176 unmapped: 23494656 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81330176 unmapped: 23494656 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81330176 unmapped: 23494656 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7ce000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1144178 data_alloc: 218103808 data_used: 450560
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81330176 unmapped: 23494656 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81330176 unmapped: 23494656 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81330176 unmapped: 23494656 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81330176 unmapped: 23494656 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7ce000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81330176 unmapped: 23494656 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1144178 data_alloc: 218103808 data_used: 450560
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81330176 unmapped: 23494656 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81330176 unmapped: 23494656 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81330176 unmapped: 23494656 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81330176 unmapped: 23494656 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7ce000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81330176 unmapped: 23494656 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1144178 data_alloc: 218103808 data_used: 450560
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81330176 unmapped: 23494656 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81330176 unmapped: 23494656 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81330176 unmapped: 23494656 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81330176 unmapped: 23494656 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7ce000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81330176 unmapped: 23494656 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1144178 data_alloc: 218103808 data_used: 450560
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81330176 unmapped: 23494656 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81330176 unmapped: 23494656 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7ce000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7ce000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81330176 unmapped: 23494656 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81330176 unmapped: 23494656 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81330176 unmapped: 23494656 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1144178 data_alloc: 218103808 data_used: 450560
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7ce000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81330176 unmapped: 23494656 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81330176 unmapped: 23494656 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81330176 unmapped: 23494656 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7ce000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81330176 unmapped: 23494656 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81330176 unmapped: 23494656 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1144178 data_alloc: 218103808 data_used: 450560
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81330176 unmapped: 23494656 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81330176 unmapped: 23494656 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7ce000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7ce000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81330176 unmapped: 23494656 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7ce000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81330176 unmapped: 23494656 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81330176 unmapped: 23494656 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1144178 data_alloc: 218103808 data_used: 450560
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81330176 unmapped: 23494656 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81330176 unmapped: 23494656 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81330176 unmapped: 23494656 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7ce000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81330176 unmapped: 23494656 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81330176 unmapped: 23494656 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1144178 data_alloc: 218103808 data_used: 450560
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81330176 unmapped: 23494656 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81330176 unmapped: 23494656 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81330176 unmapped: 23494656 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7ce000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81330176 unmapped: 23494656 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81330176 unmapped: 23494656 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1144178 data_alloc: 218103808 data_used: 450560
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7ce000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81330176 unmapped: 23494656 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81330176 unmapped: 23494656 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81330176 unmapped: 23494656 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81330176 unmapped: 23494656 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81330176 unmapped: 23494656 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1144178 data_alloc: 218103808 data_used: 450560
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81330176 unmapped: 23494656 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7ce000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81330176 unmapped: 23494656 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7ce000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81330176 unmapped: 23494656 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81330176 unmapped: 23494656 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81330176 unmapped: 23494656 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1144178 data_alloc: 218103808 data_used: 450560
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81330176 unmapped: 23494656 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7ce000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81330176 unmapped: 23494656 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81330176 unmapped: 23494656 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81330176 unmapped: 23494656 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7ce000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81330176 unmapped: 23494656 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1144178 data_alloc: 218103808 data_used: 450560
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81330176 unmapped: 23494656 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81330176 unmapped: 23494656 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81330176 unmapped: 23494656 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81338368 unmapped: 23486464 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7ce000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81338368 unmapped: 23486464 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7ce000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1144178 data_alloc: 218103808 data_used: 450560
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81338368 unmapped: 23486464 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81338368 unmapped: 23486464 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81338368 unmapped: 23486464 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81338368 unmapped: 23486464 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81338368 unmapped: 23486464 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7ce000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1144178 data_alloc: 218103808 data_used: 450560
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81338368 unmapped: 23486464 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81338368 unmapped: 23486464 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7ce000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7ce000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81338368 unmapped: 23486464 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81346560 unmapped: 23478272 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81346560 unmapped: 23478272 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1144178 data_alloc: 218103808 data_used: 450560
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7ce000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81346560 unmapped: 23478272 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81346560 unmapped: 23478272 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7ce000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81346560 unmapped: 23478272 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81346560 unmapped: 23478272 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81346560 unmapped: 23478272 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1144178 data_alloc: 218103808 data_used: 450560
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81346560 unmapped: 23478272 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81346560 unmapped: 23478272 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7ce000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81346560 unmapped: 23478272 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81346560 unmapped: 23478272 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81346560 unmapped: 23478272 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1144178 data_alloc: 218103808 data_used: 450560
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81346560 unmapped: 23478272 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81346560 unmapped: 23478272 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81346560 unmapped: 23478272 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7ce000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81346560 unmapped: 23478272 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81346560 unmapped: 23478272 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1144178 data_alloc: 218103808 data_used: 450560
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81346560 unmapped: 23478272 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81346560 unmapped: 23478272 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7ce000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81346560 unmapped: 23478272 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7ce000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81346560 unmapped: 23478272 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7ce000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81346560 unmapped: 23478272 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1144178 data_alloc: 218103808 data_used: 450560
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81346560 unmapped: 23478272 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81346560 unmapped: 23478272 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81346560 unmapped: 23478272 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81346560 unmapped: 23478272 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7ce000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81346560 unmapped: 23478272 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1144178 data_alloc: 218103808 data_used: 450560
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7ce000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81346560 unmapped: 23478272 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81346560 unmapped: 23478272 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81346560 unmapped: 23478272 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7ce000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81346560 unmapped: 23478272 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81346560 unmapped: 23478272 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1144178 data_alloc: 218103808 data_used: 450560
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81346560 unmapped: 23478272 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7ce000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81346560 unmapped: 23478272 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81346560 unmapped: 23478272 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7ce000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81346560 unmapped: 23478272 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7ce000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81346560 unmapped: 23478272 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1144178 data_alloc: 218103808 data_used: 450560
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81346560 unmapped: 23478272 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81346560 unmapped: 23478272 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81346560 unmapped: 23478272 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81346560 unmapped: 23478272 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81346560 unmapped: 23478272 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7ce000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1144178 data_alloc: 218103808 data_used: 450560
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7ce000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81346560 unmapped: 23478272 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81346560 unmapped: 23478272 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7ce000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81346560 unmapped: 23478272 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81346560 unmapped: 23478272 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81346560 unmapped: 23478272 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1144178 data_alloc: 218103808 data_used: 450560
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81346560 unmapped: 23478272 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7ce000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81346560 unmapped: 23478272 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81346560 unmapped: 23478272 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81346560 unmapped: 23478272 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7ce000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81346560 unmapped: 23478272 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1144178 data_alloc: 218103808 data_used: 450560
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81346560 unmapped: 23478272 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81346560 unmapped: 23478272 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81346560 unmapped: 23478272 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7ce000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81346560 unmapped: 23478272 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81346560 unmapped: 23478272 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1144178 data_alloc: 218103808 data_used: 450560
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81346560 unmapped: 23478272 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81346560 unmapped: 23478272 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81346560 unmapped: 23478272 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7ce000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81346560 unmapped: 23478272 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7ce000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81346560 unmapped: 23478272 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7ce000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1144178 data_alloc: 218103808 data_used: 450560
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81346560 unmapped: 23478272 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81346560 unmapped: 23478272 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7ce000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81346560 unmapped: 23478272 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81346560 unmapped: 23478272 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81346560 unmapped: 23478272 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1144178 data_alloc: 218103808 data_used: 450560
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81346560 unmapped: 23478272 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7ce000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81346560 unmapped: 23478272 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7ce000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81346560 unmapped: 23478272 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81346560 unmapped: 23478272 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81346560 unmapped: 23478272 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1144178 data_alloc: 218103808 data_used: 450560
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81346560 unmapped: 23478272 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7ce000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81346560 unmapped: 23478272 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7ce000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81346560 unmapped: 23478272 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81346560 unmapped: 23478272 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81346560 unmapped: 23478272 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1144178 data_alloc: 218103808 data_used: 450560
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81346560 unmapped: 23478272 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81346560 unmapped: 23478272 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81346560 unmapped: 23478272 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7ce000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81346560 unmapped: 23478272 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7ce000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81346560 unmapped: 23478272 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1144178 data_alloc: 218103808 data_used: 450560
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7ce000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81346560 unmapped: 23478272 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81346560 unmapped: 23478272 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81346560 unmapped: 23478272 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81346560 unmapped: 23478272 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81346560 unmapped: 23478272 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1144178 data_alloc: 218103808 data_used: 450560
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81346560 unmapped: 23478272 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7ce000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81346560 unmapped: 23478272 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81346560 unmapped: 23478272 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81346560 unmapped: 23478272 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81346560 unmapped: 23478272 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1144178 data_alloc: 218103808 data_used: 450560
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81346560 unmapped: 23478272 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7ce000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81354752 unmapped: 23470080 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81354752 unmapped: 23470080 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81354752 unmapped: 23470080 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81354752 unmapped: 23470080 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1144178 data_alloc: 218103808 data_used: 450560
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81354752 unmapped: 23470080 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7ce000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81354752 unmapped: 23470080 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81354752 unmapped: 23470080 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81354752 unmapped: 23470080 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7ce000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81354752 unmapped: 23470080 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1144178 data_alloc: 218103808 data_used: 450560
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81354752 unmapped: 23470080 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81354752 unmapped: 23470080 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81354752 unmapped: 23470080 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81354752 unmapped: 23470080 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7ce000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7ce000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81354752 unmapped: 23470080 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1144178 data_alloc: 218103808 data_used: 450560
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81354752 unmapped: 23470080 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81354752 unmapped: 23470080 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7ce000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81354752 unmapped: 23470080 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81354752 unmapped: 23470080 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81354752 unmapped: 23470080 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7ce000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1144178 data_alloc: 218103808 data_used: 450560
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81354752 unmapped: 23470080 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81354752 unmapped: 23470080 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81354752 unmapped: 23470080 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81354752 unmapped: 23470080 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7ce000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81354752 unmapped: 23470080 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1144178 data_alloc: 218103808 data_used: 450560
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81354752 unmapped: 23470080 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81354752 unmapped: 23470080 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81354752 unmapped: 23470080 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7ce000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7ce000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81354752 unmapped: 23470080 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81354752 unmapped: 23470080 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1144178 data_alloc: 218103808 data_used: 450560
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81354752 unmapped: 23470080 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7ce000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81354752 unmapped: 23470080 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81354752 unmapped: 23470080 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7ce000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81354752 unmapped: 23470080 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81354752 unmapped: 23470080 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1144178 data_alloc: 218103808 data_used: 450560
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81354752 unmapped: 23470080 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7ce000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81354752 unmapped: 23470080 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81354752 unmapped: 23470080 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81354752 unmapped: 23470080 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7ce000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81354752 unmapped: 23470080 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1144178 data_alloc: 218103808 data_used: 450560
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81354752 unmapped: 23470080 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81354752 unmapped: 23470080 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81354752 unmapped: 23470080 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81354752 unmapped: 23470080 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7ce000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81354752 unmapped: 23470080 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1144178 data_alloc: 218103808 data_used: 450560
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81354752 unmapped: 23470080 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7ce000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81354752 unmapped: 23470080 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81354752 unmapped: 23470080 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81354752 unmapped: 23470080 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7ce000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81354752 unmapped: 23470080 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1144178 data_alloc: 218103808 data_used: 450560
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81354752 unmapped: 23470080 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81354752 unmapped: 23470080 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81354752 unmapped: 23470080 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81354752 unmapped: 23470080 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7ce000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81362944 unmapped: 23461888 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7ce000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1144178 data_alloc: 218103808 data_used: 450560
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81362944 unmapped: 23461888 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81362944 unmapped: 23461888 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7ce000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81362944 unmapped: 23461888 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81362944 unmapped: 23461888 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81362944 unmapped: 23461888 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1144178 data_alloc: 218103808 data_used: 450560
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81362944 unmapped: 23461888 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81362944 unmapped: 23461888 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7ce000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81362944 unmapped: 23461888 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81362944 unmapped: 23461888 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81362944 unmapped: 23461888 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1144178 data_alloc: 218103808 data_used: 450560
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 23453696 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 23453696 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7ce000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 23453696 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 23453696 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 23453696 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1144178 data_alloc: 218103808 data_used: 450560
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7ce000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 23453696 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 23453696 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 23453696 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7ce000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 23453696 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 23453696 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1144178 data_alloc: 218103808 data_used: 450560
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 23453696 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 23453696 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 23453696 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 23453696 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7ce000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 23453696 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1144178 data_alloc: 218103808 data_used: 450560
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 23453696 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 23453696 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 23453696 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 23453696 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 23453696 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1144178 data_alloc: 218103808 data_used: 450560
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7ce000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 23453696 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 23453696 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7ce000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 23453696 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 23453696 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 23453696 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1144178 data_alloc: 218103808 data_used: 450560
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 23453696 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7ce000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 23453696 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 23453696 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 23453696 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 23453696 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1144178 data_alloc: 218103808 data_used: 450560
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 23453696 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 23453696 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7ce000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 23453696 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7ce000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 23453696 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 23453696 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1144178 data_alloc: 218103808 data_used: 450560
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 23453696 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 23453696 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 23453696 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 23453696 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7ce000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 23453696 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1144178 data_alloc: 218103808 data_used: 450560
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7ce000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 23453696 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 23453696 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 23453696 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 23453696 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 23453696 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1144178 data_alloc: 218103808 data_used: 450560
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 23453696 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7ce000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 23453696 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 23453696 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 23453696 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 23453696 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1144178 data_alloc: 218103808 data_used: 450560
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 23453696 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 23453696 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7ce000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 23453696 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 23453696 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 23453696 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1144178 data_alloc: 218103808 data_used: 450560
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 23453696 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 23453696 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 23453696 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7ce000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 23453696 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 23453696 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1144178 data_alloc: 218103808 data_used: 450560
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 23453696 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 23453696 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 23453696 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7ce000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 23453696 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 23453696 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1144178 data_alloc: 218103808 data_used: 450560
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 23453696 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 23453696 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7ce000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 23453696 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 23453696 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7ce000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 23453696 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1144178 data_alloc: 218103808 data_used: 450560
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7ce000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 23453696 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 23453696 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 23453696 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7ce000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 23453696 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7ce000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 23453696 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1144178 data_alloc: 218103808 data_used: 450560
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 23453696 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 23453696 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 23453696 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 23453696 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 23453696 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1144178 data_alloc: 218103808 data_used: 450560
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7ce000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 23453696 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 23453696 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 23453696 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 23453696 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 23453696 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7ce000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1144178 data_alloc: 218103808 data_used: 450560
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 23453696 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 23453696 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 23453696 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7ce000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 23453696 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 23453696 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1144178 data_alloc: 218103808 data_used: 450560
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 23453696 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 23453696 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7ce000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 23453696 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 23453696 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7ce000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 23453696 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1144178 data_alloc: 218103808 data_used: 450560
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7ce000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81379328 unmapped: 23445504 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81379328 unmapped: 23445504 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81379328 unmapped: 23445504 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81379328 unmapped: 23445504 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7ce000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81379328 unmapped: 23445504 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1144178 data_alloc: 218103808 data_used: 450560
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81379328 unmapped: 23445504 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7ce000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81379328 unmapped: 23445504 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81379328 unmapped: 23445504 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81379328 unmapped: 23445504 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7ce000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81379328 unmapped: 23445504 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1144178 data_alloc: 218103808 data_used: 450560
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7ce000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81379328 unmapped: 23445504 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81379328 unmapped: 23445504 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81379328 unmapped: 23445504 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81379328 unmapped: 23445504 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81379328 unmapped: 23445504 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7ce000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1144178 data_alloc: 218103808 data_used: 450560
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81379328 unmapped: 23445504 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81379328 unmapped: 23445504 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81379328 unmapped: 23445504 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81379328 unmapped: 23445504 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7ce000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81379328 unmapped: 23445504 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1144178 data_alloc: 218103808 data_used: 450560
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7ce000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81379328 unmapped: 23445504 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7ce000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81379328 unmapped: 23445504 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81379328 unmapped: 23445504 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81379328 unmapped: 23445504 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7ce000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81379328 unmapped: 23445504 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1144178 data_alloc: 218103808 data_used: 450560
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81379328 unmapped: 23445504 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81379328 unmapped: 23445504 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81379328 unmapped: 23445504 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7ce000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81379328 unmapped: 23445504 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7ce000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81379328 unmapped: 23445504 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1144178 data_alloc: 218103808 data_used: 450560
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7ce000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81379328 unmapped: 23445504 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81379328 unmapped: 23445504 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81379328 unmapped: 23445504 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81379328 unmapped: 23445504 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81379328 unmapped: 23445504 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1144178 data_alloc: 218103808 data_used: 450560
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81379328 unmapped: 23445504 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7ce000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81379328 unmapped: 23445504 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81379328 unmapped: 23445504 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7ce000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81379328 unmapped: 23445504 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81379328 unmapped: 23445504 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1144178 data_alloc: 218103808 data_used: 450560
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7ce000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81379328 unmapped: 23445504 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81379328 unmapped: 23445504 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81379328 unmapped: 23445504 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7ce000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81379328 unmapped: 23445504 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81379328 unmapped: 23445504 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1144178 data_alloc: 218103808 data_used: 450560
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81379328 unmapped: 23445504 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81379328 unmapped: 23445504 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81379328 unmapped: 23445504 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81379328 unmapped: 23445504 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7ce000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81379328 unmapped: 23445504 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 427.182464600s of 427.830413818s, submitted: 90
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1144426 data_alloc: 218103808 data_used: 450560
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 186 handle_osd_map epochs [187,187], i have 186, src has [1,187]
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 187 heartbeat osd_stat(store_statfs(0x4fa7ce000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81395712 unmapped: 23429120 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 187 ms_handle_reset con 0x55b1b1c9e000 session 0x55b1af1adc20
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81395712 unmapped: 23429120 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81395712 unmapped: 23429120 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 187 heartbeat osd_stat(store_statfs(0x4fa7ca000/0x0/0x4ffc00000, data 0xd96263/0xea2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81395712 unmapped: 23429120 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81395712 unmapped: 23429120 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1147832 data_alloc: 218103808 data_used: 458752
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81395712 unmapped: 23429120 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81395712 unmapped: 23429120 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 187 heartbeat osd_stat(store_statfs(0x4fa7ca000/0x0/0x4ffc00000, data 0xd96263/0xea2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81395712 unmapped: 23429120 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81395712 unmapped: 23429120 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 187 handle_osd_map epochs [188,188], i have 187, src has [1,188]
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 23420928 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1150614 data_alloc: 218103808 data_used: 458752
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 188 heartbeat osd_stat(store_statfs(0x4fa7c8000/0x0/0x4ffc00000, data 0xd97cc6/0xea5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 23420928 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 23420928 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 23420928 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 23420928 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 188 heartbeat osd_stat(store_statfs(0x4fa7c8000/0x0/0x4ffc00000, data 0xd97cc6/0xea5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 23420928 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1150614 data_alloc: 218103808 data_used: 458752
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 23420928 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 23420928 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 188 heartbeat osd_stat(store_statfs(0x4fa7c8000/0x0/0x4ffc00000, data 0xd97cc6/0xea5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 23420928 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 23420928 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 23420928 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1150614 data_alloc: 218103808 data_used: 458752
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 23420928 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 23420928 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 23420928 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 188 heartbeat osd_stat(store_statfs(0x4fa7c8000/0x0/0x4ffc00000, data 0xd97cc6/0xea5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 23420928 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 23420928 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1150614 data_alloc: 218103808 data_used: 458752
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 23420928 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 23420928 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 188 heartbeat osd_stat(store_statfs(0x4fa7c8000/0x0/0x4ffc00000, data 0xd97cc6/0xea5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 23420928 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 23420928 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 188 heartbeat osd_stat(store_statfs(0x4fa7c8000/0x0/0x4ffc00000, data 0xd97cc6/0xea5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 23420928 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1150614 data_alloc: 218103808 data_used: 458752
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 23420928 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 23420928 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 188 heartbeat osd_stat(store_statfs(0x4fa7c8000/0x0/0x4ffc00000, data 0xd97cc6/0xea5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 23420928 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 23420928 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 188 heartbeat osd_stat(store_statfs(0x4fa7c8000/0x0/0x4ffc00000, data 0xd97cc6/0xea5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 23420928 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1150614 data_alloc: 218103808 data_used: 458752
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 23420928 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 23420928 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 23420928 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 23420928 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 23420928 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1150614 data_alloc: 218103808 data_used: 458752
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 188 heartbeat osd_stat(store_statfs(0x4fa7c8000/0x0/0x4ffc00000, data 0xd97cc6/0xea5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 23420928 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 23420928 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 188 heartbeat osd_stat(store_statfs(0x4fa7c8000/0x0/0x4ffc00000, data 0xd97cc6/0xea5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 23420928 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 188 heartbeat osd_stat(store_statfs(0x4fa7c8000/0x0/0x4ffc00000, data 0xd97cc6/0xea5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 23420928 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 23420928 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1150614 data_alloc: 218103808 data_used: 458752
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 188 heartbeat osd_stat(store_statfs(0x4fa7c8000/0x0/0x4ffc00000, data 0xd97cc6/0xea5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 23420928 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 23420928 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 23420928 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 23420928 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 188 heartbeat osd_stat(store_statfs(0x4fa7c8000/0x0/0x4ffc00000, data 0xd97cc6/0xea5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 23420928 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1150614 data_alloc: 218103808 data_used: 458752
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 23420928 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 188 heartbeat osd_stat(store_statfs(0x4fa7c8000/0x0/0x4ffc00000, data 0xd97cc6/0xea5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 23412736 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 23412736 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 23412736 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 23412736 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1150614 data_alloc: 218103808 data_used: 458752
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 23412736 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 188 heartbeat osd_stat(store_statfs(0x4fa7c8000/0x0/0x4ffc00000, data 0xd97cc6/0xea5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 23412736 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 23412736 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 23412736 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 23412736 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 188 heartbeat osd_stat(store_statfs(0x4fa7c8000/0x0/0x4ffc00000, data 0xd97cc6/0xea5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1150614 data_alloc: 218103808 data_used: 458752
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 23412736 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 23412736 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 188 heartbeat osd_stat(store_statfs(0x4fa7c8000/0x0/0x4ffc00000, data 0xd97cc6/0xea5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 23412736 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 23412736 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 23412736 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 188 heartbeat osd_stat(store_statfs(0x4fa7c8000/0x0/0x4ffc00000, data 0xd97cc6/0xea5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1150614 data_alloc: 218103808 data_used: 458752
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 23412736 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 23412736 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 23412736 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 23412736 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 23412736 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1150614 data_alloc: 218103808 data_used: 458752
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 188 heartbeat osd_stat(store_statfs(0x4fa7c8000/0x0/0x4ffc00000, data 0xd97cc6/0xea5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 23412736 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 23412736 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 23412736 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 23412736 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 23412736 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 188 heartbeat osd_stat(store_statfs(0x4fa7c8000/0x0/0x4ffc00000, data 0xd97cc6/0xea5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1150614 data_alloc: 218103808 data_used: 458752
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 188 heartbeat osd_stat(store_statfs(0x4fa7c8000/0x0/0x4ffc00000, data 0xd97cc6/0xea5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 23412736 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 188 heartbeat osd_stat(store_statfs(0x4fa7c8000/0x0/0x4ffc00000, data 0xd97cc6/0xea5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 23412736 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 23412736 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 23412736 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 188 heartbeat osd_stat(store_statfs(0x4fa7c8000/0x0/0x4ffc00000, data 0xd97cc6/0xea5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 23412736 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 188 heartbeat osd_stat(store_statfs(0x4fa7c8000/0x0/0x4ffc00000, data 0xd97cc6/0xea5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1150614 data_alloc: 218103808 data_used: 458752
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 23412736 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 23412736 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 188 heartbeat osd_stat(store_statfs(0x4fa7c8000/0x0/0x4ffc00000, data 0xd97cc6/0xea5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 23412736 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 23412736 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 188 heartbeat osd_stat(store_statfs(0x4fa7c8000/0x0/0x4ffc00000, data 0xd97cc6/0xea5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 23412736 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1150614 data_alloc: 218103808 data_used: 458752
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 23412736 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 23412736 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 188 heartbeat osd_stat(store_statfs(0x4fa7c8000/0x0/0x4ffc00000, data 0xd97cc6/0xea5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 23412736 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 23412736 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 23412736 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1150614 data_alloc: 218103808 data_used: 458752
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 23412736 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 23412736 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 188 heartbeat osd_stat(store_statfs(0x4fa7c8000/0x0/0x4ffc00000, data 0xd97cc6/0xea5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 188 heartbeat osd_stat(store_statfs(0x4fa7c8000/0x0/0x4ffc00000, data 0xd97cc6/0xea5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 23412736 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 23412736 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 23412736 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1150614 data_alloc: 218103808 data_used: 458752
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 23412736 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 23412736 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 23412736 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 188 heartbeat osd_stat(store_statfs(0x4fa7c8000/0x0/0x4ffc00000, data 0xd97cc6/0xea5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 23412736 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 23412736 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1150614 data_alloc: 218103808 data_used: 458752
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 23412736 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 23412736 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 23412736 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 23412736 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 188 heartbeat osd_stat(store_statfs(0x4fa7c8000/0x0/0x4ffc00000, data 0xd97cc6/0xea5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 23412736 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 188 heartbeat osd_stat(store_statfs(0x4fa7c8000/0x0/0x4ffc00000, data 0xd97cc6/0xea5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1150614 data_alloc: 218103808 data_used: 458752
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 23412736 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 188 heartbeat osd_stat(store_statfs(0x4fa7c8000/0x0/0x4ffc00000, data 0xd97cc6/0xea5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 23412736 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 23412736 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 23412736 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 23412736 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1150614 data_alloc: 218103808 data_used: 458752
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 188 heartbeat osd_stat(store_statfs(0x4fa7c8000/0x0/0x4ffc00000, data 0xd97cc6/0xea5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 23412736 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 3600.1 total, 600.0 interval#012Cumulative writes: 8417 writes, 30K keys, 8417 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s#012Cumulative WAL: 8417 writes, 2145 syncs, 3.92 writes per sync, written: 0.02 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 249 writes, 454 keys, 249 commit groups, 1.0 writes per commit group, ingest: 0.20 MB, 0.00 MB/s#012Interval WAL: 249 writes, 117 syncs, 2.13 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 111.430877686s of 112.040237427s, submitted: 42
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 188 ms_handle_reset con 0x55b1b273f800 session 0x55b1b1c7b0e0
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 23412736 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 188 ms_handle_reset con 0x55b1b1c9e800 session 0x55b1b1c07860
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 86073344 unmapped: 18751488 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 188 heartbeat osd_stat(store_statfs(0x4fa7c9000/0x0/0x4ffc00000, data 0xd97cc6/0xea5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 86073344 unmapped: 18751488 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 86073344 unmapped: 18751488 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1161254 data_alloc: 218103808 data_used: 5115904
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 86073344 unmapped: 18751488 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 188 heartbeat osd_stat(store_statfs(0x4fa7c9000/0x0/0x4ffc00000, data 0xd97cc6/0xea5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 86073344 unmapped: 18751488 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 188 heartbeat osd_stat(store_statfs(0x4fa7c9000/0x0/0x4ffc00000, data 0xd97cc6/0xea5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 86073344 unmapped: 18751488 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 86073344 unmapped: 18751488 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 86073344 unmapped: 18751488 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1161726 data_alloc: 218103808 data_used: 5115904
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 86073344 unmapped: 18751488 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 188 handle_osd_map epochs [189,189], i have 188, src has [1,189]
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 189 ms_handle_reset con 0x55b1b0261800 session 0x55b1afdb01e0
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 82632704 unmapped: 22192128 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 82632704 unmapped: 22192128 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 189 heartbeat osd_stat(store_statfs(0x4fac36000/0x0/0x4ffc00000, data 0x929864/0xa36000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 82632704 unmapped: 22192128 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 189 handle_osd_map epochs [189,190], i have 189, src has [1,190]
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.288143158s of 12.452685356s, submitted: 58
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 190 ms_handle_reset con 0x55b1b1474000 session 0x55b1b1ae21e0
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 82632704 unmapped: 22192128 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1071425 data_alloc: 218103808 data_used: 466944
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 82632704 unmapped: 22192128 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 82632704 unmapped: 22192128 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 82632704 unmapped: 22192128 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 190 heartbeat osd_stat(store_statfs(0x4fb434000/0x0/0x4ffc00000, data 0x12b435/0x239000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 190 handle_osd_map epochs [190,191], i have 190, src has [1,191]
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 82640896 unmapped: 22183936 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 82640896 unmapped: 22183936 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1074399 data_alloc: 218103808 data_used: 466944
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 82640896 unmapped: 22183936 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 191 heartbeat osd_stat(store_statfs(0x4fb431000/0x0/0x4ffc00000, data 0x12ceb4/0x23c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 82640896 unmapped: 22183936 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 82640896 unmapped: 22183936 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 191 handle_osd_map epochs [192,192], i have 191, src has [1,192]
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 82649088 unmapped: 22175744 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 82649088 unmapped: 22175744 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1077373 data_alloc: 218103808 data_used: 466944
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.165143013s of 11.253076553s, submitted: 44
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 192 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x12e917/0x23f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 83714048 unmapped: 21110784 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 192 handle_osd_map epochs [192,193], i have 192, src has [1,193]
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 193 ms_handle_reset con 0x55b1b1c9e000 session 0x55b1b122de00
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 83730432 unmapped: 21094400 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 83730432 unmapped: 21094400 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 83730432 unmapped: 21094400 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 83730432 unmapped: 21094400 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1085855 data_alloc: 218103808 data_used: 475136
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 193 heartbeat osd_stat(store_statfs(0x4fb429000/0x0/0x4ffc00000, data 0x1304c7/0x244000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 83730432 unmapped: 21094400 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 193 heartbeat osd_stat(store_statfs(0x4fb429000/0x0/0x4ffc00000, data 0x1304c7/0x244000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 83730432 unmapped: 21094400 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 193 heartbeat osd_stat(store_statfs(0x4fb429000/0x0/0x4ffc00000, data 0x1304c7/0x244000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 83730432 unmapped: 21094400 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 193 heartbeat osd_stat(store_statfs(0x4fb429000/0x0/0x4ffc00000, data 0x1304c7/0x244000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 83730432 unmapped: 21094400 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 83730432 unmapped: 21094400 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1085855 data_alloc: 218103808 data_used: 475136
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 193 heartbeat osd_stat(store_statfs(0x4fb429000/0x0/0x4ffc00000, data 0x1304c7/0x244000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 83730432 unmapped: 21094400 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 83730432 unmapped: 21094400 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 83730432 unmapped: 21094400 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 83730432 unmapped: 21094400 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 83730432 unmapped: 21094400 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1085855 data_alloc: 218103808 data_used: 475136
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 193 heartbeat osd_stat(store_statfs(0x4fb429000/0x0/0x4ffc00000, data 0x1304c7/0x244000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 83730432 unmapped: 21094400 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 83730432 unmapped: 21094400 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 83730432 unmapped: 21094400 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 193 heartbeat osd_stat(store_statfs(0x4fb429000/0x0/0x4ffc00000, data 0x1304c7/0x244000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 83730432 unmapped: 21094400 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 83730432 unmapped: 21094400 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1085855 data_alloc: 218103808 data_used: 475136
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 83730432 unmapped: 21094400 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 83730432 unmapped: 21094400 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 83730432 unmapped: 21094400 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 193 heartbeat osd_stat(store_statfs(0x4fb429000/0x0/0x4ffc00000, data 0x1304c7/0x244000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 83730432 unmapped: 21094400 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 83730432 unmapped: 21094400 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1085855 data_alloc: 218103808 data_used: 475136
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 83730432 unmapped: 21094400 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 83730432 unmapped: 21094400 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 83730432 unmapped: 21094400 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 83730432 unmapped: 21094400 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 193 heartbeat osd_stat(store_statfs(0x4fb429000/0x0/0x4ffc00000, data 0x1304c7/0x244000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 83730432 unmapped: 21094400 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1085855 data_alloc: 218103808 data_used: 475136
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 83730432 unmapped: 21094400 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 83730432 unmapped: 21094400 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 83730432 unmapped: 21094400 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 193 heartbeat osd_stat(store_statfs(0x4fb429000/0x0/0x4ffc00000, data 0x1304c7/0x244000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 83730432 unmapped: 21094400 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 83730432 unmapped: 21094400 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1085855 data_alloc: 218103808 data_used: 475136
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 35.367374420s of 35.459510803s, submitted: 20
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 83730432 unmapped: 21094400 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 83746816 unmapped: 21078016 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 83853312 unmapped: 20971520 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 193 heartbeat osd_stat(store_statfs(0x4fb42a000/0x0/0x4ffc00000, data 0x1304c7/0x244000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [0,0,1])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 83910656 unmapped: 20914176 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 83910656 unmapped: 20914176 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1085983 data_alloc: 218103808 data_used: 512000
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 83910656 unmapped: 20914176 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 193 heartbeat osd_stat(store_statfs(0x4fb42a000/0x0/0x4ffc00000, data 0x1304c7/0x244000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 83910656 unmapped: 20914176 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 83910656 unmapped: 20914176 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 83910656 unmapped: 20914176 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 83910656 unmapped: 20914176 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1085983 data_alloc: 218103808 data_used: 512000
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 83910656 unmapped: 20914176 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 83910656 unmapped: 20914176 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 193 heartbeat osd_stat(store_statfs(0x4fb42a000/0x0/0x4ffc00000, data 0x1304c7/0x244000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 83918848 unmapped: 20905984 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 83918848 unmapped: 20905984 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 83918848 unmapped: 20905984 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 193 heartbeat osd_stat(store_statfs(0x4fb42a000/0x0/0x4ffc00000, data 0x1304c7/0x244000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1085983 data_alloc: 218103808 data_used: 512000
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 83918848 unmapped: 20905984 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 83918848 unmapped: 20905984 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 83918848 unmapped: 20905984 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 83918848 unmapped: 20905984 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 193 heartbeat osd_stat(store_statfs(0x4fb42a000/0x0/0x4ffc00000, data 0x1304c7/0x244000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 83918848 unmapped: 20905984 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1085983 data_alloc: 218103808 data_used: 512000
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 83918848 unmapped: 20905984 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 83918848 unmapped: 20905984 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 193 heartbeat osd_stat(store_statfs(0x4fb42a000/0x0/0x4ffc00000, data 0x1304c7/0x244000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 83918848 unmapped: 20905984 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 83918848 unmapped: 20905984 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 193 heartbeat osd_stat(store_statfs(0x4fb42a000/0x0/0x4ffc00000, data 0x1304c7/0x244000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 83918848 unmapped: 20905984 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1085983 data_alloc: 218103808 data_used: 512000
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 193 heartbeat osd_stat(store_statfs(0x4fb42a000/0x0/0x4ffc00000, data 0x1304c7/0x244000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 83918848 unmapped: 20905984 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 83918848 unmapped: 20905984 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 83918848 unmapped: 20905984 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 193 heartbeat osd_stat(store_statfs(0x4fb42a000/0x0/0x4ffc00000, data 0x1304c7/0x244000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 83927040 unmapped: 20897792 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 83927040 unmapped: 20897792 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1085983 data_alloc: 218103808 data_used: 512000
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 83927040 unmapped: 20897792 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 83927040 unmapped: 20897792 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 83927040 unmapped: 20897792 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 193 heartbeat osd_stat(store_statfs(0x4fb42a000/0x0/0x4ffc00000, data 0x1304c7/0x244000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 83927040 unmapped: 20897792 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 193 heartbeat osd_stat(store_statfs(0x4fb42a000/0x0/0x4ffc00000, data 0x1304c7/0x244000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 83927040 unmapped: 20897792 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1085983 data_alloc: 218103808 data_used: 512000
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 193 heartbeat osd_stat(store_statfs(0x4fb42a000/0x0/0x4ffc00000, data 0x1304c7/0x244000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 83927040 unmapped: 20897792 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 83927040 unmapped: 20897792 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 83927040 unmapped: 20897792 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 83927040 unmapped: 20897792 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 83927040 unmapped: 20897792 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1085983 data_alloc: 218103808 data_used: 512000
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 83927040 unmapped: 20897792 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 193 heartbeat osd_stat(store_statfs(0x4fb42a000/0x0/0x4ffc00000, data 0x1304c7/0x244000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 83927040 unmapped: 20897792 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 83927040 unmapped: 20897792 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 193 heartbeat osd_stat(store_statfs(0x4fb42a000/0x0/0x4ffc00000, data 0x1304c7/0x244000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 83927040 unmapped: 20897792 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 193 heartbeat osd_stat(store_statfs(0x4fb42a000/0x0/0x4ffc00000, data 0x1304c7/0x244000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 83927040 unmapped: 20897792 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1085983 data_alloc: 218103808 data_used: 512000
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 83927040 unmapped: 20897792 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 83927040 unmapped: 20897792 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 83927040 unmapped: 20897792 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 83927040 unmapped: 20897792 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 83927040 unmapped: 20897792 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1085983 data_alloc: 218103808 data_used: 512000
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 193 heartbeat osd_stat(store_statfs(0x4fb42a000/0x0/0x4ffc00000, data 0x1304c7/0x244000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 83927040 unmapped: 20897792 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 83927040 unmapped: 20897792 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 193 heartbeat osd_stat(store_statfs(0x4fb42a000/0x0/0x4ffc00000, data 0x1304c7/0x244000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 83927040 unmapped: 20897792 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 83927040 unmapped: 20897792 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 193 heartbeat osd_stat(store_statfs(0x4fb42a000/0x0/0x4ffc00000, data 0x1304c7/0x244000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 83927040 unmapped: 20897792 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1085983 data_alloc: 218103808 data_used: 512000
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 83927040 unmapped: 20897792 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 83927040 unmapped: 20897792 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 83927040 unmapped: 20897792 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 193 heartbeat osd_stat(store_statfs(0x4fb42a000/0x0/0x4ffc00000, data 0x1304c7/0x244000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 83927040 unmapped: 20897792 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 193 heartbeat osd_stat(store_statfs(0x4fb42a000/0x0/0x4ffc00000, data 0x1304c7/0x244000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 83927040 unmapped: 20897792 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 193 heartbeat osd_stat(store_statfs(0x4fb42a000/0x0/0x4ffc00000, data 0x1304c7/0x244000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1085983 data_alloc: 218103808 data_used: 512000
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 193 heartbeat osd_stat(store_statfs(0x4fb42a000/0x0/0x4ffc00000, data 0x1304c7/0x244000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 83927040 unmapped: 20897792 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 83927040 unmapped: 20897792 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 83927040 unmapped: 20897792 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 83927040 unmapped: 20897792 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 83927040 unmapped: 20897792 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1085983 data_alloc: 218103808 data_used: 512000
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 83927040 unmapped: 20897792 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 193 heartbeat osd_stat(store_statfs(0x4fb42a000/0x0/0x4ffc00000, data 0x1304c7/0x244000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 83927040 unmapped: 20897792 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 83927040 unmapped: 20897792 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 67.924308777s of 68.302268982s, submitted: 108
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 83927040 unmapped: 20897792 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 193 heartbeat osd_stat(store_statfs(0x4fb42a000/0x0/0x4ffc00000, data 0x1304c7/0x244000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 83927040 unmapped: 20897792 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 193 handle_osd_map epochs [193,194], i have 193, src has [1,194]
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 194 ms_handle_reset con 0x55b1b273f800 session 0x55b1b2692d20
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1091971 data_alloc: 218103808 data_used: 520192
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 83927040 unmapped: 20897792 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 194 ms_handle_reset con 0x55b1b1c9ec00 session 0x55b1b26925a0
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 194 handle_osd_map epochs [195,195], i have 194, src has [1,195]
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 83943424 unmapped: 20881408 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 83943424 unmapped: 20881408 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 195 heartbeat osd_stat(store_statfs(0x4fb421000/0x0/0x4ffc00000, data 0x133bd1/0x24b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 83943424 unmapped: 20881408 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 83943424 unmapped: 20881408 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1095441 data_alloc: 218103808 data_used: 520192
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 83943424 unmapped: 20881408 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 83943424 unmapped: 20881408 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 195 heartbeat osd_stat(store_statfs(0x4fb421000/0x0/0x4ffc00000, data 0x133bd1/0x24b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 83943424 unmapped: 20881408 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 83943424 unmapped: 20881408 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 195 heartbeat osd_stat(store_statfs(0x4fb421000/0x0/0x4ffc00000, data 0x133bd1/0x24b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 83943424 unmapped: 20881408 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1095441 data_alloc: 218103808 data_used: 520192
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 83943424 unmapped: 20881408 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 83951616 unmapped: 20873216 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 83951616 unmapped: 20873216 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 83951616 unmapped: 20873216 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 83951616 unmapped: 20873216 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 195 heartbeat osd_stat(store_statfs(0x4fb421000/0x0/0x4ffc00000, data 0x133bd1/0x24b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1095441 data_alloc: 218103808 data_used: 520192
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 83951616 unmapped: 20873216 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 83951616 unmapped: 20873216 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 83951616 unmapped: 20873216 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 195 heartbeat osd_stat(store_statfs(0x4fb421000/0x0/0x4ffc00000, data 0x133bd1/0x24b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 83951616 unmapped: 20873216 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 83951616 unmapped: 20873216 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1095441 data_alloc: 218103808 data_used: 520192
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 83951616 unmapped: 20873216 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 83951616 unmapped: 20873216 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 195 heartbeat osd_stat(store_statfs(0x4fb421000/0x0/0x4ffc00000, data 0x133bd1/0x24b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 83951616 unmapped: 20873216 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 83951616 unmapped: 20873216 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 195 heartbeat osd_stat(store_statfs(0x4fb421000/0x0/0x4ffc00000, data 0x133bd1/0x24b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 83951616 unmapped: 20873216 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1095441 data_alloc: 218103808 data_used: 520192
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 83951616 unmapped: 20873216 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 27.234910965s of 27.315486908s, submitted: 7
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 195 handle_osd_map epochs [196,196], i have 195, src has [1,196]
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 196 heartbeat osd_stat(store_statfs(0x4fb421000/0x0/0x4ffc00000, data 0x133bd1/0x24b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [0,0,0,0,1])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84000768 unmapped: 20824064 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 196 ms_handle_reset con 0x55b1b0261800 session 0x55b1b26923c0
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84049920 unmapped: 20774912 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84049920 unmapped: 20774912 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 196 heartbeat osd_stat(store_statfs(0x4fb421000/0x0/0x4ffc00000, data 0x13576f/0x24c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84049920 unmapped: 20774912 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 196 heartbeat osd_stat(store_statfs(0x4fb421000/0x0/0x4ffc00000, data 0x13576f/0x24c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1095342 data_alloc: 218103808 data_used: 528384
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84049920 unmapped: 20774912 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84049920 unmapped: 20774912 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84049920 unmapped: 20774912 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 196 heartbeat osd_stat(store_statfs(0x4fb421000/0x0/0x4ffc00000, data 0x13576f/0x24c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 196 handle_osd_map epochs [196,197], i have 196, src has [1,197]
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84058112 unmapped: 20766720 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84058112 unmapped: 20766720 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099340 data_alloc: 218103808 data_used: 536576
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84058112 unmapped: 20766720 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84058112 unmapped: 20766720 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 197 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x1371d2/0x24f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84058112 unmapped: 20766720 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 197 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x1371d2/0x24f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84058112 unmapped: 20766720 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84058112 unmapped: 20766720 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 197 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x1371d2/0x24f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099340 data_alloc: 218103808 data_used: 536576
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84058112 unmapped: 20766720 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84058112 unmapped: 20766720 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84058112 unmapped: 20766720 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84058112 unmapped: 20766720 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84058112 unmapped: 20766720 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 197 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x1371d2/0x24f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099340 data_alloc: 218103808 data_used: 536576
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84058112 unmapped: 20766720 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84058112 unmapped: 20766720 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84058112 unmapped: 20766720 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 197 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x1371d2/0x24f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84058112 unmapped: 20766720 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84058112 unmapped: 20766720 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099340 data_alloc: 218103808 data_used: 536576
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84058112 unmapped: 20766720 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84058112 unmapped: 20766720 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 197 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x1371d2/0x24f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84058112 unmapped: 20766720 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84058112 unmapped: 20766720 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84058112 unmapped: 20766720 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099340 data_alloc: 218103808 data_used: 536576
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84058112 unmapped: 20766720 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84058112 unmapped: 20766720 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84058112 unmapped: 20766720 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 197 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x1371d2/0x24f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84058112 unmapped: 20766720 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84058112 unmapped: 20766720 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099340 data_alloc: 218103808 data_used: 536576
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84058112 unmapped: 20766720 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84058112 unmapped: 20766720 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 197 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x1371d2/0x24f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84058112 unmapped: 20766720 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 197 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x1371d2/0x24f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84058112 unmapped: 20766720 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84058112 unmapped: 20766720 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099340 data_alloc: 218103808 data_used: 536576
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84058112 unmapped: 20766720 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84058112 unmapped: 20766720 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84058112 unmapped: 20766720 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 197 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x1371d2/0x24f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 197 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x1371d2/0x24f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84058112 unmapped: 20766720 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84058112 unmapped: 20766720 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099340 data_alloc: 218103808 data_used: 536576
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84058112 unmapped: 20766720 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 197 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x1371d2/0x24f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84058112 unmapped: 20766720 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84058112 unmapped: 20766720 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 197 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x1371d2/0x24f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84058112 unmapped: 20766720 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84058112 unmapped: 20766720 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099340 data_alloc: 218103808 data_used: 536576
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84058112 unmapped: 20766720 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84058112 unmapped: 20766720 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 197 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x1371d2/0x24f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84058112 unmapped: 20766720 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84058112 unmapped: 20766720 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84058112 unmapped: 20766720 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099340 data_alloc: 218103808 data_used: 536576
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84058112 unmapped: 20766720 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84058112 unmapped: 20766720 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 197 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x1371d2/0x24f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84058112 unmapped: 20766720 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84058112 unmapped: 20766720 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84058112 unmapped: 20766720 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 197 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x1371d2/0x24f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099340 data_alloc: 218103808 data_used: 536576
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84058112 unmapped: 20766720 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 197 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x1371d2/0x24f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84058112 unmapped: 20766720 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84058112 unmapped: 20766720 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84058112 unmapped: 20766720 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84058112 unmapped: 20766720 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099340 data_alloc: 218103808 data_used: 536576
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84058112 unmapped: 20766720 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 197 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x1371d2/0x24f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84058112 unmapped: 20766720 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84058112 unmapped: 20766720 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 197 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x1371d2/0x24f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84058112 unmapped: 20766720 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84058112 unmapped: 20766720 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099340 data_alloc: 218103808 data_used: 536576
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 197 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x1371d2/0x24f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84058112 unmapped: 20766720 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84058112 unmapped: 20766720 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84058112 unmapped: 20766720 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84058112 unmapped: 20766720 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84058112 unmapped: 20766720 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099340 data_alloc: 218103808 data_used: 536576
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 197 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x1371d2/0x24f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84058112 unmapped: 20766720 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84058112 unmapped: 20766720 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84058112 unmapped: 20766720 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84058112 unmapped: 20766720 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84058112 unmapped: 20766720 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099340 data_alloc: 218103808 data_used: 536576
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84058112 unmapped: 20766720 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 197 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x1371d2/0x24f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84058112 unmapped: 20766720 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 197 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x1371d2/0x24f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84058112 unmapped: 20766720 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84058112 unmapped: 20766720 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84058112 unmapped: 20766720 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099340 data_alloc: 218103808 data_used: 536576
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84058112 unmapped: 20766720 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 197 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x1371d2/0x24f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099340 data_alloc: 218103808 data_used: 536576
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 197 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x1371d2/0x24f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099340 data_alloc: 218103808 data_used: 536576
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 197 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x1371d2/0x24f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099340 data_alloc: 218103808 data_used: 536576
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 197 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x1371d2/0x24f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 197 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x1371d2/0x24f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099340 data_alloc: 218103808 data_used: 536576
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 197 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x1371d2/0x24f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099340 data_alloc: 218103808 data_used: 536576
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 197 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x1371d2/0x24f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 197 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x1371d2/0x24f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099340 data_alloc: 218103808 data_used: 536576
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 197 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x1371d2/0x24f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 197 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x1371d2/0x24f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099340 data_alloc: 218103808 data_used: 536576
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 197 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x1371d2/0x24f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 197 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x1371d2/0x24f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099340 data_alloc: 218103808 data_used: 536576
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 197 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x1371d2/0x24f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099340 data_alloc: 218103808 data_used: 536576
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 197 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x1371d2/0x24f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 197 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x1371d2/0x24f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099340 data_alloc: 218103808 data_used: 536576
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 197 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x1371d2/0x24f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 197 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x1371d2/0x24f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099340 data_alloc: 218103808 data_used: 536576
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 197 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x1371d2/0x24f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 197 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x1371d2/0x24f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099340 data_alloc: 218103808 data_used: 536576
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 197 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x1371d2/0x24f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099340 data_alloc: 218103808 data_used: 536576
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 197 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x1371d2/0x24f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 197 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x1371d2/0x24f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099340 data_alloc: 218103808 data_used: 536576
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 197 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x1371d2/0x24f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099340 data_alloc: 218103808 data_used: 536576
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 197 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x1371d2/0x24f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099340 data_alloc: 218103808 data_used: 536576
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 197 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x1371d2/0x24f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099340 data_alloc: 218103808 data_used: 536576
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 197 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x1371d2/0x24f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 197 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x1371d2/0x24f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 197 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x1371d2/0x24f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099340 data_alloc: 218103808 data_used: 536576
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099340 data_alloc: 218103808 data_used: 536576
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 197 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x1371d2/0x24f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099340 data_alloc: 218103808 data_used: 536576
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 197 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x1371d2/0x24f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099340 data_alloc: 218103808 data_used: 536576
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 197 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x1371d2/0x24f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 197 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x1371d2/0x24f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 197 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x1371d2/0x24f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099340 data_alloc: 218103808 data_used: 536576
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 197 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x1371d2/0x24f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099340 data_alloc: 218103808 data_used: 536576
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 197 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x1371d2/0x24f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 197 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x1371d2/0x24f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 197 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x1371d2/0x24f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 197 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x1371d2/0x24f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 197 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x1371d2/0x24f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 197 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x1371d2/0x24f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099340 data_alloc: 218103808 data_used: 536576
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 197 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x1371d2/0x24f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 197 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x1371d2/0x24f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099340 data_alloc: 218103808 data_used: 536576
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 197 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x1371d2/0x24f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099340 data_alloc: 218103808 data_used: 536576
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 197 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x1371d2/0x24f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 197 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x1371d2/0x24f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 197 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x1371d2/0x24f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099340 data_alloc: 218103808 data_used: 536576
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 197 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x1371d2/0x24f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099340 data_alloc: 218103808 data_used: 536576
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 197 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x1371d2/0x24f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 197 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x1371d2/0x24f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099340 data_alloc: 218103808 data_used: 536576
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 197 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x1371d2/0x24f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099340 data_alloc: 218103808 data_used: 536576
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 197 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x1371d2/0x24f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 197 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x1371d2/0x24f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099340 data_alloc: 218103808 data_used: 536576
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 197 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x1371d2/0x24f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099340 data_alloc: 218103808 data_used: 536576
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 197 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x1371d2/0x24f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099340 data_alloc: 218103808 data_used: 536576
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 197 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x1371d2/0x24f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 197 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x1371d2/0x24f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 197 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x1371d2/0x24f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099340 data_alloc: 218103808 data_used: 536576
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 197 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x1371d2/0x24f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099340 data_alloc: 218103808 data_used: 536576
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 197 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x1371d2/0x24f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099340 data_alloc: 218103808 data_used: 536576
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 197 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x1371d2/0x24f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099340 data_alloc: 218103808 data_used: 536576
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 197 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x1371d2/0x24f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 197 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x1371d2/0x24f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099340 data_alloc: 218103808 data_used: 536576
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 197 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x1371d2/0x24f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099340 data_alloc: 218103808 data_used: 536576
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 197 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x1371d2/0x24f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 197 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x1371d2/0x24f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099340 data_alloc: 218103808 data_used: 536576
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 197 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x1371d2/0x24f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 197 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x1371d2/0x24f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099340 data_alloc: 218103808 data_used: 536576
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 197 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x1371d2/0x24f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099340 data_alloc: 218103808 data_used: 536576
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 197 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x1371d2/0x24f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 197 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x1371d2/0x24f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099340 data_alloc: 218103808 data_used: 536576
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 197 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x1371d2/0x24f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099340 data_alloc: 218103808 data_used: 536576
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 197 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x1371d2/0x24f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 197 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x1371d2/0x24f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099340 data_alloc: 218103808 data_used: 536576
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 197 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x1371d2/0x24f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 197 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x1371d2/0x24f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099340 data_alloc: 218103808 data_used: 536576
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 197 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x1371d2/0x24f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 197 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x1371d2/0x24f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099340 data_alloc: 218103808 data_used: 536576
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 197 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x1371d2/0x24f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099340 data_alloc: 218103808 data_used: 536576
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 197 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x1371d2/0x24f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 197 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x1371d2/0x24f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099340 data_alloc: 218103808 data_used: 536576
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 197 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x1371d2/0x24f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 197 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x1371d2/0x24f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 197 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x1371d2/0x24f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 197 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x1371d2/0x24f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099340 data_alloc: 218103808 data_used: 536576
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 197 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x1371d2/0x24f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099340 data_alloc: 218103808 data_used: 536576
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 197 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x1371d2/0x24f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099340 data_alloc: 218103808 data_used: 536576
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 197 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x1371d2/0x24f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099340 data_alloc: 218103808 data_used: 536576
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 197 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x1371d2/0x24f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099340 data_alloc: 218103808 data_used: 536576
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 197 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x1371d2/0x24f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099340 data_alloc: 218103808 data_used: 536576
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 197 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x1371d2/0x24f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 197 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x1371d2/0x24f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099340 data_alloc: 218103808 data_used: 536576
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 197 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x1371d2/0x24f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 197 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x1371d2/0x24f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099340 data_alloc: 218103808 data_used: 536576
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 197 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x1371d2/0x24f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 197 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x1371d2/0x24f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099340 data_alloc: 218103808 data_used: 536576
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata"} v 0) v1
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 197 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x1371d2/0x24f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099340 data_alloc: 218103808 data_used: 536576
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 197 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x1371d2/0x24f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1302708969' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099340 data_alloc: 218103808 data_used: 536576
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 197 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x1371d2/0x24f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 197 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x1371d2/0x24f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099340 data_alloc: 218103808 data_used: 536576
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 197 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x1371d2/0x24f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099340 data_alloc: 218103808 data_used: 536576
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 197 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x1371d2/0x24f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 197 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x1371d2/0x24f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099340 data_alloc: 218103808 data_used: 536576
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 197 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x1371d2/0x24f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099340 data_alloc: 218103808 data_used: 536576
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 197 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x1371d2/0x24f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 197 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x1371d2/0x24f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099340 data_alloc: 218103808 data_used: 536576
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099340 data_alloc: 218103808 data_used: 536576
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 197 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x1371d2/0x24f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 197 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x1371d2/0x24f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 197 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x1371d2/0x24f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099340 data_alloc: 218103808 data_used: 536576
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 197 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x1371d2/0x24f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 197 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x1371d2/0x24f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099340 data_alloc: 218103808 data_used: 536576
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 197 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x1371d2/0x24f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099340 data_alloc: 218103808 data_used: 536576
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 197 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x1371d2/0x24f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099340 data_alloc: 218103808 data_used: 536576
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 197 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x1371d2/0x24f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 197 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x1371d2/0x24f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099340 data_alloc: 218103808 data_used: 536576
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 197 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x1371d2/0x24f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 197 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x1371d2/0x24f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 197 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x1371d2/0x24f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 4200.1 total, 600.0 interval#012Cumulative writes: 8971 writes, 31K keys, 8971 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s#012Cumulative WAL: 8971 writes, 2398 syncs, 3.74 writes per sync, written: 0.02 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 554 writes, 1220 keys, 554 commit groups, 1.0 writes per commit group, ingest: 0.58 MB, 0.00 MB/s#012Interval WAL: 554 writes, 253 syncs, 2.19 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099340 data_alloc: 218103808 data_used: 536576
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 197 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x1371d2/0x24f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 197 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x1371d2/0x24f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099340 data_alloc: 218103808 data_used: 536576
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099340 data_alloc: 218103808 data_used: 536576
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 197 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x1371d2/0x24f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 197 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x1371d2/0x24f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 197 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x1371d2/0x24f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 197 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x1371d2/0x24f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099340 data_alloc: 218103808 data_used: 536576
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 197 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x1371d2/0x24f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099340 data_alloc: 218103808 data_used: 536576
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 197 handle_osd_map epochs [198,198], i have 197, src has [1,198]
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 465.014099121s of 466.305084229s, submitted: 64
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 198 ms_handle_reset con 0x55b1b1474000 session 0x55b1b1650d20
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84082688 unmapped: 20742144 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84082688 unmapped: 20742144 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 198 heartbeat osd_stat(store_statfs(0x4fb41c000/0x0/0x4ffc00000, data 0x138d93/0x251000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84082688 unmapped: 20742144 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 198 handle_osd_map epochs [198,199], i have 198, src has [1,199]
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1104575 data_alloc: 218103808 data_used: 536576
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 199 ms_handle_reset con 0x55b1b1c9e000 session 0x55b1af1ef0e0
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 199 heartbeat osd_stat(store_statfs(0x4fb419000/0x0/0x4ffc00000, data 0x13a964/0x254000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1103695 data_alloc: 218103808 data_used: 536576
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 199 heartbeat osd_stat(store_statfs(0x4fb41a000/0x0/0x4ffc00000, data 0x13a964/0x254000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 199 handle_osd_map epochs [200,200], i have 199, src has [1,200]
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.181988716s of 11.751212120s, submitted: 34
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84148224 unmapped: 20676608 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 200 handle_osd_map epochs [201,201], i have 200, src has [1,201]
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 201 ms_handle_reset con 0x55b1b273f800 session 0x55b1b1c503c0
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84156416 unmapped: 20668416 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84156416 unmapped: 20668416 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 201 heartbeat osd_stat(store_statfs(0x4fafa0000/0x0/0x4ffc00000, data 0x5adf8a/0x6cc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1147713 data_alloc: 218103808 data_used: 544768
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84156416 unmapped: 20668416 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84156416 unmapped: 20668416 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84156416 unmapped: 20668416 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 201 heartbeat osd_stat(store_statfs(0x4fafa0000/0x0/0x4ffc00000, data 0x5adf8a/0x6cc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84156416 unmapped: 20668416 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84156416 unmapped: 20668416 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1147713 data_alloc: 218103808 data_used: 544768
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84156416 unmapped: 20668416 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84156416 unmapped: 20668416 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84156416 unmapped: 20668416 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 201 heartbeat osd_stat(store_statfs(0x4fafa0000/0x0/0x4ffc00000, data 0x5adf8a/0x6cc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84156416 unmapped: 20668416 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 201 heartbeat osd_stat(store_statfs(0x4fafa0000/0x0/0x4ffc00000, data 0x5adf8a/0x6cc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84156416 unmapped: 20668416 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1147713 data_alloc: 218103808 data_used: 544768
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 201 heartbeat osd_stat(store_statfs(0x4fafa0000/0x0/0x4ffc00000, data 0x5adf8a/0x6cc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84156416 unmapped: 20668416 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84156416 unmapped: 20668416 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84156416 unmapped: 20668416 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 201 heartbeat osd_stat(store_statfs(0x4fafa0000/0x0/0x4ffc00000, data 0x5adf8a/0x6cc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84156416 unmapped: 20668416 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 201 heartbeat osd_stat(store_statfs(0x4fafa0000/0x0/0x4ffc00000, data 0x5adf8a/0x6cc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84156416 unmapped: 20668416 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1147713 data_alloc: 218103808 data_used: 544768
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84156416 unmapped: 20668416 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84156416 unmapped: 20668416 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84156416 unmapped: 20668416 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84156416 unmapped: 20668416 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 201 heartbeat osd_stat(store_statfs(0x4fafa0000/0x0/0x4ffc00000, data 0x5adf8a/0x6cc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 201 heartbeat osd_stat(store_statfs(0x4fafa0000/0x0/0x4ffc00000, data 0x5adf8a/0x6cc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84156416 unmapped: 20668416 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1147713 data_alloc: 218103808 data_used: 544768
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84156416 unmapped: 20668416 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84156416 unmapped: 20668416 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84156416 unmapped: 20668416 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84156416 unmapped: 20668416 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 201 heartbeat osd_stat(store_statfs(0x4fafa0000/0x0/0x4ffc00000, data 0x5adf8a/0x6cc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 27.502782822s of 27.594612122s, submitted: 33
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84180992 unmapped: 20643840 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1145697 data_alloc: 218103808 data_used: 544768
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84213760 unmapped: 20611072 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 201 heartbeat osd_stat(store_statfs(0x4fafa2000/0x0/0x4ffc00000, data 0x5adf8a/0x6cc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84279296 unmapped: 20545536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84312064 unmapped: 20512768 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84336640 unmapped: 20488192 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 201 heartbeat osd_stat(store_statfs(0x4fafa2000/0x0/0x4ffc00000, data 0x5adf8a/0x6cc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84344832 unmapped: 20480000 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1145697 data_alloc: 218103808 data_used: 544768
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84344832 unmapped: 20480000 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84344832 unmapped: 20480000 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84344832 unmapped: 20480000 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84344832 unmapped: 20480000 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 201 heartbeat osd_stat(store_statfs(0x4fafa2000/0x0/0x4ffc00000, data 0x5adf8a/0x6cc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84344832 unmapped: 20480000 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1145697 data_alloc: 218103808 data_used: 544768
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 201 heartbeat osd_stat(store_statfs(0x4fafa2000/0x0/0x4ffc00000, data 0x5adf8a/0x6cc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84344832 unmapped: 20480000 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84344832 unmapped: 20480000 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84344832 unmapped: 20480000 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84344832 unmapped: 20480000 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 201 heartbeat osd_stat(store_statfs(0x4fafa2000/0x0/0x4ffc00000, data 0x5adf8a/0x6cc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84344832 unmapped: 20480000 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1145697 data_alloc: 218103808 data_used: 544768
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84344832 unmapped: 20480000 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84344832 unmapped: 20480000 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84344832 unmapped: 20480000 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84344832 unmapped: 20480000 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 201 heartbeat osd_stat(store_statfs(0x4fafa2000/0x0/0x4ffc00000, data 0x5adf8a/0x6cc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84344832 unmapped: 20480000 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1145697 data_alloc: 218103808 data_used: 544768
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84344832 unmapped: 20480000 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84344832 unmapped: 20480000 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 22.144542694s of 23.247913361s, submitted: 90
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84353024 unmapped: 20471808 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 201 handle_osd_map epochs [202,202], i have 201, src has [1,202]
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 202 ms_handle_reset con 0x55b1b1c9f000 session 0x55b1b26a5e00
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84393984 unmapped: 20430848 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 202 heartbeat osd_stat(store_statfs(0x4fb40e000/0x0/0x4ffc00000, data 0x13fb15/0x25d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 202 heartbeat osd_stat(store_statfs(0x4fb40e000/0x0/0x4ffc00000, data 0x13fb15/0x25d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84393984 unmapped: 20430848 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1118181 data_alloc: 218103808 data_used: 552960
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84393984 unmapped: 20430848 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84393984 unmapped: 20430848 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84393984 unmapped: 20430848 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84393984 unmapped: 20430848 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84393984 unmapped: 20430848 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1118181 data_alloc: 218103808 data_used: 552960
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 202 heartbeat osd_stat(store_statfs(0x4fb40e000/0x0/0x4ffc00000, data 0x13fb15/0x25d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84393984 unmapped: 20430848 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 202 handle_osd_map epochs [202,203], i have 202, src has [1,203]
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84353024 unmapped: 20471808 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84353024 unmapped: 20471808 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84353024 unmapped: 20471808 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 203 heartbeat osd_stat(store_statfs(0x4fb40d000/0x0/0x4ffc00000, data 0x141578/0x260000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84353024 unmapped: 20471808 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1120963 data_alloc: 218103808 data_used: 552960
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84353024 unmapped: 20471808 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84353024 unmapped: 20471808 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 203 heartbeat osd_stat(store_statfs(0x4fb40d000/0x0/0x4ffc00000, data 0x141578/0x260000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84353024 unmapped: 20471808 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84353024 unmapped: 20471808 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84353024 unmapped: 20471808 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1120963 data_alloc: 218103808 data_used: 552960
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84353024 unmapped: 20471808 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84353024 unmapped: 20471808 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84353024 unmapped: 20471808 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 203 heartbeat osd_stat(store_statfs(0x4fb40d000/0x0/0x4ffc00000, data 0x141578/0x260000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84353024 unmapped: 20471808 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84353024 unmapped: 20471808 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1121123 data_alloc: 218103808 data_used: 557056
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84353024 unmapped: 20471808 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84353024 unmapped: 20471808 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84353024 unmapped: 20471808 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 203 heartbeat osd_stat(store_statfs(0x4fb40d000/0x0/0x4ffc00000, data 0x141578/0x260000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84353024 unmapped: 20471808 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84353024 unmapped: 20471808 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1121123 data_alloc: 218103808 data_used: 557056
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84353024 unmapped: 20471808 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84353024 unmapped: 20471808 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84353024 unmapped: 20471808 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84353024 unmapped: 20471808 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 203 heartbeat osd_stat(store_statfs(0x4fb40d000/0x0/0x4ffc00000, data 0x141578/0x260000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 203 heartbeat osd_stat(store_statfs(0x4fb40d000/0x0/0x4ffc00000, data 0x141578/0x260000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84353024 unmapped: 20471808 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1121123 data_alloc: 218103808 data_used: 557056
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84353024 unmapped: 20471808 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84353024 unmapped: 20471808 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84353024 unmapped: 20471808 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84353024 unmapped: 20471808 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84353024 unmapped: 20471808 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1121123 data_alloc: 218103808 data_used: 557056
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 203 heartbeat osd_stat(store_statfs(0x4fb40d000/0x0/0x4ffc00000, data 0x141578/0x260000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84353024 unmapped: 20471808 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84353024 unmapped: 20471808 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 203 heartbeat osd_stat(store_statfs(0x4fb40d000/0x0/0x4ffc00000, data 0x141578/0x260000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84353024 unmapped: 20471808 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84353024 unmapped: 20471808 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84353024 unmapped: 20471808 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1121123 data_alloc: 218103808 data_used: 557056
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 203 heartbeat osd_stat(store_statfs(0x4fb40d000/0x0/0x4ffc00000, data 0x141578/0x260000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84353024 unmapped: 20471808 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 203 heartbeat osd_stat(store_statfs(0x4fb40d000/0x0/0x4ffc00000, data 0x141578/0x260000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84353024 unmapped: 20471808 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84353024 unmapped: 20471808 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84353024 unmapped: 20471808 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84353024 unmapped: 20471808 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1121123 data_alloc: 218103808 data_used: 557056
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84353024 unmapped: 20471808 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 203 heartbeat osd_stat(store_statfs(0x4fb40d000/0x0/0x4ffc00000, data 0x141578/0x260000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84353024 unmapped: 20471808 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84353024 unmapped: 20471808 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 203 heartbeat osd_stat(store_statfs(0x4fb40d000/0x0/0x4ffc00000, data 0x141578/0x260000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84353024 unmapped: 20471808 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 203 heartbeat osd_stat(store_statfs(0x4fb40d000/0x0/0x4ffc00000, data 0x141578/0x260000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84353024 unmapped: 20471808 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1121123 data_alloc: 218103808 data_used: 557056
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 203 heartbeat osd_stat(store_statfs(0x4fb40d000/0x0/0x4ffc00000, data 0x141578/0x260000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84353024 unmapped: 20471808 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 203 heartbeat osd_stat(store_statfs(0x4fb40d000/0x0/0x4ffc00000, data 0x141578/0x260000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84353024 unmapped: 20471808 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84353024 unmapped: 20471808 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84353024 unmapped: 20471808 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84353024 unmapped: 20471808 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1121123 data_alloc: 218103808 data_used: 557056
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84353024 unmapped: 20471808 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 203 heartbeat osd_stat(store_statfs(0x4fb40d000/0x0/0x4ffc00000, data 0x141578/0x260000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84353024 unmapped: 20471808 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84353024 unmapped: 20471808 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84353024 unmapped: 20471808 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84353024 unmapped: 20471808 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1121123 data_alloc: 218103808 data_used: 557056
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84353024 unmapped: 20471808 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 203 heartbeat osd_stat(store_statfs(0x4fb40d000/0x0/0x4ffc00000, data 0x141578/0x260000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84353024 unmapped: 20471808 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84353024 unmapped: 20471808 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84353024 unmapped: 20471808 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 203 heartbeat osd_stat(store_statfs(0x4fb40d000/0x0/0x4ffc00000, data 0x141578/0x260000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84353024 unmapped: 20471808 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1121123 data_alloc: 218103808 data_used: 557056
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84353024 unmapped: 20471808 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84353024 unmapped: 20471808 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 203 heartbeat osd_stat(store_statfs(0x4fb40d000/0x0/0x4ffc00000, data 0x141578/0x260000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84353024 unmapped: 20471808 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84353024 unmapped: 20471808 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84353024 unmapped: 20471808 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1121123 data_alloc: 218103808 data_used: 557056
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 203 heartbeat osd_stat(store_statfs(0x4fb40d000/0x0/0x4ffc00000, data 0x141578/0x260000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84353024 unmapped: 20471808 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84353024 unmapped: 20471808 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84353024 unmapped: 20471808 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84353024 unmapped: 20471808 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84500480 unmapped: 20324352 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1121123 data_alloc: 218103808 data_used: 557056
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: do_command 'config diff' '{prefix=config diff}'
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: do_command 'config show' '{prefix=config show}'
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: do_command 'counter dump' '{prefix=counter dump}'
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: do_command 'counter schema' '{prefix=counter schema}'
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84893696 unmapped: 19931136 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: osd.2 203 heartbeat osd_stat(store_statfs(0x4fb40d000/0x0/0x4ffc00000, data 0x141578/0x260000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 85016576 unmapped: 19808256 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 85082112 unmapped: 19742720 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:50 np0005464214 ceph-osd[90500]: do_command 'log dump' '{prefix=log dump}'
Oct  1 10:22:50 np0005464214 ceph-mgr[75103]: log_channel(audit) log [DBG] : from='client.15183 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Oct  1 10:22:50 np0005464214 podman[320945]: 2025-10-01 14:22:50.981103565 +0000 UTC m=+0.044142593 container create 571c6396770189dfa1c73ea2ae8a1a925a7de3cb8553714c688cf7e1cfb16af9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_kapitsa, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct  1 10:22:50 np0005464214 rsyslogd[1009]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct  1 10:22:51 np0005464214 systemd[1]: Started libpod-conmon-571c6396770189dfa1c73ea2ae8a1a925a7de3cb8553714c688cf7e1cfb16af9.scope.
Oct  1 10:22:51 np0005464214 systemd[1]: Started libcrun container.
Oct  1 10:22:51 np0005464214 podman[320945]: 2025-10-01 14:22:50.960652845 +0000 UTC m=+0.023691893 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 10:22:51 np0005464214 podman[320945]: 2025-10-01 14:22:51.069534003 +0000 UTC m=+0.132573061 container init 571c6396770189dfa1c73ea2ae8a1a925a7de3cb8553714c688cf7e1cfb16af9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_kapitsa, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct  1 10:22:51 np0005464214 podman[320945]: 2025-10-01 14:22:51.075804813 +0000 UTC m=+0.138843841 container start 571c6396770189dfa1c73ea2ae8a1a925a7de3cb8553714c688cf7e1cfb16af9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_kapitsa, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 10:22:51 np0005464214 pensive_kapitsa[320983]: 167 167
Oct  1 10:22:51 np0005464214 systemd[1]: libpod-571c6396770189dfa1c73ea2ae8a1a925a7de3cb8553714c688cf7e1cfb16af9.scope: Deactivated successfully.
Oct  1 10:22:51 np0005464214 podman[320945]: 2025-10-01 14:22:51.08924851 +0000 UTC m=+0.152287538 container attach 571c6396770189dfa1c73ea2ae8a1a925a7de3cb8553714c688cf7e1cfb16af9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_kapitsa, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 10:22:51 np0005464214 podman[320945]: 2025-10-01 14:22:51.089602211 +0000 UTC m=+0.152641239 container died 571c6396770189dfa1c73ea2ae8a1a925a7de3cb8553714c688cf7e1cfb16af9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_kapitsa, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct  1 10:22:51 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls"} v 0) v1
Oct  1 10:22:51 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1412661162' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Oct  1 10:22:51 np0005464214 systemd[1]: var-lib-containers-storage-overlay-63cb7be7151d75fd8b56aa9d7dc3ec9163be9d274aace8164022a23b91ab3dae-merged.mount: Deactivated successfully.
Oct  1 10:22:51 np0005464214 podman[320945]: 2025-10-01 14:22:51.141892072 +0000 UTC m=+0.204931100 container remove 571c6396770189dfa1c73ea2ae8a1a925a7de3cb8553714c688cf7e1cfb16af9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_kapitsa, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 10:22:51 np0005464214 systemd[1]: libpod-conmon-571c6396770189dfa1c73ea2ae8a1a925a7de3cb8553714c688cf7e1cfb16af9.scope: Deactivated successfully.
Oct  1 10:22:51 np0005464214 ceph-mgr[75103]: log_channel(audit) log [DBG] : from='client.15187 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Oct  1 10:22:51 np0005464214 podman[321034]: 2025-10-01 14:22:51.302996319 +0000 UTC m=+0.045033962 container create dad22e3df1c6afdfaeaaaa0c57319ebf450ccca1f434c10a211d7a62256b6d12 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_dubinsky, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct  1 10:22:51 np0005464214 systemd[1]: Started libpod-conmon-dad22e3df1c6afdfaeaaaa0c57319ebf450ccca1f434c10a211d7a62256b6d12.scope.
Oct  1 10:22:51 np0005464214 systemd[1]: Started libcrun container.
Oct  1 10:22:51 np0005464214 podman[321034]: 2025-10-01 14:22:51.281108954 +0000 UTC m=+0.023146617 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 10:22:51 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f86a64f38e5273b8d7efc9b37096ec17c78a1aa6b466fc0c8e8889beb3447a2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 10:22:51 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f86a64f38e5273b8d7efc9b37096ec17c78a1aa6b466fc0c8e8889beb3447a2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 10:22:51 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f86a64f38e5273b8d7efc9b37096ec17c78a1aa6b466fc0c8e8889beb3447a2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 10:22:51 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f86a64f38e5273b8d7efc9b37096ec17c78a1aa6b466fc0c8e8889beb3447a2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 10:22:51 np0005464214 podman[321034]: 2025-10-01 14:22:51.411546996 +0000 UTC m=+0.153584659 container init dad22e3df1c6afdfaeaaaa0c57319ebf450ccca1f434c10a211d7a62256b6d12 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_dubinsky, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  1 10:22:51 np0005464214 podman[321034]: 2025-10-01 14:22:51.419476278 +0000 UTC m=+0.161513931 container start dad22e3df1c6afdfaeaaaa0c57319ebf450ccca1f434c10a211d7a62256b6d12 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_dubinsky, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct  1 10:22:51 np0005464214 podman[321034]: 2025-10-01 14:22:51.429695713 +0000 UTC m=+0.171733366 container attach dad22e3df1c6afdfaeaaaa0c57319ebf450ccca1f434c10a211d7a62256b6d12 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_dubinsky, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 10:22:51 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Oct  1 10:22:51 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1812789747' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Oct  1 10:22:51 np0005464214 ceph-mgr[75103]: log_channel(audit) log [DBG] : from='client.15191 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct  1 10:22:51 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2415: 305 pgs: 305 active+clean; 456 KiB data, 220 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:22:51 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr versions"} v 0) v1
Oct  1 10:22:51 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3821354881' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Oct  1 10:22:51 np0005464214 ceph-mgr[75103]: log_channel(audit) log [DBG] : from='client.15195 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct  1 10:22:52 np0005464214 happy_dubinsky[321055]: {
Oct  1 10:22:52 np0005464214 happy_dubinsky[321055]:    "0": [
Oct  1 10:22:52 np0005464214 happy_dubinsky[321055]:        {
Oct  1 10:22:52 np0005464214 happy_dubinsky[321055]:            "devices": [
Oct  1 10:22:52 np0005464214 happy_dubinsky[321055]:                "/dev/loop3"
Oct  1 10:22:52 np0005464214 happy_dubinsky[321055]:            ],
Oct  1 10:22:52 np0005464214 happy_dubinsky[321055]:            "lv_name": "ceph_lv0",
Oct  1 10:22:52 np0005464214 happy_dubinsky[321055]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  1 10:22:52 np0005464214 happy_dubinsky[321055]:            "lv_size": "21470642176",
Oct  1 10:22:52 np0005464214 happy_dubinsky[321055]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 10:22:52 np0005464214 happy_dubinsky[321055]:            "lv_uuid": "wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS",
Oct  1 10:22:52 np0005464214 happy_dubinsky[321055]:            "name": "ceph_lv0",
Oct  1 10:22:52 np0005464214 happy_dubinsky[321055]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  1 10:22:52 np0005464214 happy_dubinsky[321055]:            "tags": {
Oct  1 10:22:52 np0005464214 happy_dubinsky[321055]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  1 10:22:52 np0005464214 happy_dubinsky[321055]:                "ceph.block_uuid": "wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS",
Oct  1 10:22:52 np0005464214 happy_dubinsky[321055]:                "ceph.cephx_lockbox_secret": "",
Oct  1 10:22:52 np0005464214 happy_dubinsky[321055]:                "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 10:22:52 np0005464214 happy_dubinsky[321055]:                "ceph.cluster_name": "ceph",
Oct  1 10:22:52 np0005464214 happy_dubinsky[321055]:                "ceph.crush_device_class": "",
Oct  1 10:22:52 np0005464214 happy_dubinsky[321055]:                "ceph.encrypted": "0",
Oct  1 10:22:52 np0005464214 happy_dubinsky[321055]:                "ceph.osd_fsid": "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982",
Oct  1 10:22:52 np0005464214 happy_dubinsky[321055]:                "ceph.osd_id": "0",
Oct  1 10:22:52 np0005464214 happy_dubinsky[321055]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 10:22:52 np0005464214 happy_dubinsky[321055]:                "ceph.type": "block",
Oct  1 10:22:52 np0005464214 happy_dubinsky[321055]:                "ceph.vdo": "0"
Oct  1 10:22:52 np0005464214 happy_dubinsky[321055]:            },
Oct  1 10:22:52 np0005464214 happy_dubinsky[321055]:            "type": "block",
Oct  1 10:22:52 np0005464214 happy_dubinsky[321055]:            "vg_name": "ceph_vg0"
Oct  1 10:22:52 np0005464214 happy_dubinsky[321055]:        }
Oct  1 10:22:52 np0005464214 happy_dubinsky[321055]:    ],
Oct  1 10:22:52 np0005464214 happy_dubinsky[321055]:    "1": [
Oct  1 10:22:52 np0005464214 happy_dubinsky[321055]:        {
Oct  1 10:22:52 np0005464214 happy_dubinsky[321055]:            "devices": [
Oct  1 10:22:52 np0005464214 happy_dubinsky[321055]:                "/dev/loop4"
Oct  1 10:22:52 np0005464214 happy_dubinsky[321055]:            ],
Oct  1 10:22:52 np0005464214 happy_dubinsky[321055]:            "lv_name": "ceph_lv1",
Oct  1 10:22:52 np0005464214 happy_dubinsky[321055]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  1 10:22:52 np0005464214 happy_dubinsky[321055]:            "lv_size": "21470642176",
Oct  1 10:22:52 np0005464214 happy_dubinsky[321055]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f5852bc7-e830-489a-b8a9-42dfbbe71426,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 10:22:52 np0005464214 happy_dubinsky[321055]:            "lv_uuid": "BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY",
Oct  1 10:22:52 np0005464214 happy_dubinsky[321055]:            "name": "ceph_lv1",
Oct  1 10:22:52 np0005464214 happy_dubinsky[321055]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  1 10:22:52 np0005464214 happy_dubinsky[321055]:            "tags": {
Oct  1 10:22:52 np0005464214 happy_dubinsky[321055]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  1 10:22:52 np0005464214 happy_dubinsky[321055]:                "ceph.block_uuid": "BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY",
Oct  1 10:22:52 np0005464214 happy_dubinsky[321055]:                "ceph.cephx_lockbox_secret": "",
Oct  1 10:22:52 np0005464214 happy_dubinsky[321055]:                "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 10:22:52 np0005464214 happy_dubinsky[321055]:                "ceph.cluster_name": "ceph",
Oct  1 10:22:52 np0005464214 happy_dubinsky[321055]:                "ceph.crush_device_class": "",
Oct  1 10:22:52 np0005464214 happy_dubinsky[321055]:                "ceph.encrypted": "0",
Oct  1 10:22:52 np0005464214 happy_dubinsky[321055]:                "ceph.osd_fsid": "f5852bc7-e830-489a-b8a9-42dfbbe71426",
Oct  1 10:22:52 np0005464214 happy_dubinsky[321055]:                "ceph.osd_id": "1",
Oct  1 10:22:52 np0005464214 happy_dubinsky[321055]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 10:22:52 np0005464214 happy_dubinsky[321055]:                "ceph.type": "block",
Oct  1 10:22:52 np0005464214 happy_dubinsky[321055]:                "ceph.vdo": "0"
Oct  1 10:22:52 np0005464214 happy_dubinsky[321055]:            },
Oct  1 10:22:52 np0005464214 happy_dubinsky[321055]:            "type": "block",
Oct  1 10:22:52 np0005464214 happy_dubinsky[321055]:            "vg_name": "ceph_vg1"
Oct  1 10:22:52 np0005464214 happy_dubinsky[321055]:        }
Oct  1 10:22:52 np0005464214 happy_dubinsky[321055]:    ],
Oct  1 10:22:52 np0005464214 happy_dubinsky[321055]:    "2": [
Oct  1 10:22:52 np0005464214 happy_dubinsky[321055]:        {
Oct  1 10:22:52 np0005464214 happy_dubinsky[321055]:            "devices": [
Oct  1 10:22:52 np0005464214 happy_dubinsky[321055]:                "/dev/loop5"
Oct  1 10:22:52 np0005464214 happy_dubinsky[321055]:            ],
Oct  1 10:22:52 np0005464214 happy_dubinsky[321055]:            "lv_name": "ceph_lv2",
Oct  1 10:22:52 np0005464214 happy_dubinsky[321055]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  1 10:22:52 np0005464214 happy_dubinsky[321055]:            "lv_size": "21470642176",
Oct  1 10:22:52 np0005464214 happy_dubinsky[321055]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c4c937e2-a8a8-47c3-af37-fdedb6fff1f9,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  1 10:22:52 np0005464214 happy_dubinsky[321055]:            "lv_uuid": "mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK",
Oct  1 10:22:52 np0005464214 happy_dubinsky[321055]:            "name": "ceph_lv2",
Oct  1 10:22:52 np0005464214 happy_dubinsky[321055]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  1 10:22:52 np0005464214 happy_dubinsky[321055]:            "tags": {
Oct  1 10:22:52 np0005464214 happy_dubinsky[321055]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  1 10:22:52 np0005464214 happy_dubinsky[321055]:                "ceph.block_uuid": "mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK",
Oct  1 10:22:52 np0005464214 happy_dubinsky[321055]:                "ceph.cephx_lockbox_secret": "",
Oct  1 10:22:52 np0005464214 happy_dubinsky[321055]:                "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 10:22:52 np0005464214 happy_dubinsky[321055]:                "ceph.cluster_name": "ceph",
Oct  1 10:22:52 np0005464214 happy_dubinsky[321055]:                "ceph.crush_device_class": "",
Oct  1 10:22:52 np0005464214 happy_dubinsky[321055]:                "ceph.encrypted": "0",
Oct  1 10:22:52 np0005464214 happy_dubinsky[321055]:                "ceph.osd_fsid": "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9",
Oct  1 10:22:52 np0005464214 happy_dubinsky[321055]:                "ceph.osd_id": "2",
Oct  1 10:22:52 np0005464214 happy_dubinsky[321055]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  1 10:22:52 np0005464214 happy_dubinsky[321055]:                "ceph.type": "block",
Oct  1 10:22:52 np0005464214 happy_dubinsky[321055]:                "ceph.vdo": "0"
Oct  1 10:22:52 np0005464214 happy_dubinsky[321055]:            },
Oct  1 10:22:52 np0005464214 happy_dubinsky[321055]:            "type": "block",
Oct  1 10:22:52 np0005464214 happy_dubinsky[321055]:            "vg_name": "ceph_vg2"
Oct  1 10:22:52 np0005464214 happy_dubinsky[321055]:        }
Oct  1 10:22:52 np0005464214 happy_dubinsky[321055]:    ]
Oct  1 10:22:52 np0005464214 happy_dubinsky[321055]: }
Oct  1 10:22:52 np0005464214 systemd[1]: libpod-dad22e3df1c6afdfaeaaaa0c57319ebf450ccca1f434c10a211d7a62256b6d12.scope: Deactivated successfully.
Oct  1 10:22:52 np0005464214 podman[321034]: 2025-10-01 14:22:52.149911148 +0000 UTC m=+0.891948811 container died dad22e3df1c6afdfaeaaaa0c57319ebf450ccca1f434c10a211d7a62256b6d12 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_dubinsky, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Oct  1 10:22:52 np0005464214 systemd[1]: var-lib-containers-storage-overlay-7f86a64f38e5273b8d7efc9b37096ec17c78a1aa6b466fc0c8e8889beb3447a2-merged.mount: Deactivated successfully.
Oct  1 10:22:52 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon stat"} v 0) v1
Oct  1 10:22:52 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2997416852' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Oct  1 10:22:52 np0005464214 podman[321034]: 2025-10-01 14:22:52.410820574 +0000 UTC m=+1.152858257 container remove dad22e3df1c6afdfaeaaaa0c57319ebf450ccca1f434c10a211d7a62256b6d12 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_dubinsky, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 10:22:52 np0005464214 systemd[1]: libpod-conmon-dad22e3df1c6afdfaeaaaa0c57319ebf450ccca1f434c10a211d7a62256b6d12.scope: Deactivated successfully.
Oct  1 10:22:52 np0005464214 ceph-mgr[75103]: log_channel(audit) log [DBG] : from='client.15201 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct  1 10:22:52 np0005464214 ceph-mgr[75103]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Oct  1 10:22:52 np0005464214 ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-mgr-compute-0-puxjpb[75099]: 2025-10-01T14:22:52.726+0000 7f13b53e1640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Oct  1 10:22:53 np0005464214 podman[321437]: 2025-10-01 14:22:53.073560783 +0000 UTC m=+0.043133021 container create 924d4da25976a5e67473d000916a4ec6890c811b9798a040aad68bf29f670174 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_elion, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  1 10:22:53 np0005464214 systemd[1]: Started libpod-conmon-924d4da25976a5e67473d000916a4ec6890c811b9798a040aad68bf29f670174.scope.
Oct  1 10:22:53 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} v 0) v1
Oct  1 10:22:53 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/206076676' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Oct  1 10:22:53 np0005464214 podman[321437]: 2025-10-01 14:22:53.049322173 +0000 UTC m=+0.018894421 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 10:22:53 np0005464214 systemd[1]: Started libcrun container.
Oct  1 10:22:53 np0005464214 podman[321437]: 2025-10-01 14:22:53.172635939 +0000 UTC m=+0.142208207 container init 924d4da25976a5e67473d000916a4ec6890c811b9798a040aad68bf29f670174 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_elion, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct  1 10:22:53 np0005464214 podman[321437]: 2025-10-01 14:22:53.185679724 +0000 UTC m=+0.155252002 container start 924d4da25976a5e67473d000916a4ec6890c811b9798a040aad68bf29f670174 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_elion, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Oct  1 10:22:53 np0005464214 modest_elion[321453]: 167 167
Oct  1 10:22:53 np0005464214 systemd[1]: libpod-924d4da25976a5e67473d000916a4ec6890c811b9798a040aad68bf29f670174.scope: Deactivated successfully.
Oct  1 10:22:53 np0005464214 podman[321437]: 2025-10-01 14:22:53.192512621 +0000 UTC m=+0.162084939 container attach 924d4da25976a5e67473d000916a4ec6890c811b9798a040aad68bf29f670174 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_elion, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct  1 10:22:53 np0005464214 podman[321437]: 2025-10-01 14:22:53.193019877 +0000 UTC m=+0.162592155 container died 924d4da25976a5e67473d000916a4ec6890c811b9798a040aad68bf29f670174 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_elion, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct  1 10:22:53 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader).osd e203 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  1 10:22:53 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "node ls"} v 0) v1
Oct  1 10:22:53 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3962468013' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Oct  1 10:22:53 np0005464214 systemd[1]: var-lib-containers-storage-overlay-f0f5f0d7a307aea69f06273be7ef471412d5a3f365358f27e0b58692a2743439-merged.mount: Deactivated successfully.
Oct  1 10:22:53 np0005464214 podman[321437]: 2025-10-01 14:22:53.252517997 +0000 UTC m=+0.222090235 container remove 924d4da25976a5e67473d000916a4ec6890c811b9798a040aad68bf29f670174 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_elion, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 10:22:53 np0005464214 systemd[1]: libpod-conmon-924d4da25976a5e67473d000916a4ec6890c811b9798a040aad68bf29f670174.scope: Deactivated successfully.
Oct  1 10:22:53 np0005464214 podman[321507]: 2025-10-01 14:22:53.418808768 +0000 UTC m=+0.049803623 container create 8cbfdb61071bcbafbd24da419ae1cf4c1edf560be01cc2e3bb8b6d8b73626736 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_mccarthy, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct  1 10:22:53 np0005464214 systemd[1]: Started libpod-conmon-8cbfdb61071bcbafbd24da419ae1cf4c1edf560be01cc2e3bb8b6d8b73626736.scope.
Oct  1 10:22:53 np0005464214 systemd[1]: Started libcrun container.
Oct  1 10:22:53 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6fda86f7ca19c2dfbfed63e24057cedfdb38a41f027f7ee2a603f9c9ae0f31eb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  1 10:22:53 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6fda86f7ca19c2dfbfed63e24057cedfdb38a41f027f7ee2a603f9c9ae0f31eb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  1 10:22:53 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6fda86f7ca19c2dfbfed63e24057cedfdb38a41f027f7ee2a603f9c9ae0f31eb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  1 10:22:53 np0005464214 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6fda86f7ca19c2dfbfed63e24057cedfdb38a41f027f7ee2a603f9c9ae0f31eb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  1 10:22:53 np0005464214 podman[321507]: 2025-10-01 14:22:53.401056825 +0000 UTC m=+0.032051710 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  1 10:22:53 np0005464214 podman[321507]: 2025-10-01 14:22:53.498351265 +0000 UTC m=+0.129346130 container init 8cbfdb61071bcbafbd24da419ae1cf4c1edf560be01cc2e3bb8b6d8b73626736 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_mccarthy, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct  1 10:22:53 np0005464214 podman[321507]: 2025-10-01 14:22:53.506968208 +0000 UTC m=+0.137963063 container start 8cbfdb61071bcbafbd24da419ae1cf4c1edf560be01cc2e3bb8b6d8b73626736 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_mccarthy, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  1 10:22:53 np0005464214 podman[321507]: 2025-10-01 14:22:53.510929164 +0000 UTC m=+0.141924019 container attach 8cbfdb61071bcbafbd24da419ae1cf4c1edf560be01cc2e3bb8b6d8b73626736 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_mccarthy, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  1 10:22:53 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush class ls"} v 0) v1
Oct  1 10:22:53 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1701630211' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Oct  1 10:22:53 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump", "format": "json-pretty"} v 0) v1
Oct  1 10:22:53 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4144850208' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Oct  1 10:22:53 np0005464214 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2416: 305 pgs: 305 active+clean; 456 KiB data, 220 MiB used, 60 GiB / 60 GiB avail
Oct  1 10:22:53 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush dump"} v 0) v1
Oct  1 10:22:53 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2033895320' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Oct  1 10:22:54 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "format": "json-pretty"} v 0) v1
Oct  1 10:22:54 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2490354223' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Oct  1 10:22:54 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush rule ls"} v 0) v1
Oct  1 10:22:54 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2266640394' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Oct  1 10:22:54 np0005464214 stupefied_mccarthy[321540]: {
Oct  1 10:22:54 np0005464214 stupefied_mccarthy[321540]:    "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982": {
Oct  1 10:22:54 np0005464214 stupefied_mccarthy[321540]:        "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 10:22:54 np0005464214 stupefied_mccarthy[321540]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  1 10:22:54 np0005464214 stupefied_mccarthy[321540]:        "osd_id": 0,
Oct  1 10:22:54 np0005464214 stupefied_mccarthy[321540]:        "osd_uuid": "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982",
Oct  1 10:22:54 np0005464214 stupefied_mccarthy[321540]:        "type": "bluestore"
Oct  1 10:22:54 np0005464214 stupefied_mccarthy[321540]:    },
Oct  1 10:22:54 np0005464214 stupefied_mccarthy[321540]:    "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9": {
Oct  1 10:22:54 np0005464214 stupefied_mccarthy[321540]:        "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 10:22:54 np0005464214 stupefied_mccarthy[321540]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  1 10:22:54 np0005464214 stupefied_mccarthy[321540]:        "osd_id": 2,
Oct  1 10:22:54 np0005464214 stupefied_mccarthy[321540]:        "osd_uuid": "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9",
Oct  1 10:22:54 np0005464214 stupefied_mccarthy[321540]:        "type": "bluestore"
Oct  1 10:22:54 np0005464214 stupefied_mccarthy[321540]:    },
Oct  1 10:22:54 np0005464214 stupefied_mccarthy[321540]:    "f5852bc7-e830-489a-b8a9-42dfbbe71426": {
Oct  1 10:22:54 np0005464214 stupefied_mccarthy[321540]:        "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct  1 10:22:54 np0005464214 stupefied_mccarthy[321540]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  1 10:22:54 np0005464214 stupefied_mccarthy[321540]:        "osd_id": 1,
Oct  1 10:22:54 np0005464214 stupefied_mccarthy[321540]:        "osd_uuid": "f5852bc7-e830-489a-b8a9-42dfbbe71426",
Oct  1 10:22:54 np0005464214 stupefied_mccarthy[321540]:        "type": "bluestore"
Oct  1 10:22:54 np0005464214 stupefied_mccarthy[321540]:    }
Oct  1 10:22:54 np0005464214 stupefied_mccarthy[321540]: }
Oct  1 10:22:54 np0005464214 systemd[1]: libpod-8cbfdb61071bcbafbd24da419ae1cf4c1edf560be01cc2e3bb8b6d8b73626736.scope: Deactivated successfully.
Oct  1 10:22:54 np0005464214 podman[321507]: 2025-10-01 14:22:54.434433286 +0000 UTC m=+1.065428131 container died 8cbfdb61071bcbafbd24da419ae1cf4c1edf560be01cc2e3bb8b6d8b73626736 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_mccarthy, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  1 10:22:54 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls", "format": "json-pretty"} v 0) v1
Oct  1 10:22:54 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1842463792' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Oct  1 10:22:54 np0005464214 systemd[1]: var-lib-containers-storage-overlay-6fda86f7ca19c2dfbfed63e24057cedfdb38a41f027f7ee2a603f9c9ae0f31eb-merged.mount: Deactivated successfully.
Oct  1 10:22:54 np0005464214 podman[321507]: 2025-10-01 14:22:54.492690296 +0000 UTC m=+1.123685151 container remove 8cbfdb61071bcbafbd24da419ae1cf4c1edf560be01cc2e3bb8b6d8b73626736 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_mccarthy, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  1 10:22:54 np0005464214 systemd[1]: libpod-conmon-8cbfdb61071bcbafbd24da419ae1cf4c1edf560be01cc2e3bb8b6d8b73626736.scope: Deactivated successfully.
Oct  1 10:22:54 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  1 10:22:54 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 10:22:54 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  1 10:22:54 np0005464214 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct  1 10:22:54 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev 706ee2d7-79b4-4f58-8d5b-a90b62ae4702 does not exist
Oct  1 10:22:54 np0005464214 ceph-mgr[75103]: [progress WARNING root] complete: ev 9acf9a75-20f7-4a14-beba-c8578f8043df does not exist
Oct  1 10:22:54 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush show-tunables"} v 0) v1
Oct  1 10:22:54 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/143039329' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Oct  1 10:22:54 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services", "format": "json-pretty"} v 0) v1
Oct  1 10:22:54 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/260546938' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79568896 unmapped: 23986176 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79577088 unmapped: 23977984 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 160 heartbeat osd_stat(store_statfs(0x4fb5b6000/0x0/0x4ffc00000, data 0x1167e3d/0x1257000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79577088 unmapped: 23977984 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 160 heartbeat osd_stat(store_statfs(0x4fb5b6000/0x0/0x4ffc00000, data 0x1167e3d/0x1257000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099284 data_alloc: 218103808 data_used: 393216
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79577088 unmapped: 23977984 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79577088 unmapped: 23977984 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79577088 unmapped: 23977984 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 160 heartbeat osd_stat(store_statfs(0x4fb5b6000/0x0/0x4ffc00000, data 0x1167e3d/0x1257000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79577088 unmapped: 23977984 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79577088 unmapped: 23977984 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099284 data_alloc: 218103808 data_used: 393216
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79577088 unmapped: 23977984 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79577088 unmapped: 23977984 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 160 heartbeat osd_stat(store_statfs(0x4fb5b6000/0x0/0x4ffc00000, data 0x1167e3d/0x1257000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79577088 unmapped: 23977984 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79577088 unmapped: 23977984 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79577088 unmapped: 23977984 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099284 data_alloc: 218103808 data_used: 393216
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79577088 unmapped: 23977984 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 2400.1 total, 600.0 interval#012Cumulative writes: 7951 writes, 30K keys, 7951 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s#012Cumulative WAL: 7951 writes, 1749 syncs, 4.55 writes per sync, written: 0.02 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 740 writes, 1899 keys, 740 commit groups, 1.0 writes per commit group, ingest: 1.08 MB, 0.00 MB/s#012Interval WAL: 740 writes, 319 syncs, 2.32 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 160 heartbeat osd_stat(store_statfs(0x4fb5b6000/0x0/0x4ffc00000, data 0x1167e3d/0x1257000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79577088 unmapped: 23977984 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79577088 unmapped: 23977984 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79577088 unmapped: 23977984 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79577088 unmapped: 23977984 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099284 data_alloc: 218103808 data_used: 393216
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79577088 unmapped: 23977984 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 160 heartbeat osd_stat(store_statfs(0x4fb5b6000/0x0/0x4ffc00000, data 0x1167e3d/0x1257000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 160 heartbeat osd_stat(store_statfs(0x4fb5b6000/0x0/0x4ffc00000, data 0x1167e3d/0x1257000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79577088 unmapped: 23977984 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79577088 unmapped: 23977984 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79577088 unmapped: 23977984 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 160 heartbeat osd_stat(store_statfs(0x4fb5b6000/0x0/0x4ffc00000, data 0x1167e3d/0x1257000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79577088 unmapped: 23977984 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099284 data_alloc: 218103808 data_used: 393216
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 160 heartbeat osd_stat(store_statfs(0x4fb5b6000/0x0/0x4ffc00000, data 0x1167e3d/0x1257000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79577088 unmapped: 23977984 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 160 heartbeat osd_stat(store_statfs(0x4fb5b6000/0x0/0x4ffc00000, data 0x1167e3d/0x1257000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79577088 unmapped: 23977984 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79577088 unmapped: 23977984 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 160 heartbeat osd_stat(store_statfs(0x4fb5b6000/0x0/0x4ffc00000, data 0x1167e3d/0x1257000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79577088 unmapped: 23977984 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79577088 unmapped: 23977984 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099284 data_alloc: 218103808 data_used: 393216
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79577088 unmapped: 23977984 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79577088 unmapped: 23977984 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79577088 unmapped: 23977984 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 160 heartbeat osd_stat(store_statfs(0x4fb5b6000/0x0/0x4ffc00000, data 0x1167e3d/0x1257000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79577088 unmapped: 23977984 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79577088 unmapped: 23977984 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099284 data_alloc: 218103808 data_used: 393216
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79577088 unmapped: 23977984 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79577088 unmapped: 23977984 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79577088 unmapped: 23977984 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 160 heartbeat osd_stat(store_statfs(0x4fb5b6000/0x0/0x4ffc00000, data 0x1167e3d/0x1257000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79577088 unmapped: 23977984 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79577088 unmapped: 23977984 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099284 data_alloc: 218103808 data_used: 393216
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79577088 unmapped: 23977984 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79577088 unmapped: 23977984 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79577088 unmapped: 23977984 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79577088 unmapped: 23977984 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 160 heartbeat osd_stat(store_statfs(0x4fb5b6000/0x0/0x4ffc00000, data 0x1167e3d/0x1257000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79577088 unmapped: 23977984 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099284 data_alloc: 218103808 data_used: 393216
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79577088 unmapped: 23977984 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79577088 unmapped: 23977984 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79577088 unmapped: 23977984 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79577088 unmapped: 23977984 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79577088 unmapped: 23977984 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 160 heartbeat osd_stat(store_statfs(0x4fb5b6000/0x0/0x4ffc00000, data 0x1167e3d/0x1257000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099284 data_alloc: 218103808 data_used: 393216
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79577088 unmapped: 23977984 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 160 heartbeat osd_stat(store_statfs(0x4fb5b6000/0x0/0x4ffc00000, data 0x1167e3d/0x1257000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79577088 unmapped: 23977984 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79577088 unmapped: 23977984 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79577088 unmapped: 23977984 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79577088 unmapped: 23977984 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099284 data_alloc: 218103808 data_used: 393216
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 160 heartbeat osd_stat(store_statfs(0x4fb5b6000/0x0/0x4ffc00000, data 0x1167e3d/0x1257000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79577088 unmapped: 23977984 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79577088 unmapped: 23977984 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 160 heartbeat osd_stat(store_statfs(0x4fb5b6000/0x0/0x4ffc00000, data 0x1167e3d/0x1257000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79577088 unmapped: 23977984 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79577088 unmapped: 23977984 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79577088 unmapped: 23977984 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099284 data_alloc: 218103808 data_used: 393216
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 160 heartbeat osd_stat(store_statfs(0x4fb5b6000/0x0/0x4ffc00000, data 0x1167e3d/0x1257000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79577088 unmapped: 23977984 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 160 heartbeat osd_stat(store_statfs(0x4fb5b6000/0x0/0x4ffc00000, data 0x1167e3d/0x1257000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79577088 unmapped: 23977984 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79577088 unmapped: 23977984 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79577088 unmapped: 23977984 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79577088 unmapped: 23977984 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 160 heartbeat osd_stat(store_statfs(0x4fb5b6000/0x0/0x4ffc00000, data 0x1167e3d/0x1257000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099284 data_alloc: 218103808 data_used: 393216
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79577088 unmapped: 23977984 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79577088 unmapped: 23977984 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 160 heartbeat osd_stat(store_statfs(0x4fb5b6000/0x0/0x4ffc00000, data 0x1167e3d/0x1257000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79577088 unmapped: 23977984 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79577088 unmapped: 23977984 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79577088 unmapped: 23977984 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 160 heartbeat osd_stat(store_statfs(0x4fb5b6000/0x0/0x4ffc00000, data 0x1167e3d/0x1257000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099284 data_alloc: 218103808 data_used: 393216
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79577088 unmapped: 23977984 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79577088 unmapped: 23977984 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79577088 unmapped: 23977984 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 160 heartbeat osd_stat(store_statfs(0x4fb5b6000/0x0/0x4ffc00000, data 0x1167e3d/0x1257000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79577088 unmapped: 23977984 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79577088 unmapped: 23977984 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099284 data_alloc: 218103808 data_used: 393216
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79577088 unmapped: 23977984 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79577088 unmapped: 23977984 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 160 heartbeat osd_stat(store_statfs(0x4fb5b6000/0x0/0x4ffc00000, data 0x1167e3d/0x1257000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79577088 unmapped: 23977984 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 160 heartbeat osd_stat(store_statfs(0x4fb5b6000/0x0/0x4ffc00000, data 0x1167e3d/0x1257000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79577088 unmapped: 23977984 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79577088 unmapped: 23977984 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099284 data_alloc: 218103808 data_used: 393216
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 160 heartbeat osd_stat(store_statfs(0x4fb5b6000/0x0/0x4ffc00000, data 0x1167e3d/0x1257000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79577088 unmapped: 23977984 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 217.385223389s of 217.396347046s, submitted: 13
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79675392 unmapped: 23879680 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79749120 unmapped: 23805952 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79749120 unmapped: 23805952 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 160 heartbeat osd_stat(store_statfs(0x4fb5b7000/0x0/0x4ffc00000, data 0x1167e3d/0x1257000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79749120 unmapped: 23805952 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1098404 data_alloc: 218103808 data_used: 393216
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79749120 unmapped: 23805952 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79749120 unmapped: 23805952 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79749120 unmapped: 23805952 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79749120 unmapped: 23805952 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 160 heartbeat osd_stat(store_statfs(0x4fb5b7000/0x0/0x4ffc00000, data 0x1167e3d/0x1257000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79749120 unmapped: 23805952 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1098404 data_alloc: 218103808 data_used: 393216
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79749120 unmapped: 23805952 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 160 heartbeat osd_stat(store_statfs(0x4fb5b7000/0x0/0x4ffc00000, data 0x1167e3d/0x1257000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79749120 unmapped: 23805952 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79749120 unmapped: 23805952 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79749120 unmapped: 23805952 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79749120 unmapped: 23805952 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 160 heartbeat osd_stat(store_statfs(0x4fb5b7000/0x0/0x4ffc00000, data 0x1167e3d/0x1257000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1098404 data_alloc: 218103808 data_used: 393216
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79749120 unmapped: 23805952 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79749120 unmapped: 23805952 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 160 heartbeat osd_stat(store_statfs(0x4fb5b7000/0x0/0x4ffc00000, data 0x1167e3d/0x1257000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79749120 unmapped: 23805952 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79749120 unmapped: 23805952 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79749120 unmapped: 23805952 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1098404 data_alloc: 218103808 data_used: 393216
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79749120 unmapped: 23805952 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79749120 unmapped: 23805952 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 160 heartbeat osd_stat(store_statfs(0x4fb5b7000/0x0/0x4ffc00000, data 0x1167e3d/0x1257000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79749120 unmapped: 23805952 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79749120 unmapped: 23805952 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79749120 unmapped: 23805952 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1098404 data_alloc: 218103808 data_used: 393216
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 160 heartbeat osd_stat(store_statfs(0x4fb5b7000/0x0/0x4ffc00000, data 0x1167e3d/0x1257000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79749120 unmapped: 23805952 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79749120 unmapped: 23805952 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 160 heartbeat osd_stat(store_statfs(0x4fb5b7000/0x0/0x4ffc00000, data 0x1167e3d/0x1257000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79749120 unmapped: 23805952 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79749120 unmapped: 23805952 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79749120 unmapped: 23805952 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 160 heartbeat osd_stat(store_statfs(0x4fb5b7000/0x0/0x4ffc00000, data 0x1167e3d/0x1257000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1098404 data_alloc: 218103808 data_used: 393216
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79749120 unmapped: 23805952 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 160 heartbeat osd_stat(store_statfs(0x4fb5b7000/0x0/0x4ffc00000, data 0x1167e3d/0x1257000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79749120 unmapped: 23805952 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79749120 unmapped: 23805952 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79749120 unmapped: 23805952 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79749120 unmapped: 23805952 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1098404 data_alloc: 218103808 data_used: 393216
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79749120 unmapped: 23805952 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79749120 unmapped: 23805952 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 160 heartbeat osd_stat(store_statfs(0x4fb5b7000/0x0/0x4ffc00000, data 0x1167e3d/0x1257000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79749120 unmapped: 23805952 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79749120 unmapped: 23805952 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79749120 unmapped: 23805952 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1098404 data_alloc: 218103808 data_used: 393216
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79749120 unmapped: 23805952 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79749120 unmapped: 23805952 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 160 heartbeat osd_stat(store_statfs(0x4fb5b7000/0x0/0x4ffc00000, data 0x1167e3d/0x1257000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79749120 unmapped: 23805952 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79749120 unmapped: 23805952 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79749120 unmapped: 23805952 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1098404 data_alloc: 218103808 data_used: 393216
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79749120 unmapped: 23805952 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79749120 unmapped: 23805952 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79749120 unmapped: 23805952 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 160 heartbeat osd_stat(store_statfs(0x4fb5b7000/0x0/0x4ffc00000, data 0x1167e3d/0x1257000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79749120 unmapped: 23805952 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 160 heartbeat osd_stat(store_statfs(0x4fb5b7000/0x0/0x4ffc00000, data 0x1167e3d/0x1257000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79749120 unmapped: 23805952 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1098404 data_alloc: 218103808 data_used: 393216
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79749120 unmapped: 23805952 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 160 heartbeat osd_stat(store_statfs(0x4fb5b7000/0x0/0x4ffc00000, data 0x1167e3d/0x1257000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79749120 unmapped: 23805952 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79749120 unmapped: 23805952 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79749120 unmapped: 23805952 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79749120 unmapped: 23805952 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 160 heartbeat osd_stat(store_statfs(0x4fb5b7000/0x0/0x4ffc00000, data 0x1167e3d/0x1257000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1098404 data_alloc: 218103808 data_used: 393216
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79749120 unmapped: 23805952 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 160 heartbeat osd_stat(store_statfs(0x4fb5b7000/0x0/0x4ffc00000, data 0x1167e3d/0x1257000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79749120 unmapped: 23805952 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79749120 unmapped: 23805952 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79749120 unmapped: 23805952 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79749120 unmapped: 23805952 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1098404 data_alloc: 218103808 data_used: 393216
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79749120 unmapped: 23805952 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 160 heartbeat osd_stat(store_statfs(0x4fb5b7000/0x0/0x4ffc00000, data 0x1167e3d/0x1257000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79749120 unmapped: 23805952 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79749120 unmapped: 23805952 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79749120 unmapped: 23805952 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79749120 unmapped: 23805952 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1098404 data_alloc: 218103808 data_used: 393216
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79749120 unmapped: 23805952 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 160 heartbeat osd_stat(store_statfs(0x4fb5b7000/0x0/0x4ffc00000, data 0x1167e3d/0x1257000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79749120 unmapped: 23805952 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79749120 unmapped: 23805952 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79749120 unmapped: 23805952 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 160 heartbeat osd_stat(store_statfs(0x4fb5b7000/0x0/0x4ffc00000, data 0x1167e3d/0x1257000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79749120 unmapped: 23805952 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1098404 data_alloc: 218103808 data_used: 393216
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 160 heartbeat osd_stat(store_statfs(0x4fb5b7000/0x0/0x4ffc00000, data 0x1167e3d/0x1257000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79749120 unmapped: 23805952 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79749120 unmapped: 23805952 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 160 heartbeat osd_stat(store_statfs(0x4fb5b7000/0x0/0x4ffc00000, data 0x1167e3d/0x1257000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79749120 unmapped: 23805952 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79749120 unmapped: 23805952 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79749120 unmapped: 23805952 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1098404 data_alloc: 218103808 data_used: 393216
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 160 heartbeat osd_stat(store_statfs(0x4fb5b7000/0x0/0x4ffc00000, data 0x1167e3d/0x1257000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79749120 unmapped: 23805952 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79749120 unmapped: 23805952 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79749120 unmapped: 23805952 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 160 heartbeat osd_stat(store_statfs(0x4fb5b7000/0x0/0x4ffc00000, data 0x1167e3d/0x1257000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79749120 unmapped: 23805952 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79749120 unmapped: 23805952 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1098404 data_alloc: 218103808 data_used: 393216
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79749120 unmapped: 23805952 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79749120 unmapped: 23805952 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79749120 unmapped: 23805952 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79749120 unmapped: 23805952 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 160 heartbeat osd_stat(store_statfs(0x4fb5b7000/0x0/0x4ffc00000, data 0x1167e3d/0x1257000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79749120 unmapped: 23805952 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 160 heartbeat osd_stat(store_statfs(0x4fb5b7000/0x0/0x4ffc00000, data 0x1167e3d/0x1257000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1098404 data_alloc: 218103808 data_used: 393216
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79749120 unmapped: 23805952 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79749120 unmapped: 23805952 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79749120 unmapped: 23805952 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79749120 unmapped: 23805952 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 160 heartbeat osd_stat(store_statfs(0x4fb5b7000/0x0/0x4ffc00000, data 0x1167e3d/0x1257000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79749120 unmapped: 23805952 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1098404 data_alloc: 218103808 data_used: 393216
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79749120 unmapped: 23805952 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79749120 unmapped: 23805952 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79749120 unmapped: 23805952 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 160 heartbeat osd_stat(store_statfs(0x4fb5b7000/0x0/0x4ffc00000, data 0x1167e3d/0x1257000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79749120 unmapped: 23805952 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79749120 unmapped: 23805952 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1098404 data_alloc: 218103808 data_used: 393216
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79749120 unmapped: 23805952 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79749120 unmapped: 23805952 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79749120 unmapped: 23805952 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79749120 unmapped: 23805952 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 160 heartbeat osd_stat(store_statfs(0x4fb5b7000/0x0/0x4ffc00000, data 0x1167e3d/0x1257000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79749120 unmapped: 23805952 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 99.143211365s of 99.444923401s, submitted: 90
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1105670 data_alloc: 218103808 data_used: 393216
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 160 handle_osd_map epochs [161,161], i have 160, src has [1,161]
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 161 ms_handle_reset con 0x55f3e051d800 session 0x55f3e07763c0
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 80879616 unmapped: 22675456 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 161 handle_osd_map epochs [162,162], i have 161, src has [1,162]
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 162 ms_handle_reset con 0x55f3df702400 session 0x55f3e07b61e0
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 80896000 unmapped: 22659072 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 162 handle_osd_map epochs [163,163], i have 162, src has [1,163]
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 163 ms_handle_reset con 0x55f3e051d800 session 0x55f3e07b65a0
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 80846848 unmapped: 22708224 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 163 handle_osd_map epochs [163,164], i have 163, src has [1,164]
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 164 ms_handle_reset con 0x55f3e066d400 session 0x55f3e05541e0
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 80855040 unmapped: 22700032 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 164 handle_osd_map epochs [165,165], i have 164, src has [1,165]
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 165 ms_handle_reset con 0x55f3e066d800 session 0x55f3e05545a0
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 80887808 unmapped: 22667264 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 165 heartbeat osd_stat(store_statfs(0x4fb5a1000/0x0/0x4ffc00000, data 0x11702cf/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1135560 data_alloc: 218103808 data_used: 409600
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 80912384 unmapped: 22642688 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 165 handle_osd_map epochs [166,166], i have 165, src has [1,166]
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 166 ms_handle_reset con 0x55f3e066dc00 session 0x55f3e0776780
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 80920576 unmapped: 22634496 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 166 heartbeat osd_stat(store_statfs(0x4fb598000/0x0/0x4ffc00000, data 0x1173a1f/0x1275000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 166 heartbeat osd_stat(store_statfs(0x4fb598000/0x0/0x4ffc00000, data 0x1173a1f/0x1275000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 80953344 unmapped: 22601728 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 166 handle_osd_map epochs [166,167], i have 166, src has [1,167]
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 167 ms_handle_reset con 0x55f3df702400 session 0x55f3e07b61e0
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 80961536 unmapped: 22593536 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 167 heartbeat osd_stat(store_statfs(0x4fb594000/0x0/0x4ffc00000, data 0x117559c/0x1278000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 167 heartbeat osd_stat(store_statfs(0x4fb594000/0x0/0x4ffc00000, data 0x117559c/0x1278000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 80986112 unmapped: 22568960 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.791228294s of 10.066333771s, submitted: 80
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 167 handle_osd_map epochs [168,168], i have 167, src has [1,168]
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1226051 data_alloc: 218103808 data_used: 430080
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 168 ms_handle_reset con 0x55f3e051d800 session 0x55f3e07c50e0
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 92790784 unmapped: 10764288 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 168 handle_osd_map epochs [169,169], i have 168, src has [1,169]
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 169 ms_handle_reset con 0x55f3e066d800 session 0x55f3e07c5c20
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 84533248 unmapped: 19021824 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 169 handle_osd_map epochs [169,170], i have 169, src has [1,170]
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 170 ms_handle_reset con 0x55f3e066d400 session 0x55f3e079f2c0
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 170 heartbeat osd_stat(store_statfs(0x4f63ef000/0x0/0x4ffc00000, data 0x5178ceb/0x527b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 170 ms_handle_reset con 0x55f3e051d400 session 0x55f3e081e780
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 84566016 unmapped: 18989056 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 170 handle_osd_map epochs [171,171], i have 170, src has [1,171]
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 171 ms_handle_reset con 0x55f3df702400 session 0x55f3e081ed20
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 171 ms_handle_reset con 0x55f3e051d800 session 0x55f3e08343c0
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 84647936 unmapped: 18907136 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 171 handle_osd_map epochs [172,172], i have 171, src has [1,172]
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 172 ms_handle_reset con 0x55f3e066d400 session 0x55f3e0837e00
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 84680704 unmapped: 18874368 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 172 handle_osd_map epochs [172,173], i have 172, src has [1,173]
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 172 handle_osd_map epochs [173,173], i have 173, src has [1,173]
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1168325 data_alloc: 218103808 data_used: 442368
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 173 ms_handle_reset con 0x55f3e051d000 session 0x55f3e081e780
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 173 ms_handle_reset con 0x55f3e066d800 session 0x55f3e08352c0
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 173 heartbeat osd_stat(store_statfs(0x4fa3ee000/0x0/0x4ffc00000, data 0x117e5fe/0x127e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 84697088 unmapped: 18857984 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 173 heartbeat osd_stat(store_statfs(0x4fa3ee000/0x0/0x4ffc00000, data 0x117e5fe/0x127e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 84697088 unmapped: 18857984 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 173 handle_osd_map epochs [174,174], i have 173, src has [1,174]
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 84713472 unmapped: 18841600 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 84713472 unmapped: 18841600 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 174 heartbeat osd_stat(store_statfs(0x4fa3ea000/0x0/0x4ffc00000, data 0x118009d/0x1281000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 84713472 unmapped: 18841600 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.991490364s of 10.120883942s, submitted: 285
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1170675 data_alloc: 218103808 data_used: 450560
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 174 handle_osd_map epochs [174,175], i have 174, src has [1,175]
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 175 ms_handle_reset con 0x55f3df702400 session 0x55f3e0523680
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 84729856 unmapped: 18825216 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 175 heartbeat osd_stat(store_statfs(0x4fa3e8000/0x0/0x4ffc00000, data 0x1181c65/0x1285000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 84729856 unmapped: 18825216 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 175 handle_osd_map epochs [175,176], i have 175, src has [1,176]
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 176 heartbeat osd_stat(store_statfs(0x4fa3e8000/0x0/0x4ffc00000, data 0x1181c65/0x1285000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 84729856 unmapped: 18825216 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 84729856 unmapped: 18825216 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 84729856 unmapped: 18825216 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1175743 data_alloc: 218103808 data_used: 454656
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 84729856 unmapped: 18825216 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 176 handle_osd_map epochs [177,177], i have 176, src has [1,177]
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 177 ms_handle_reset con 0x55f3e051d000 session 0x55f3e0523c20
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 84746240 unmapped: 18808832 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 84746240 unmapped: 18808832 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 177 heartbeat osd_stat(store_statfs(0x4fa3e3000/0x0/0x4ffc00000, data 0x11852ca/0x128a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 84746240 unmapped: 18808832 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 84746240 unmapped: 18808832 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1178004 data_alloc: 218103808 data_used: 454656
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 84746240 unmapped: 18808832 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 177 heartbeat osd_stat(store_statfs(0x4fa3e3000/0x0/0x4ffc00000, data 0x11852ca/0x128a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 84746240 unmapped: 18808832 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 84746240 unmapped: 18808832 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 84746240 unmapped: 18808832 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 84746240 unmapped: 18808832 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1178004 data_alloc: 218103808 data_used: 454656
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 84746240 unmapped: 18808832 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 177 heartbeat osd_stat(store_statfs(0x4fa3e3000/0x0/0x4ffc00000, data 0x11852ca/0x128a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 84746240 unmapped: 18808832 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 177 handle_osd_map epochs [177,178], i have 177, src has [1,178]
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 16.752992630s of 16.829257965s, submitted: 63
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 84746240 unmapped: 18808832 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 84746240 unmapped: 18808832 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 84746240 unmapped: 18808832 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1180978 data_alloc: 218103808 data_used: 454656
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 84746240 unmapped: 18808832 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 84746240 unmapped: 18808832 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 178 heartbeat osd_stat(store_statfs(0x4fa3e0000/0x0/0x4ffc00000, data 0x1186d2d/0x128d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 84746240 unmapped: 18808832 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 84746240 unmapped: 18808832 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 84746240 unmapped: 18808832 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1180978 data_alloc: 218103808 data_used: 454656
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 84746240 unmapped: 18808832 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 178 heartbeat osd_stat(store_statfs(0x4fa3e0000/0x0/0x4ffc00000, data 0x1186d2d/0x128d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 84746240 unmapped: 18808832 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 178 heartbeat osd_stat(store_statfs(0x4fa3e0000/0x0/0x4ffc00000, data 0x1186d2d/0x128d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 84746240 unmapped: 18808832 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 84746240 unmapped: 18808832 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 178 heartbeat osd_stat(store_statfs(0x4fa3e0000/0x0/0x4ffc00000, data 0x1186d2d/0x128d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 84746240 unmapped: 18808832 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 178 heartbeat osd_stat(store_statfs(0x4fa3e0000/0x0/0x4ffc00000, data 0x1186d2d/0x128d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1180978 data_alloc: 218103808 data_used: 454656
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 84746240 unmapped: 18808832 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 84746240 unmapped: 18808832 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 84746240 unmapped: 18808832 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 84746240 unmapped: 18808832 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 178 heartbeat osd_stat(store_statfs(0x4fa3e0000/0x0/0x4ffc00000, data 0x1186d2d/0x128d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 84746240 unmapped: 18808832 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1180978 data_alloc: 218103808 data_used: 454656
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 84746240 unmapped: 18808832 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 178 heartbeat osd_stat(store_statfs(0x4fa3e0000/0x0/0x4ffc00000, data 0x1186d2d/0x128d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 84746240 unmapped: 18808832 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 84746240 unmapped: 18808832 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 84746240 unmapped: 18808832 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 84746240 unmapped: 18808832 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1180978 data_alloc: 218103808 data_used: 454656
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 178 heartbeat osd_stat(store_statfs(0x4fa3e0000/0x0/0x4ffc00000, data 0x1186d2d/0x128d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 84746240 unmapped: 18808832 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 84746240 unmapped: 18808832 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 178 heartbeat osd_stat(store_statfs(0x4fa3e0000/0x0/0x4ffc00000, data 0x1186d2d/0x128d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 84746240 unmapped: 18808832 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 84746240 unmapped: 18808832 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 84746240 unmapped: 18808832 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1180978 data_alloc: 218103808 data_used: 454656
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 84746240 unmapped: 18808832 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 84746240 unmapped: 18808832 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 84746240 unmapped: 18808832 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 178 heartbeat osd_stat(store_statfs(0x4fa3e0000/0x0/0x4ffc00000, data 0x1186d2d/0x128d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 84746240 unmapped: 18808832 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 178 heartbeat osd_stat(store_statfs(0x4fa3e0000/0x0/0x4ffc00000, data 0x1186d2d/0x128d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 84746240 unmapped: 18808832 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1180978 data_alloc: 218103808 data_used: 454656
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 84746240 unmapped: 18808832 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 84746240 unmapped: 18808832 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 84746240 unmapped: 18808832 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 84746240 unmapped: 18808832 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 178 heartbeat osd_stat(store_statfs(0x4fa3e0000/0x0/0x4ffc00000, data 0x1186d2d/0x128d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 84746240 unmapped: 18808832 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1180978 data_alloc: 218103808 data_used: 454656
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 84746240 unmapped: 18808832 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 178 heartbeat osd_stat(store_statfs(0x4fa3e0000/0x0/0x4ffc00000, data 0x1186d2d/0x128d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 84746240 unmapped: 18808832 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 84746240 unmapped: 18808832 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 84746240 unmapped: 18808832 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 84746240 unmapped: 18808832 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1180978 data_alloc: 218103808 data_used: 454656
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 84746240 unmapped: 18808832 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 178 heartbeat osd_stat(store_statfs(0x4fa3e0000/0x0/0x4ffc00000, data 0x1186d2d/0x128d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 84746240 unmapped: 18808832 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 84746240 unmapped: 18808832 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 84746240 unmapped: 18808832 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 84746240 unmapped: 18808832 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1180978 data_alloc: 218103808 data_used: 454656
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 84746240 unmapped: 18808832 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 178 heartbeat osd_stat(store_statfs(0x4fa3e0000/0x0/0x4ffc00000, data 0x1186d2d/0x128d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 84746240 unmapped: 18808832 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 84746240 unmapped: 18808832 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 178 heartbeat osd_stat(store_statfs(0x4fa3e0000/0x0/0x4ffc00000, data 0x1186d2d/0x128d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 84746240 unmapped: 18808832 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 84746240 unmapped: 18808832 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1180978 data_alloc: 218103808 data_used: 454656
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 84746240 unmapped: 18808832 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 178 heartbeat osd_stat(store_statfs(0x4fa3e0000/0x0/0x4ffc00000, data 0x1186d2d/0x128d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 84746240 unmapped: 18808832 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 84746240 unmapped: 18808832 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 84746240 unmapped: 18808832 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 84754432 unmapped: 18800640 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1180978 data_alloc: 218103808 data_used: 454656
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 178 heartbeat osd_stat(store_statfs(0x4fa3e0000/0x0/0x4ffc00000, data 0x1186d2d/0x128d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 84754432 unmapped: 18800640 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 178 heartbeat osd_stat(store_statfs(0x4fa3e0000/0x0/0x4ffc00000, data 0x1186d2d/0x128d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 84754432 unmapped: 18800640 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 84754432 unmapped: 18800640 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 84754432 unmapped: 18800640 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 84754432 unmapped: 18800640 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1180978 data_alloc: 218103808 data_used: 454656
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 178 heartbeat osd_stat(store_statfs(0x4fa3e0000/0x0/0x4ffc00000, data 0x1186d2d/0x128d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 84754432 unmapped: 18800640 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 84754432 unmapped: 18800640 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 178 heartbeat osd_stat(store_statfs(0x4fa3e0000/0x0/0x4ffc00000, data 0x1186d2d/0x128d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 84754432 unmapped: 18800640 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 84754432 unmapped: 18800640 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 84754432 unmapped: 18800640 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1180978 data_alloc: 218103808 data_used: 454656
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 178 heartbeat osd_stat(store_statfs(0x4fa3e0000/0x0/0x4ffc00000, data 0x1186d2d/0x128d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 84754432 unmapped: 18800640 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 84754432 unmapped: 18800640 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 84754432 unmapped: 18800640 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 178 heartbeat osd_stat(store_statfs(0x4fa3e0000/0x0/0x4ffc00000, data 0x1186d2d/0x128d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 84754432 unmapped: 18800640 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 178 heartbeat osd_stat(store_statfs(0x4fa3e0000/0x0/0x4ffc00000, data 0x1186d2d/0x128d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 84754432 unmapped: 18800640 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1180978 data_alloc: 218103808 data_used: 454656
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 84754432 unmapped: 18800640 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 84754432 unmapped: 18800640 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 178 heartbeat osd_stat(store_statfs(0x4fa3e0000/0x0/0x4ffc00000, data 0x1186d2d/0x128d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 84754432 unmapped: 18800640 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush tree", "show_shadow": true} v 0) v1
Oct  1 10:22:55 np0005464214 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1238550680' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 84754432 unmapped: 18800640 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 84754432 unmapped: 18800640 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1180978 data_alloc: 218103808 data_used: 454656
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 84754432 unmapped: 18800640 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 84754432 unmapped: 18800640 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 178 heartbeat osd_stat(store_statfs(0x4fa3e0000/0x0/0x4ffc00000, data 0x1186d2d/0x128d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 84754432 unmapped: 18800640 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 84754432 unmapped: 18800640 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 84754432 unmapped: 18800640 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1180978 data_alloc: 218103808 data_used: 454656
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 84754432 unmapped: 18800640 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 84754432 unmapped: 18800640 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 84754432 unmapped: 18800640 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 178 heartbeat osd_stat(store_statfs(0x4fa3e0000/0x0/0x4ffc00000, data 0x1186d2d/0x128d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 178 heartbeat osd_stat(store_statfs(0x4fa3e0000/0x0/0x4ffc00000, data 0x1186d2d/0x128d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 84754432 unmapped: 18800640 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 84754432 unmapped: 18800640 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1180978 data_alloc: 218103808 data_used: 454656
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 84754432 unmapped: 18800640 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 84754432 unmapped: 18800640 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 178 heartbeat osd_stat(store_statfs(0x4fa3e0000/0x0/0x4ffc00000, data 0x1186d2d/0x128d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 178 heartbeat osd_stat(store_statfs(0x4fa3e0000/0x0/0x4ffc00000, data 0x1186d2d/0x128d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 84754432 unmapped: 18800640 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 84754432 unmapped: 18800640 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 84754432 unmapped: 18800640 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1180978 data_alloc: 218103808 data_used: 454656
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 84754432 unmapped: 18800640 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 178 heartbeat osd_stat(store_statfs(0x4fa3e0000/0x0/0x4ffc00000, data 0x1186d2d/0x128d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 84754432 unmapped: 18800640 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 84754432 unmapped: 18800640 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 178 heartbeat osd_stat(store_statfs(0x4fa3e0000/0x0/0x4ffc00000, data 0x1186d2d/0x128d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 84754432 unmapped: 18800640 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 178 heartbeat osd_stat(store_statfs(0x4fa3e0000/0x0/0x4ffc00000, data 0x1186d2d/0x128d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 84754432 unmapped: 18800640 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1180978 data_alloc: 218103808 data_used: 454656
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 84770816 unmapped: 18784256 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 84770816 unmapped: 18784256 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 178 heartbeat osd_stat(store_statfs(0x4fa3e0000/0x0/0x4ffc00000, data 0x1186d2d/0x128d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 84770816 unmapped: 18784256 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 84770816 unmapped: 18784256 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 84770816 unmapped: 18784256 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1180978 data_alloc: 218103808 data_used: 454656
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 178 ms_handle_reset con 0x55f3df702800 session 0x55f3e04c2f00
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: mgrc ms_handle_reset ms_handle_reset con 0x55f3dd734c00
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/2102413293
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/2102413293,v1:192.168.122.100:6801/2102413293]
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: mgrc handle_mgr_configure stats_period=5
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 84869120 unmapped: 18685952 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 178 heartbeat osd_stat(store_statfs(0x4fa3e0000/0x0/0x4ffc00000, data 0x1186d2d/0x128d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 178 ms_handle_reset con 0x55f3e066c800 session 0x55f3e04d4f00
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 178 ms_handle_reset con 0x55f3e066cc00 session 0x55f3e04c30e0
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 84869120 unmapped: 18685952 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 84869120 unmapped: 18685952 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 84869120 unmapped: 18685952 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 107.484535217s of 107.496047974s, submitted: 13
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 178 handle_osd_map epochs [179,179], i have 178, src has [1,179]
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 179 ms_handle_reset con 0x55f3e051c800 session 0x55f3e0854000
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 84893696 unmapped: 18661376 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1189769 data_alloc: 218103808 data_used: 462848
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 179 ms_handle_reset con 0x55f3e051c400 session 0x55f3e077da40
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 179 ms_handle_reset con 0x55f3de078800 session 0x55f3e05314a0
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 84893696 unmapped: 18661376 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 179 heartbeat osd_stat(store_statfs(0x4fa3da000/0x0/0x4ffc00000, data 0x1188cf0/0x1293000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 179 heartbeat osd_stat(store_statfs(0x4fa3da000/0x0/0x4ffc00000, data 0x1188cf0/0x1293000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 179 handle_osd_map epochs [179,180], i have 179, src has [1,180]
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 180 ms_handle_reset con 0x55f3df702400 session 0x55f3de5954a0
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 84951040 unmapped: 18604032 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 180 ms_handle_reset con 0x55f3e051c400 session 0x55f3e0776f00
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 180 ms_handle_reset con 0x55f3e051c800 session 0x55f3e04d45a0
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 84951040 unmapped: 18604032 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 180 heartbeat osd_stat(store_statfs(0x4fa3d6000/0x0/0x4ffc00000, data 0x118a8a0/0x1298000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 180 handle_osd_map epochs [181,181], i have 180, src has [1,181]
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 181 ms_handle_reset con 0x55f3e051d000 session 0x55f3dfee5860
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 84959232 unmapped: 18595840 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 181 handle_osd_map epochs [182,182], i have 181, src has [1,182]
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 182 ms_handle_reset con 0x55f3e0538000 session 0x55f3de0aa000
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85008384 unmapped: 18546688 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1199159 data_alloc: 218103808 data_used: 471040
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85008384 unmapped: 18546688 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 182 heartbeat osd_stat(store_statfs(0x4fa3d1000/0x0/0x4ffc00000, data 0x118dbc9/0x1299000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85008384 unmapped: 18546688 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85008384 unmapped: 18546688 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85008384 unmapped: 18546688 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.709639549s of 10.005803108s, submitted: 76
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 182 heartbeat osd_stat(store_statfs(0x4fa3d4000/0x0/0x4ffc00000, data 0x118dbec/0x129a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85016576 unmapped: 18538496 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 182 handle_osd_map epochs [183,183], i have 182, src has [1,183]
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 183 ms_handle_reset con 0x55f3df702400 session 0x55f3dd26b860
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204181 data_alloc: 218103808 data_used: 479232
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85024768 unmapped: 18530304 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 183 handle_osd_map epochs [184,184], i have 183, src has [1,184]
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 184 ms_handle_reset con 0x55f3e051c400 session 0x55f3dff43c20
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 184 heartbeat osd_stat(store_statfs(0x4fa3cd000/0x0/0x4ffc00000, data 0x1191343/0x129f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 184 handle_osd_map epochs [185,185], i have 184, src has [1,185]
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 185 heartbeat osd_stat(store_statfs(0x4fa3c9000/0x0/0x4ffc00000, data 0x1192dc2/0x12a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1208193 data_alloc: 218103808 data_used: 479232
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 185 handle_osd_map epochs [185,186], i have 185, src has [1,186]
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c8000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c8000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1210319 data_alloc: 218103808 data_used: 479232
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c8000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1210319 data_alloc: 218103808 data_used: 479232
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c8000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1210319 data_alloc: 218103808 data_used: 479232
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1210319 data_alloc: 218103808 data_used: 479232
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c8000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c8000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1210319 data_alloc: 218103808 data_used: 479232
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1210319 data_alloc: 218103808 data_used: 479232
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c8000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c8000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c8000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1210319 data_alloc: 218103808 data_used: 479232
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c8000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c8000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c8000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1210319 data_alloc: 218103808 data_used: 479232
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c8000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c8000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1210319 data_alloc: 218103808 data_used: 479232
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c8000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c8000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1210319 data_alloc: 218103808 data_used: 479232
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c8000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1210319 data_alloc: 218103808 data_used: 479232
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c8000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1210319 data_alloc: 218103808 data_used: 479232
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c8000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c8000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1210319 data_alloc: 218103808 data_used: 479232
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c8000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1210319 data_alloc: 218103808 data_used: 479232
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c8000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c8000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1210319 data_alloc: 218103808 data_used: 479232
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c8000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c8000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1210319 data_alloc: 218103808 data_used: 479232
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c8000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1210319 data_alloc: 218103808 data_used: 479232
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c8000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1210319 data_alloc: 218103808 data_used: 479232
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c8000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1210319 data_alloc: 218103808 data_used: 479232
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c8000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1210319 data_alloc: 218103808 data_used: 479232
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c8000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1210319 data_alloc: 218103808 data_used: 479232
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c8000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1210319 data_alloc: 218103808 data_used: 479232
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c8000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1210319 data_alloc: 218103808 data_used: 479232
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c8000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c8000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1210319 data_alloc: 218103808 data_used: 479232
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1210319 data_alloc: 218103808 data_used: 479232
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c8000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c8000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1210319 data_alloc: 218103808 data_used: 479232
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c8000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c8000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c8000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1210319 data_alloc: 218103808 data_used: 479232
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c8000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1210319 data_alloc: 218103808 data_used: 479232
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c8000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c8000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1210319 data_alloc: 218103808 data_used: 479232
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c8000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1210319 data_alloc: 218103808 data_used: 479232
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c8000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1210319 data_alloc: 218103808 data_used: 479232
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c8000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1210319 data_alloc: 218103808 data_used: 479232
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c8000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c8000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c8000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1210319 data_alloc: 218103808 data_used: 479232
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c8000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1210319 data_alloc: 218103808 data_used: 479232
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c8000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c8000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1210319 data_alloc: 218103808 data_used: 479232
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c8000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1210319 data_alloc: 218103808 data_used: 479232
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c8000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c8000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1210319 data_alloc: 218103808 data_used: 479232
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c8000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1210319 data_alloc: 218103808 data_used: 479232
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c8000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c8000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1210319 data_alloc: 218103808 data_used: 479232
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c8000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c8000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c8000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1210319 data_alloc: 218103808 data_used: 479232
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85041152 unmapped: 18513920 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85041152 unmapped: 18513920 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c8000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85041152 unmapped: 18513920 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85049344 unmapped: 18505728 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1210319 data_alloc: 218103808 data_used: 479232
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85049344 unmapped: 18505728 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c8000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85049344 unmapped: 18505728 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85049344 unmapped: 18505728 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85049344 unmapped: 18505728 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85049344 unmapped: 18505728 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1210319 data_alloc: 218103808 data_used: 479232
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85049344 unmapped: 18505728 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85049344 unmapped: 18505728 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c8000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85049344 unmapped: 18505728 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85049344 unmapped: 18505728 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85049344 unmapped: 18505728 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1210319 data_alloc: 218103808 data_used: 479232
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c8000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85049344 unmapped: 18505728 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c8000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85049344 unmapped: 18505728 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85049344 unmapped: 18505728 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85049344 unmapped: 18505728 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85049344 unmapped: 18505728 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1210319 data_alloc: 218103808 data_used: 479232
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c8000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85049344 unmapped: 18505728 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85049344 unmapped: 18505728 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c8000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85049344 unmapped: 18505728 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85049344 unmapped: 18505728 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c8000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85049344 unmapped: 18505728 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1210319 data_alloc: 218103808 data_used: 479232
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85049344 unmapped: 18505728 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85049344 unmapped: 18505728 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c8000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85049344 unmapped: 18505728 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85049344 unmapped: 18505728 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c8000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85049344 unmapped: 18505728 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1210319 data_alloc: 218103808 data_used: 479232
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85049344 unmapped: 18505728 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c8000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85049344 unmapped: 18505728 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c8000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85049344 unmapped: 18505728 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85049344 unmapped: 18505728 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c8000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85049344 unmapped: 18505728 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1210319 data_alloc: 218103808 data_used: 479232
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85049344 unmapped: 18505728 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85049344 unmapped: 18505728 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85049344 unmapped: 18505728 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85049344 unmapped: 18505728 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85049344 unmapped: 18505728 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1210319 data_alloc: 218103808 data_used: 479232
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c8000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85049344 unmapped: 18505728 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c8000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85049344 unmapped: 18505728 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85049344 unmapped: 18505728 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85049344 unmapped: 18505728 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85049344 unmapped: 18505728 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1210319 data_alloc: 218103808 data_used: 479232
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c8000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85049344 unmapped: 18505728 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85049344 unmapped: 18505728 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85049344 unmapped: 18505728 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85049344 unmapped: 18505728 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c8000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85049344 unmapped: 18505728 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1210319 data_alloc: 218103808 data_used: 479232
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85049344 unmapped: 18505728 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85049344 unmapped: 18505728 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c8000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85049344 unmapped: 18505728 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85049344 unmapped: 18505728 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c8000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85049344 unmapped: 18505728 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1210319 data_alloc: 218103808 data_used: 479232
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85049344 unmapped: 18505728 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85049344 unmapped: 18505728 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c8000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85049344 unmapped: 18505728 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85049344 unmapped: 18505728 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85049344 unmapped: 18505728 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1210319 data_alloc: 218103808 data_used: 479232
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85049344 unmapped: 18505728 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c8000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85049344 unmapped: 18505728 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85049344 unmapped: 18505728 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85049344 unmapped: 18505728 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85049344 unmapped: 18505728 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1210319 data_alloc: 218103808 data_used: 479232
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c8000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85049344 unmapped: 18505728 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85049344 unmapped: 18505728 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85049344 unmapped: 18505728 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c8000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85049344 unmapped: 18505728 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85049344 unmapped: 18505728 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1210319 data_alloc: 218103808 data_used: 479232
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85049344 unmapped: 18505728 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c8000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85049344 unmapped: 18505728 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85049344 unmapped: 18505728 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 3000.1 total, 600.0 interval
Cumulative writes: 9156 writes, 34K keys, 9156 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
Cumulative WAL: 9156 writes, 2284 syncs, 4.01 writes per sync, written: 0.02 GB, 0.01 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 1205 writes, 3436 keys, 1205 commit groups, 1.0 writes per commit group, ingest: 1.86 MB, 0.00 MB/s
Interval WAL: 1205 writes, 535 syncs, 2.25 writes per sync, written: 0.00 GB, 0.00 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85049344 unmapped: 18505728 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85049344 unmapped: 18505728 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1210319 data_alloc: 218103808 data_used: 479232
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85049344 unmapped: 18505728 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85049344 unmapped: 18505728 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c8000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85049344 unmapped: 18505728 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85049344 unmapped: 18505728 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85049344 unmapped: 18505728 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1210319 data_alloc: 218103808 data_used: 479232
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85049344 unmapped: 18505728 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c8000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85049344 unmapped: 18505728 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85049344 unmapped: 18505728 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85049344 unmapped: 18505728 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c8000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85049344 unmapped: 18505728 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1210319 data_alloc: 218103808 data_used: 479232
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85049344 unmapped: 18505728 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85049344 unmapped: 18505728 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c8000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85049344 unmapped: 18505728 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c8000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85049344 unmapped: 18505728 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85049344 unmapped: 18505728 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1210319 data_alloc: 218103808 data_used: 479232
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85049344 unmapped: 18505728 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85049344 unmapped: 18505728 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85049344 unmapped: 18505728 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c8000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85049344 unmapped: 18505728 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85049344 unmapped: 18505728 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1210319 data_alloc: 218103808 data_used: 479232
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85057536 unmapped: 18497536 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c8000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c8000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85057536 unmapped: 18497536 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c8000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85057536 unmapped: 18497536 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85057536 unmapped: 18497536 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c8000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c8000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85057536 unmapped: 18497536 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1210319 data_alloc: 218103808 data_used: 479232
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85057536 unmapped: 18497536 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85057536 unmapped: 18497536 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85057536 unmapped: 18497536 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85057536 unmapped: 18497536 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85057536 unmapped: 18497536 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1210319 data_alloc: 218103808 data_used: 479232
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c8000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85057536 unmapped: 18497536 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85057536 unmapped: 18497536 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85057536 unmapped: 18497536 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85057536 unmapped: 18497536 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c8000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85057536 unmapped: 18497536 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1210319 data_alloc: 218103808 data_used: 479232
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85057536 unmapped: 18497536 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85057536 unmapped: 18497536 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85057536 unmapped: 18497536 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85057536 unmapped: 18497536 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c8000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85057536 unmapped: 18497536 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1210319 data_alloc: 218103808 data_used: 479232
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85057536 unmapped: 18497536 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c8000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85057536 unmapped: 18497536 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85057536 unmapped: 18497536 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85057536 unmapped: 18497536 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c8000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85057536 unmapped: 18497536 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1210319 data_alloc: 218103808 data_used: 479232
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c8000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c8000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85057536 unmapped: 18497536 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85057536 unmapped: 18497536 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c8000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85057536 unmapped: 18497536 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85057536 unmapped: 18497536 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c8000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85057536 unmapped: 18497536 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1210319 data_alloc: 218103808 data_used: 479232
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85057536 unmapped: 18497536 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85057536 unmapped: 18497536 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85057536 unmapped: 18497536 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85057536 unmapped: 18497536 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85057536 unmapped: 18497536 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1210319 data_alloc: 218103808 data_used: 479232
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c8000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85057536 unmapped: 18497536 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85057536 unmapped: 18497536 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85057536 unmapped: 18497536 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85057536 unmapped: 18497536 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85057536 unmapped: 18497536 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1210319 data_alloc: 218103808 data_used: 479232
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c8000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85057536 unmapped: 18497536 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85057536 unmapped: 18497536 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85057536 unmapped: 18497536 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 346.045837402s of 346.194610596s, submitted: 63
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85057536 unmapped: 18497536 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85164032 unmapped: 18391040 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1209439 data_alloc: 218103808 data_used: 479232
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c9000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,0,0,1])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85180416 unmapped: 18374656 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85180416 unmapped: 18374656 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c9000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85180416 unmapped: 18374656 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85180416 unmapped: 18374656 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85180416 unmapped: 18374656 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1209439 data_alloc: 218103808 data_used: 479232
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85180416 unmapped: 18374656 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85180416 unmapped: 18374656 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85180416 unmapped: 18374656 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c9000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85180416 unmapped: 18374656 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85180416 unmapped: 18374656 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1209439 data_alloc: 218103808 data_used: 479232
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c9000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85180416 unmapped: 18374656 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85180416 unmapped: 18374656 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85180416 unmapped: 18374656 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85180416 unmapped: 18374656 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85180416 unmapped: 18374656 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1209439 data_alloc: 218103808 data_used: 479232
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c9000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85180416 unmapped: 18374656 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85180416 unmapped: 18374656 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85180416 unmapped: 18374656 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85180416 unmapped: 18374656 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85180416 unmapped: 18374656 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1209439 data_alloc: 218103808 data_used: 479232
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c9000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85180416 unmapped: 18374656 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c9000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85180416 unmapped: 18374656 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85180416 unmapped: 18374656 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c9000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85180416 unmapped: 18374656 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85180416 unmapped: 18374656 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1209439 data_alloc: 218103808 data_used: 479232
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c9000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85180416 unmapped: 18374656 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85180416 unmapped: 18374656 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c9000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85180416 unmapped: 18374656 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85180416 unmapped: 18374656 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85180416 unmapped: 18374656 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1209439 data_alloc: 218103808 data_used: 479232
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85180416 unmapped: 18374656 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85180416 unmapped: 18374656 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c9000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85180416 unmapped: 18374656 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85180416 unmapped: 18374656 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85180416 unmapped: 18374656 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1209439 data_alloc: 218103808 data_used: 479232
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85180416 unmapped: 18374656 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85180416 unmapped: 18374656 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85180416 unmapped: 18374656 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c9000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85180416 unmapped: 18374656 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85180416 unmapped: 18374656 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1209439 data_alloc: 218103808 data_used: 479232
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c9000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85180416 unmapped: 18374656 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85180416 unmapped: 18374656 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c9000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85180416 unmapped: 18374656 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85180416 unmapped: 18374656 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85180416 unmapped: 18374656 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1209439 data_alloc: 218103808 data_used: 479232
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85180416 unmapped: 18374656 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c9000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85180416 unmapped: 18374656 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85180416 unmapped: 18374656 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c9000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85180416 unmapped: 18374656 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85180416 unmapped: 18374656 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1209439 data_alloc: 218103808 data_used: 479232
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85180416 unmapped: 18374656 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85180416 unmapped: 18374656 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85180416 unmapped: 18374656 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c9000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85180416 unmapped: 18374656 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85180416 unmapped: 18374656 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c9000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1209439 data_alloc: 218103808 data_used: 479232
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85180416 unmapped: 18374656 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c9000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85180416 unmapped: 18374656 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85180416 unmapped: 18374656 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85180416 unmapped: 18374656 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85180416 unmapped: 18374656 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1209439 data_alloc: 218103808 data_used: 479232
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85180416 unmapped: 18374656 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85180416 unmapped: 18374656 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c9000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85180416 unmapped: 18374656 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85180416 unmapped: 18374656 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85180416 unmapped: 18374656 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1209439 data_alloc: 218103808 data_used: 479232
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85180416 unmapped: 18374656 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c9000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85180416 unmapped: 18374656 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85180416 unmapped: 18374656 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c9000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85180416 unmapped: 18374656 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85180416 unmapped: 18374656 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1209439 data_alloc: 218103808 data_used: 479232
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c9000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85180416 unmapped: 18374656 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85180416 unmapped: 18374656 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85180416 unmapped: 18374656 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c9000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85180416 unmapped: 18374656 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1209439 data_alloc: 218103808 data_used: 479232
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85180416 unmapped: 18374656 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85180416 unmapped: 18374656 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85180416 unmapped: 18374656 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85180416 unmapped: 18374656 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85180416 unmapped: 18374656 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c9000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1209439 data_alloc: 218103808 data_used: 479232
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85180416 unmapped: 18374656 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85180416 unmapped: 18374656 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c9000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85180416 unmapped: 18374656 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85180416 unmapped: 18374656 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85180416 unmapped: 18374656 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1209439 data_alloc: 218103808 data_used: 479232
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85180416 unmapped: 18374656 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85180416 unmapped: 18374656 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c9000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85180416 unmapped: 18374656 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85180416 unmapped: 18374656 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85180416 unmapped: 18374656 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1209439 data_alloc: 218103808 data_used: 479232
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85180416 unmapped: 18374656 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c9000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85180416 unmapped: 18374656 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85180416 unmapped: 18374656 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c9000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85180416 unmapped: 18374656 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85180416 unmapped: 18374656 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1209439 data_alloc: 218103808 data_used: 479232
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85180416 unmapped: 18374656 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c9000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85180416 unmapped: 18374656 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85180416 unmapped: 18374656 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c9000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85180416 unmapped: 18374656 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85180416 unmapped: 18374656 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1209439 data_alloc: 218103808 data_used: 479232
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85180416 unmapped: 18374656 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c9000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c9000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1209439 data_alloc: 218103808 data_used: 479232
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1209439 data_alloc: 218103808 data_used: 479232
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c9000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c9000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1209439 data_alloc: 218103808 data_used: 479232
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c9000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1209439 data_alloc: 218103808 data_used: 479232
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c9000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1209439 data_alloc: 218103808 data_used: 479232
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c9000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c9000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1209439 data_alloc: 218103808 data_used: 479232
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c9000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c9000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1209439 data_alloc: 218103808 data_used: 479232
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c9000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1209439 data_alloc: 218103808 data_used: 479232
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c9000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c9000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1209439 data_alloc: 218103808 data_used: 479232
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c9000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1209439 data_alloc: 218103808 data_used: 479232
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c9000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1209439 data_alloc: 218103808 data_used: 479232
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c9000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c9000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c9000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1209439 data_alloc: 218103808 data_used: 479232
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c9000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1209439 data_alloc: 218103808 data_used: 479232
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c9000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1209439 data_alloc: 218103808 data_used: 479232
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c9000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c9000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c9000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1209439 data_alloc: 218103808 data_used: 479232
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c9000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c9000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1209439 data_alloc: 218103808 data_used: 479232
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c9000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1209439 data_alloc: 218103808 data_used: 479232
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c9000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1209439 data_alloc: 218103808 data_used: 479232
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c9000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c9000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1209439 data_alloc: 218103808 data_used: 479232
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c9000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1209439 data_alloc: 218103808 data_used: 479232
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c9000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1209439 data_alloc: 218103808 data_used: 479232
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c9000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1209439 data_alloc: 218103808 data_used: 479232
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c9000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c9000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c9000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1209439 data_alloc: 218103808 data_used: 479232
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c9000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1209439 data_alloc: 218103808 data_used: 479232
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c9000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c9000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1209439 data_alloc: 218103808 data_used: 479232
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1209439 data_alloc: 218103808 data_used: 479232
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c9000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c9000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1209439 data_alloc: 218103808 data_used: 479232
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c9000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c9000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1209439 data_alloc: 218103808 data_used: 479232
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c9000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c9000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1209439 data_alloc: 218103808 data_used: 479232
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c9000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c9000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c9000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1209439 data_alloc: 218103808 data_used: 479232
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1209439 data_alloc: 218103808 data_used: 479232
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c9000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c9000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1209439 data_alloc: 218103808 data_used: 479232
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c9000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 18358272 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 18358272 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1209439 data_alloc: 218103808 data_used: 479232
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 18358272 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c9000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 18358272 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 18358272 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 18358272 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 18358272 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1209439 data_alloc: 218103808 data_used: 479232
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 18358272 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 18358272 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c9000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 18358272 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 18358272 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 18358272 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1209439 data_alloc: 218103808 data_used: 479232
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 18358272 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c9000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 18358272 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 18358272 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 18358272 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c9000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 18358272 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1209439 data_alloc: 218103808 data_used: 479232
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 18358272 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 18358272 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c9000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 18358272 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 18358272 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 18358272 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1209439 data_alloc: 218103808 data_used: 479232
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 18358272 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 18358272 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c9000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 18358272 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c9000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c9000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 18358272 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 18358272 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1209439 data_alloc: 218103808 data_used: 479232
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 18358272 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 18358272 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c9000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 18358272 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 18358272 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 18358272 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1209439 data_alloc: 218103808 data_used: 479232
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 18358272 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c9000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 18358272 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 18358272 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c9000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 18350080 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 18350080 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c9000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1209439 data_alloc: 218103808 data_used: 479232
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 18350080 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c9000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 18350080 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 18350080 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c9000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 18350080 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 18350080 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c9000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1209439 data_alloc: 218103808 data_used: 479232
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 18350080 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 18350080 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 18350080 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c9000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 18350080 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 18350080 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1209439 data_alloc: 218103808 data_used: 479232
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 18350080 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 18350080 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c9000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 18350080 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 18350080 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 18350080 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c9000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1209439 data_alloc: 218103808 data_used: 479232
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 18350080 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 18350080 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 18350080 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 18350080 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 18350080 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1209439 data_alloc: 218103808 data_used: 479232
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 18350080 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c9000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 18350080 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 18350080 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 18350080 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 18350080 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1209439 data_alloc: 218103808 data_used: 479232
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 18350080 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c9000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 18350080 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 18350080 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c9000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 18350080 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 18350080 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c9000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1209439 data_alloc: 218103808 data_used: 479232
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 18350080 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 18350080 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 18350080 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 18350080 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c9000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 18350080 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1209439 data_alloc: 218103808 data_used: 479232
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 18350080 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 18350080 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c9000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 18350080 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 18350080 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 18350080 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1209439 data_alloc: 218103808 data_used: 479232
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 18350080 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c9000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 18350080 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 18350080 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 18350080 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 18350080 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1209439 data_alloc: 218103808 data_used: 479232
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 18350080 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 18350080 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c9000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 18350080 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 18350080 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c9000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 18350080 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1209439 data_alloc: 218103808 data_used: 479232
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 18350080 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 18350080 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c9000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 18350080 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 18350080 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c9000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 18350080 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1209439 data_alloc: 218103808 data_used: 479232
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 18350080 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 18350080 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 18350080 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 18350080 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c9000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 18350080 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c9000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1209439 data_alloc: 218103808 data_used: 479232
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 18350080 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c9000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 18350080 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 18350080 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c9000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 18350080 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 18350080 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1209439 data_alloc: 218103808 data_used: 479232
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 18350080 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 18350080 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c9000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 18350080 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c9000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c9000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 18350080 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 18350080 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1209439 data_alloc: 218103808 data_used: 479232
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 18350080 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 18350080 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 18350080 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 18350080 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c9000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 18350080 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1209439 data_alloc: 218103808 data_used: 479232
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 18350080 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 18350080 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 18350080 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 18350080 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 18350080 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c9000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1209439 data_alloc: 218103808 data_used: 479232
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 18350080 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 18350080 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 18350080 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 18350080 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 18350080 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c9000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1209439 data_alloc: 218103808 data_used: 479232
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 18350080 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 18350080 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 18350080 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 18350080 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 18350080 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1209439 data_alloc: 218103808 data_used: 479232
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 18350080 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c9000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 18350080 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 18350080 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 18350080 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 18350080 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1209439 data_alloc: 218103808 data_used: 479232
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 18350080 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c9000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 18350080 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 18350080 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 18350080 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 18350080 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c9000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1209439 data_alloc: 218103808 data_used: 479232
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 18350080 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c9000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 18350080 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 18350080 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 18350080 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c9000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 18350080 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1209439 data_alloc: 218103808 data_used: 479232
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 18350080 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 18350080 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 18350080 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 18350080 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c9000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 18350080 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1209439 data_alloc: 218103808 data_used: 479232
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 18350080 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 18350080 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c9000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 18350080 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 18350080 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 18350080 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1209439 data_alloc: 218103808 data_used: 479232
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 18350080 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 18350080 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 18350080 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c9000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 18350080 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 18350080 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1209439 data_alloc: 218103808 data_used: 479232
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 18350080 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 18350080 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 18350080 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c9000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 18350080 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 18350080 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 427.017700195s of 427.785858154s, submitted: 90
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1209109 data_alloc: 218103808 data_used: 483328
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 18350080 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 186 handle_osd_map epochs [187,187], i have 186, src has [1,187]
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 187 ms_handle_reset con 0x55f3e051c800 session 0x55f3e07c45a0
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85254144 unmapped: 18300928 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 187 heartbeat osd_stat(store_statfs(0x4fa3c8000/0x0/0x4ffc00000, data 0x11962b2/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85254144 unmapped: 18300928 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85254144 unmapped: 18300928 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85254144 unmapped: 18300928 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 187 heartbeat osd_stat(store_statfs(0x4fa3c8000/0x0/0x4ffc00000, data 0x11962b2/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1210886 data_alloc: 218103808 data_used: 487424
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85254144 unmapped: 18300928 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 187 heartbeat osd_stat(store_statfs(0x4fa3c8000/0x0/0x4ffc00000, data 0x11962b2/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85254144 unmapped: 18300928 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85254144 unmapped: 18300928 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85254144 unmapped: 18300928 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 187 handle_osd_map epochs [187,188], i have 187, src has [1,188]
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 188 heartbeat osd_stat(store_statfs(0x4fa3c8000/0x0/0x4ffc00000, data 0x11962b2/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 18292736 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1213860 data_alloc: 218103808 data_used: 487424
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 18292736 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 18292736 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 18292736 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 18292736 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 188 heartbeat osd_stat(store_statfs(0x4fa3c5000/0x0/0x4ffc00000, data 0x1197d15/0x12a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 18292736 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1213860 data_alloc: 218103808 data_used: 487424
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 18292736 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 18292736 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 18292736 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 18292736 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 18292736 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 188 heartbeat osd_stat(store_statfs(0x4fa3c5000/0x0/0x4ffc00000, data 0x1197d15/0x12a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1213860 data_alloc: 218103808 data_used: 487424
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 18292736 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 188 heartbeat osd_stat(store_statfs(0x4fa3c5000/0x0/0x4ffc00000, data 0x1197d15/0x12a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 18292736 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 18292736 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 18292736 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 18292736 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1213860 data_alloc: 218103808 data_used: 487424
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 18292736 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 188 heartbeat osd_stat(store_statfs(0x4fa3c5000/0x0/0x4ffc00000, data 0x1197d15/0x12a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 18292736 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 18292736 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 18292736 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 188 heartbeat osd_stat(store_statfs(0x4fa3c5000/0x0/0x4ffc00000, data 0x1197d15/0x12a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 18292736 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1213860 data_alloc: 218103808 data_used: 487424
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 18292736 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 18292736 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 18292736 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 188 heartbeat osd_stat(store_statfs(0x4fa3c5000/0x0/0x4ffc00000, data 0x1197d15/0x12a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 18292736 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 188 heartbeat osd_stat(store_statfs(0x4fa3c5000/0x0/0x4ffc00000, data 0x1197d15/0x12a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 18292736 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1213860 data_alloc: 218103808 data_used: 487424
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 18292736 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 18292736 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 188 heartbeat osd_stat(store_statfs(0x4fa3c5000/0x0/0x4ffc00000, data 0x1197d15/0x12a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 18292736 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 188 heartbeat osd_stat(store_statfs(0x4fa3c5000/0x0/0x4ffc00000, data 0x1197d15/0x12a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 18292736 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 18292736 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 188 heartbeat osd_stat(store_statfs(0x4fa3c5000/0x0/0x4ffc00000, data 0x1197d15/0x12a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 188 heartbeat osd_stat(store_statfs(0x4fa3c5000/0x0/0x4ffc00000, data 0x1197d15/0x12a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1213860 data_alloc: 218103808 data_used: 487424
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 18292736 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 18292736 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 18292736 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 18292736 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 188 heartbeat osd_stat(store_statfs(0x4fa3c5000/0x0/0x4ffc00000, data 0x1197d15/0x12a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 18292736 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 188 heartbeat osd_stat(store_statfs(0x4fa3c5000/0x0/0x4ffc00000, data 0x1197d15/0x12a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1213860 data_alloc: 218103808 data_used: 487424
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 18292736 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 18292736 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 18292736 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 18292736 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 188 heartbeat osd_stat(store_statfs(0x4fa3c5000/0x0/0x4ffc00000, data 0x1197d15/0x12a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 18292736 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1213860 data_alloc: 218103808 data_used: 487424
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 18292736 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 18292736 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 18292736 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 18292736 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 188 heartbeat osd_stat(store_statfs(0x4fa3c5000/0x0/0x4ffc00000, data 0x1197d15/0x12a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 18292736 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1213860 data_alloc: 218103808 data_used: 487424
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 18292736 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 18292736 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 18292736 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 188 heartbeat osd_stat(store_statfs(0x4fa3c5000/0x0/0x4ffc00000, data 0x1197d15/0x12a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 18292736 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 18292736 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1213860 data_alloc: 218103808 data_used: 487424
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 18292736 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 18292736 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 188 heartbeat osd_stat(store_statfs(0x4fa3c5000/0x0/0x4ffc00000, data 0x1197d15/0x12a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 18292736 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 18292736 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 188 heartbeat osd_stat(store_statfs(0x4fa3c5000/0x0/0x4ffc00000, data 0x1197d15/0x12a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 18292736 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1213860 data_alloc: 218103808 data_used: 487424
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 18292736 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 18292736 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 18292736 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 18292736 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 18292736 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 188 heartbeat osd_stat(store_statfs(0x4fa3c5000/0x0/0x4ffc00000, data 0x1197d15/0x12a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1213860 data_alloc: 218103808 data_used: 487424
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 18292736 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 188 heartbeat osd_stat(store_statfs(0x4fa3c5000/0x0/0x4ffc00000, data 0x1197d15/0x12a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 18292736 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 18292736 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 188 heartbeat osd_stat(store_statfs(0x4fa3c5000/0x0/0x4ffc00000, data 0x1197d15/0x12a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 18292736 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 188 heartbeat osd_stat(store_statfs(0x4fa3c5000/0x0/0x4ffc00000, data 0x1197d15/0x12a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 18292736 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1213860 data_alloc: 218103808 data_used: 487424
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 18292736 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 18292736 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 188 heartbeat osd_stat(store_statfs(0x4fa3c5000/0x0/0x4ffc00000, data 0x1197d15/0x12a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 18292736 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 18292736 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 18292736 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1213860 data_alloc: 218103808 data_used: 487424
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 18292736 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 188 heartbeat osd_stat(store_statfs(0x4fa3c5000/0x0/0x4ffc00000, data 0x1197d15/0x12a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 18292736 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 18292736 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 18292736 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 18292736 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 188 heartbeat osd_stat(store_statfs(0x4fa3c5000/0x0/0x4ffc00000, data 0x1197d15/0x12a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1213860 data_alloc: 218103808 data_used: 487424
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 18292736 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 18292736 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 188 heartbeat osd_stat(store_statfs(0x4fa3c5000/0x0/0x4ffc00000, data 0x1197d15/0x12a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 18292736 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 18292736 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 188 heartbeat osd_stat(store_statfs(0x4fa3c5000/0x0/0x4ffc00000, data 0x1197d15/0x12a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 18292736 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1213860 data_alloc: 218103808 data_used: 487424
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 18292736 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 18292736 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 18292736 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 18292736 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 188 heartbeat osd_stat(store_statfs(0x4fa3c5000/0x0/0x4ffc00000, data 0x1197d15/0x12a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 18292736 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 188 heartbeat osd_stat(store_statfs(0x4fa3c5000/0x0/0x4ffc00000, data 0x1197d15/0x12a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 188 heartbeat osd_stat(store_statfs(0x4fa3c5000/0x0/0x4ffc00000, data 0x1197d15/0x12a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1213860 data_alloc: 218103808 data_used: 487424
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 18292736 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 188 heartbeat osd_stat(store_statfs(0x4fa3c5000/0x0/0x4ffc00000, data 0x1197d15/0x12a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 18292736 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 18292736 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 18292736 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 18292736 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1213860 data_alloc: 218103808 data_used: 487424
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 18292736 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 18292736 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 188 heartbeat osd_stat(store_statfs(0x4fa3c5000/0x0/0x4ffc00000, data 0x1197d15/0x12a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 18292736 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 18292736 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 18292736 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1213860 data_alloc: 218103808 data_used: 487424
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 18292736 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 3600.1 total, 600.0 interval#012Cumulative writes: 9426 writes, 34K keys, 9426 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s#012Cumulative WAL: 9426 writes, 2411 syncs, 3.91 writes per sync, written: 0.02 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 270 writes, 502 keys, 270 commit groups, 1.0 writes per commit group, ingest: 0.19 MB, 0.00 MB/s#012Interval WAL: 270 writes, 127 syncs, 2.13 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 18292736 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 188 heartbeat osd_stat(store_statfs(0x4fa3c5000/0x0/0x4ffc00000, data 0x1197d15/0x12a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 18292736 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 18292736 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 18292736 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1213860 data_alloc: 218103808 data_used: 487424
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 18292736 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 18292736 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 188 heartbeat osd_stat(store_statfs(0x4fa3c5000/0x0/0x4ffc00000, data 0x1197d15/0x12a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 188 ms_handle_reset con 0x55f3e051d000 session 0x55f3e07c52c0
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 188 ms_handle_reset con 0x55f3e051d800 session 0x55f3e08372c0
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 188 ms_handle_reset con 0x55f3e0538400 session 0x55f3e04ac1e0
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 100073472 unmapped: 3481600 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 188 ms_handle_reset con 0x55f3e066d400 session 0x55f3e0522000
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 188 ms_handle_reset con 0x55f3e051cc00 session 0x55f3e0837c20
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 100073472 unmapped: 3481600 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 100073472 unmapped: 3481600 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1254980 data_alloc: 234881024 data_used: 14123008
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 100073472 unmapped: 3481600 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 100073472 unmapped: 3481600 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 188 heartbeat osd_stat(store_statfs(0x4fa3c5000/0x0/0x4ffc00000, data 0x1197d15/0x12a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 100073472 unmapped: 3481600 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 188 heartbeat osd_stat(store_statfs(0x4fa3c5000/0x0/0x4ffc00000, data 0x1197d15/0x12a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 100073472 unmapped: 3481600 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 100073472 unmapped: 3481600 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 120.243713379s of 120.506912231s, submitted: 53
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1254106 data_alloc: 234881024 data_used: 14118912
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 100073472 unmapped: 3481600 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 188 handle_osd_map epochs [188,189], i have 188, src has [1,189]
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 189 ms_handle_reset con 0x55f3e051d000 session 0x55f3e08363c0
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88514560 unmapped: 15040512 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88514560 unmapped: 15040512 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 189 heartbeat osd_stat(store_statfs(0x4fb3c3000/0x0/0x4ffc00000, data 0x1998c3/0x2aa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88514560 unmapped: 15040512 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 189 handle_osd_map epochs [190,190], i have 189, src has [1,190]
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 190 heartbeat osd_stat(store_statfs(0x4fb3c0000/0x0/0x4ffc00000, data 0x19b494/0x2ad000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 190 ms_handle_reset con 0x55f3e0538800 session 0x55f3dd86d860
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88645632 unmapped: 14909440 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1110437 data_alloc: 218103808 data_used: 495616
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88645632 unmapped: 14909440 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88645632 unmapped: 14909440 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88645632 unmapped: 14909440 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 190 heartbeat osd_stat(store_statfs(0x4fb3c1000/0x0/0x4ffc00000, data 0x19b461/0x2ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88645632 unmapped: 14909440 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88645632 unmapped: 14909440 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1110437 data_alloc: 218103808 data_used: 495616
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88645632 unmapped: 14909440 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 190 handle_osd_map epochs [191,191], i have 190, src has [1,191]
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.687598228s of 10.906754494s, submitted: 69
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88645632 unmapped: 14909440 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88645632 unmapped: 14909440 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 191 handle_osd_map epochs [191,192], i have 191, src has [1,192]
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88645632 unmapped: 14909440 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 192 heartbeat osd_stat(store_statfs(0x4fb3bf000/0x0/0x4ffc00000, data 0x19cee0/0x2ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88645632 unmapped: 14909440 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1116193 data_alloc: 218103808 data_used: 495616
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88645632 unmapped: 14909440 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 192 handle_osd_map epochs [193,193], i have 192, src has [1,193]
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 193 ms_handle_reset con 0x55f3e0538c00 session 0x55f3e011a3c0
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88645632 unmapped: 14909440 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 193 heartbeat osd_stat(store_statfs(0x4fb3b9000/0x0/0x4ffc00000, data 0x1a04c0/0x2b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88645632 unmapped: 14909440 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88645632 unmapped: 14909440 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88645632 unmapped: 14909440 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1119167 data_alloc: 218103808 data_used: 495616
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88645632 unmapped: 14909440 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 193 heartbeat osd_stat(store_statfs(0x4fb3b9000/0x0/0x4ffc00000, data 0x1a04c0/0x2b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88645632 unmapped: 14909440 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88645632 unmapped: 14909440 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 193 heartbeat osd_stat(store_statfs(0x4fb3b9000/0x0/0x4ffc00000, data 0x1a04c0/0x2b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88645632 unmapped: 14909440 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88645632 unmapped: 14909440 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1119167 data_alloc: 218103808 data_used: 495616
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88645632 unmapped: 14909440 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88645632 unmapped: 14909440 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88645632 unmapped: 14909440 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88645632 unmapped: 14909440 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 193 heartbeat osd_stat(store_statfs(0x4fb3b9000/0x0/0x4ffc00000, data 0x1a04c0/0x2b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 193 heartbeat osd_stat(store_statfs(0x4fb3b9000/0x0/0x4ffc00000, data 0x1a04c0/0x2b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88645632 unmapped: 14909440 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1119167 data_alloc: 218103808 data_used: 495616
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88645632 unmapped: 14909440 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88645632 unmapped: 14909440 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88645632 unmapped: 14909440 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 193 heartbeat osd_stat(store_statfs(0x4fb3b9000/0x0/0x4ffc00000, data 0x1a04c0/0x2b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88645632 unmapped: 14909440 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88645632 unmapped: 14909440 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1119167 data_alloc: 218103808 data_used: 495616
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88645632 unmapped: 14909440 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 193 heartbeat osd_stat(store_statfs(0x4fb3b9000/0x0/0x4ffc00000, data 0x1a04c0/0x2b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 193 heartbeat osd_stat(store_statfs(0x4fb3b9000/0x0/0x4ffc00000, data 0x1a04c0/0x2b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88645632 unmapped: 14909440 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88645632 unmapped: 14909440 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88645632 unmapped: 14909440 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88645632 unmapped: 14909440 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1119167 data_alloc: 218103808 data_used: 495616
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88645632 unmapped: 14909440 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 193 heartbeat osd_stat(store_statfs(0x4fb3b9000/0x0/0x4ffc00000, data 0x1a04c0/0x2b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88645632 unmapped: 14909440 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88645632 unmapped: 14909440 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88645632 unmapped: 14909440 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 193 heartbeat osd_stat(store_statfs(0x4fb3b9000/0x0/0x4ffc00000, data 0x1a04c0/0x2b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88645632 unmapped: 14909440 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1119167 data_alloc: 218103808 data_used: 495616
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88645632 unmapped: 14909440 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88645632 unmapped: 14909440 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88645632 unmapped: 14909440 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 193 heartbeat osd_stat(store_statfs(0x4fb3b9000/0x0/0x4ffc00000, data 0x1a04c0/0x2b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88645632 unmapped: 14909440 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88645632 unmapped: 14909440 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1119167 data_alloc: 218103808 data_used: 495616
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88645632 unmapped: 14909440 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 39.812831879s of 39.855922699s, submitted: 37
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88662016 unmapped: 14893056 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88727552 unmapped: 14827520 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 193 heartbeat osd_stat(store_statfs(0x4fb3ba000/0x0/0x4ffc00000, data 0x1a04c0/0x2b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88793088 unmapped: 14761984 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88793088 unmapped: 14761984 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1118607 data_alloc: 218103808 data_used: 503808
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88793088 unmapped: 14761984 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 193 heartbeat osd_stat(store_statfs(0x4fb3ba000/0x0/0x4ffc00000, data 0x1a04c0/0x2b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88793088 unmapped: 14761984 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 193 heartbeat osd_stat(store_statfs(0x4fb3ba000/0x0/0x4ffc00000, data 0x1a04c0/0x2b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88793088 unmapped: 14761984 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88793088 unmapped: 14761984 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88793088 unmapped: 14761984 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1118607 data_alloc: 218103808 data_used: 503808
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88793088 unmapped: 14761984 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88793088 unmapped: 14761984 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 193 heartbeat osd_stat(store_statfs(0x4fb3ba000/0x0/0x4ffc00000, data 0x1a04c0/0x2b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88793088 unmapped: 14761984 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88793088 unmapped: 14761984 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88793088 unmapped: 14761984 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 193 heartbeat osd_stat(store_statfs(0x4fb3ba000/0x0/0x4ffc00000, data 0x1a04c0/0x2b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1118607 data_alloc: 218103808 data_used: 503808
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88793088 unmapped: 14761984 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 193 heartbeat osd_stat(store_statfs(0x4fb3ba000/0x0/0x4ffc00000, data 0x1a04c0/0x2b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88793088 unmapped: 14761984 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88793088 unmapped: 14761984 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88793088 unmapped: 14761984 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88793088 unmapped: 14761984 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1118607 data_alloc: 218103808 data_used: 503808
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88793088 unmapped: 14761984 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88793088 unmapped: 14761984 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 193 heartbeat osd_stat(store_statfs(0x4fb3ba000/0x0/0x4ffc00000, data 0x1a04c0/0x2b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88793088 unmapped: 14761984 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88793088 unmapped: 14761984 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 193 heartbeat osd_stat(store_statfs(0x4fb3ba000/0x0/0x4ffc00000, data 0x1a04c0/0x2b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88793088 unmapped: 14761984 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1118607 data_alloc: 218103808 data_used: 503808
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88793088 unmapped: 14761984 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88793088 unmapped: 14761984 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88793088 unmapped: 14761984 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88793088 unmapped: 14761984 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88793088 unmapped: 14761984 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 193 heartbeat osd_stat(store_statfs(0x4fb3ba000/0x0/0x4ffc00000, data 0x1a04c0/0x2b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1118607 data_alloc: 218103808 data_used: 503808
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88793088 unmapped: 14761984 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88793088 unmapped: 14761984 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88793088 unmapped: 14761984 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 193 heartbeat osd_stat(store_statfs(0x4fb3ba000/0x0/0x4ffc00000, data 0x1a04c0/0x2b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88793088 unmapped: 14761984 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88793088 unmapped: 14761984 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1118607 data_alloc: 218103808 data_used: 503808
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88793088 unmapped: 14761984 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88793088 unmapped: 14761984 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88793088 unmapped: 14761984 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88793088 unmapped: 14761984 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 193 heartbeat osd_stat(store_statfs(0x4fb3ba000/0x0/0x4ffc00000, data 0x1a04c0/0x2b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88793088 unmapped: 14761984 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 193 heartbeat osd_stat(store_statfs(0x4fb3ba000/0x0/0x4ffc00000, data 0x1a04c0/0x2b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1118607 data_alloc: 218103808 data_used: 503808
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88793088 unmapped: 14761984 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88793088 unmapped: 14761984 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88793088 unmapped: 14761984 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 193 heartbeat osd_stat(store_statfs(0x4fb3ba000/0x0/0x4ffc00000, data 0x1a04c0/0x2b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88793088 unmapped: 14761984 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88793088 unmapped: 14761984 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1118607 data_alloc: 218103808 data_used: 503808
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88793088 unmapped: 14761984 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88793088 unmapped: 14761984 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 193 heartbeat osd_stat(store_statfs(0x4fb3ba000/0x0/0x4ffc00000, data 0x1a04c0/0x2b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88793088 unmapped: 14761984 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 193 heartbeat osd_stat(store_statfs(0x4fb3ba000/0x0/0x4ffc00000, data 0x1a04c0/0x2b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88793088 unmapped: 14761984 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88793088 unmapped: 14761984 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1118607 data_alloc: 218103808 data_used: 503808
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88793088 unmapped: 14761984 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88793088 unmapped: 14761984 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88793088 unmapped: 14761984 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88793088 unmapped: 14761984 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 193 heartbeat osd_stat(store_statfs(0x4fb3ba000/0x0/0x4ffc00000, data 0x1a04c0/0x2b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88793088 unmapped: 14761984 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1118607 data_alloc: 218103808 data_used: 503808
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88793088 unmapped: 14761984 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88793088 unmapped: 14761984 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88793088 unmapped: 14761984 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88793088 unmapped: 14761984 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88793088 unmapped: 14761984 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 193 heartbeat osd_stat(store_statfs(0x4fb3ba000/0x0/0x4ffc00000, data 0x1a04c0/0x2b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1118607 data_alloc: 218103808 data_used: 503808
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88793088 unmapped: 14761984 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88793088 unmapped: 14761984 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88793088 unmapped: 14761984 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88793088 unmapped: 14761984 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 193 heartbeat osd_stat(store_statfs(0x4fb3ba000/0x0/0x4ffc00000, data 0x1a04c0/0x2b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88793088 unmapped: 14761984 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 193 heartbeat osd_stat(store_statfs(0x4fb3ba000/0x0/0x4ffc00000, data 0x1a04c0/0x2b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1118607 data_alloc: 218103808 data_used: 503808
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88793088 unmapped: 14761984 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 193 heartbeat osd_stat(store_statfs(0x4fb3ba000/0x0/0x4ffc00000, data 0x1a04c0/0x2b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88793088 unmapped: 14761984 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88793088 unmapped: 14761984 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88793088 unmapped: 14761984 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 67.956901550s of 68.342582703s, submitted: 110
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88858624 unmapped: 14696448 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 193 handle_osd_map epochs [194,194], i have 193, src has [1,194]
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 194 ms_handle_reset con 0x55f3e0539000 session 0x55f3e0523c20
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 194 heartbeat osd_stat(store_statfs(0x4fb3b9000/0x0/0x4ffc00000, data 0x1a04e3/0x2b5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1131474 data_alloc: 218103808 data_used: 512000
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88399872 unmapped: 31940608 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 194 handle_osd_map epochs [195,195], i have 194, src has [1,195]
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 195 ms_handle_reset con 0x55f3e051d000 session 0x55f3e1f9f4a0
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88399872 unmapped: 31940608 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 195 heartbeat osd_stat(store_statfs(0x4fabb3000/0x0/0x4ffc00000, data 0x9a2093/0xaba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88399872 unmapped: 31940608 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88399872 unmapped: 31940608 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88399872 unmapped: 31940608 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1191560 data_alloc: 218103808 data_used: 512000
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88399872 unmapped: 31940608 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88399872 unmapped: 31940608 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88399872 unmapped: 31940608 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 195 heartbeat osd_stat(store_statfs(0x4fabae000/0x0/0x4ffc00000, data 0x9a3c33/0xabe000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88399872 unmapped: 31940608 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88399872 unmapped: 31940608 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1191560 data_alloc: 218103808 data_used: 512000
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88399872 unmapped: 31940608 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 195 heartbeat osd_stat(store_statfs(0x4fabae000/0x0/0x4ffc00000, data 0x9a3c33/0xabe000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88399872 unmapped: 31940608 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88399872 unmapped: 31940608 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88399872 unmapped: 31940608 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88399872 unmapped: 31940608 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 195 heartbeat osd_stat(store_statfs(0x4fabae000/0x0/0x4ffc00000, data 0x9a3c33/0xabe000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1191560 data_alloc: 218103808 data_used: 512000
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88399872 unmapped: 31940608 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88399872 unmapped: 31940608 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88399872 unmapped: 31940608 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88399872 unmapped: 31940608 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88399872 unmapped: 31940608 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 195 heartbeat osd_stat(store_statfs(0x4fabae000/0x0/0x4ffc00000, data 0x9a3c33/0xabe000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 195 heartbeat osd_stat(store_statfs(0x4fabae000/0x0/0x4ffc00000, data 0x9a3c33/0xabe000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1191560 data_alloc: 218103808 data_used: 512000
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88399872 unmapped: 31940608 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 195 heartbeat osd_stat(store_statfs(0x4fabae000/0x0/0x4ffc00000, data 0x9a3c33/0xabe000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88399872 unmapped: 31940608 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 195 heartbeat osd_stat(store_statfs(0x4fabae000/0x0/0x4ffc00000, data 0x9a3c33/0xabe000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88399872 unmapped: 31940608 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88399872 unmapped: 31940608 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 195 heartbeat osd_stat(store_statfs(0x4fabae000/0x0/0x4ffc00000, data 0x9a3c33/0xabe000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88399872 unmapped: 31940608 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 195 heartbeat osd_stat(store_statfs(0x4fabae000/0x0/0x4ffc00000, data 0x9a3c33/0xabe000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1191560 data_alloc: 218103808 data_used: 512000
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88399872 unmapped: 31940608 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 195 heartbeat osd_stat(store_statfs(0x4fabae000/0x0/0x4ffc00000, data 0x9a3c33/0xabe000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 195 handle_osd_map epochs [195,196], i have 195, src has [1,196]
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 27.289821625s of 27.663368225s, submitted: 38
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88334336 unmapped: 32006144 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88334336 unmapped: 32006144 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 196 ms_handle_reset con 0x55f3e0538400 session 0x55f3e0530780
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88334336 unmapped: 32006144 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88334336 unmapped: 32006144 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 196 heartbeat osd_stat(store_statfs(0x4fabad000/0x0/0x4ffc00000, data 0x9a5804/0xac1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1192230 data_alloc: 218103808 data_used: 512000
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88334336 unmapped: 32006144 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88334336 unmapped: 32006144 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 196 heartbeat osd_stat(store_statfs(0x4fabad000/0x0/0x4ffc00000, data 0x9a5804/0xac1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88334336 unmapped: 32006144 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 196 heartbeat osd_stat(store_statfs(0x4fabad000/0x0/0x4ffc00000, data 0x9a5804/0xac1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 196 handle_osd_map epochs [197,197], i have 196, src has [1,197]
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88342528 unmapped: 31997952 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88342528 unmapped: 31997952 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1196404 data_alloc: 218103808 data_used: 520192
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88342528 unmapped: 31997952 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88342528 unmapped: 31997952 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88342528 unmapped: 31997952 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 197 heartbeat osd_stat(store_statfs(0x4faba9000/0x0/0x4ffc00000, data 0x9a7267/0xac4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88342528 unmapped: 31997952 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88342528 unmapped: 31997952 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1196404 data_alloc: 218103808 data_used: 520192
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88342528 unmapped: 31997952 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 197 heartbeat osd_stat(store_statfs(0x4faba9000/0x0/0x4ffc00000, data 0x9a7267/0xac4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1196404 data_alloc: 218103808 data_used: 520192
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 197 heartbeat osd_stat(store_statfs(0x4faba9000/0x0/0x4ffc00000, data 0x9a7267/0xac4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 197 heartbeat osd_stat(store_statfs(0x4faba9000/0x0/0x4ffc00000, data 0x9a7267/0xac4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1196404 data_alloc: 218103808 data_used: 520192
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 197 heartbeat osd_stat(store_statfs(0x4faba9000/0x0/0x4ffc00000, data 0x9a7267/0xac4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1196404 data_alloc: 218103808 data_used: 520192
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 197 heartbeat osd_stat(store_statfs(0x4faba9000/0x0/0x4ffc00000, data 0x9a7267/0xac4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 197 heartbeat osd_stat(store_statfs(0x4faba9000/0x0/0x4ffc00000, data 0x9a7267/0xac4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1196404 data_alloc: 218103808 data_used: 520192
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 197 heartbeat osd_stat(store_statfs(0x4faba9000/0x0/0x4ffc00000, data 0x9a7267/0xac4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 197 heartbeat osd_stat(store_statfs(0x4faba9000/0x0/0x4ffc00000, data 0x9a7267/0xac4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1196404 data_alloc: 218103808 data_used: 520192
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 197 heartbeat osd_stat(store_statfs(0x4faba9000/0x0/0x4ffc00000, data 0x9a7267/0xac4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1196404 data_alloc: 218103808 data_used: 520192
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 197 heartbeat osd_stat(store_statfs(0x4faba9000/0x0/0x4ffc00000, data 0x9a7267/0xac4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1196404 data_alloc: 218103808 data_used: 520192
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 197 heartbeat osd_stat(store_statfs(0x4faba9000/0x0/0x4ffc00000, data 0x9a7267/0xac4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1196404 data_alloc: 218103808 data_used: 520192
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 197 heartbeat osd_stat(store_statfs(0x4faba9000/0x0/0x4ffc00000, data 0x9a7267/0xac4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 197 heartbeat osd_stat(store_statfs(0x4faba9000/0x0/0x4ffc00000, data 0x9a7267/0xac4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1196404 data_alloc: 218103808 data_used: 520192
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 197 heartbeat osd_stat(store_statfs(0x4faba9000/0x0/0x4ffc00000, data 0x9a7267/0xac4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 197 heartbeat osd_stat(store_statfs(0x4faba9000/0x0/0x4ffc00000, data 0x9a7267/0xac4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 197 heartbeat osd_stat(store_statfs(0x4faba9000/0x0/0x4ffc00000, data 0x9a7267/0xac4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1196404 data_alloc: 218103808 data_used: 520192
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 197 heartbeat osd_stat(store_statfs(0x4faba9000/0x0/0x4ffc00000, data 0x9a7267/0xac4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1196404 data_alloc: 218103808 data_used: 520192
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 197 heartbeat osd_stat(store_statfs(0x4faba9000/0x0/0x4ffc00000, data 0x9a7267/0xac4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 197 heartbeat osd_stat(store_statfs(0x4faba9000/0x0/0x4ffc00000, data 0x9a7267/0xac4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1196404 data_alloc: 218103808 data_used: 520192
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 197 heartbeat osd_stat(store_statfs(0x4faba9000/0x0/0x4ffc00000, data 0x9a7267/0xac4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 197 heartbeat osd_stat(store_statfs(0x4faba9000/0x0/0x4ffc00000, data 0x9a7267/0xac4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1196404 data_alloc: 218103808 data_used: 520192
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 197 heartbeat osd_stat(store_statfs(0x4faba9000/0x0/0x4ffc00000, data 0x9a7267/0xac4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 197 heartbeat osd_stat(store_statfs(0x4faba9000/0x0/0x4ffc00000, data 0x9a7267/0xac4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 197 heartbeat osd_stat(store_statfs(0x4faba9000/0x0/0x4ffc00000, data 0x9a7267/0xac4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1196404 data_alloc: 218103808 data_used: 520192
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1196404 data_alloc: 218103808 data_used: 520192
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 197 heartbeat osd_stat(store_statfs(0x4faba9000/0x0/0x4ffc00000, data 0x9a7267/0xac4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 197 heartbeat osd_stat(store_statfs(0x4faba9000/0x0/0x4ffc00000, data 0x9a7267/0xac4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 197 heartbeat osd_stat(store_statfs(0x4faba9000/0x0/0x4ffc00000, data 0x9a7267/0xac4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1196404 data_alloc: 218103808 data_used: 520192
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 197 heartbeat osd_stat(store_statfs(0x4faba9000/0x0/0x4ffc00000, data 0x9a7267/0xac4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1196404 data_alloc: 218103808 data_used: 520192
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 197 heartbeat osd_stat(store_statfs(0x4faba9000/0x0/0x4ffc00000, data 0x9a7267/0xac4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1196404 data_alloc: 218103808 data_used: 520192
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 197 heartbeat osd_stat(store_statfs(0x4faba9000/0x0/0x4ffc00000, data 0x9a7267/0xac4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1196404 data_alloc: 218103808 data_used: 520192
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 197 heartbeat osd_stat(store_statfs(0x4faba9000/0x0/0x4ffc00000, data 0x9a7267/0xac4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1196404 data_alloc: 218103808 data_used: 520192
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 197 heartbeat osd_stat(store_statfs(0x4faba9000/0x0/0x4ffc00000, data 0x9a7267/0xac4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 197 heartbeat osd_stat(store_statfs(0x4faba9000/0x0/0x4ffc00000, data 0x9a7267/0xac4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 197 heartbeat osd_stat(store_statfs(0x4faba9000/0x0/0x4ffc00000, data 0x9a7267/0xac4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1196404 data_alloc: 218103808 data_used: 520192
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 197 heartbeat osd_stat(store_statfs(0x4faba9000/0x0/0x4ffc00000, data 0x9a7267/0xac4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1196404 data_alloc: 218103808 data_used: 520192
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 197 heartbeat osd_stat(store_statfs(0x4faba9000/0x0/0x4ffc00000, data 0x9a7267/0xac4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 197 heartbeat osd_stat(store_statfs(0x4faba9000/0x0/0x4ffc00000, data 0x9a7267/0xac4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1196404 data_alloc: 218103808 data_used: 520192
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 197 heartbeat osd_stat(store_statfs(0x4faba9000/0x0/0x4ffc00000, data 0x9a7267/0xac4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1196404 data_alloc: 218103808 data_used: 520192
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 197 heartbeat osd_stat(store_statfs(0x4faba9000/0x0/0x4ffc00000, data 0x9a7267/0xac4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1196404 data_alloc: 218103808 data_used: 520192
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 197 heartbeat osd_stat(store_statfs(0x4faba9000/0x0/0x4ffc00000, data 0x9a7267/0xac4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 197 heartbeat osd_stat(store_statfs(0x4faba9000/0x0/0x4ffc00000, data 0x9a7267/0xac4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 197 heartbeat osd_stat(store_statfs(0x4faba9000/0x0/0x4ffc00000, data 0x9a7267/0xac4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1196404 data_alloc: 218103808 data_used: 520192
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 197 heartbeat osd_stat(store_statfs(0x4faba9000/0x0/0x4ffc00000, data 0x9a7267/0xac4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1196404 data_alloc: 218103808 data_used: 520192
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 197 heartbeat osd_stat(store_statfs(0x4faba9000/0x0/0x4ffc00000, data 0x9a7267/0xac4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1196404 data_alloc: 218103808 data_used: 520192
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 197 heartbeat osd_stat(store_statfs(0x4faba9000/0x0/0x4ffc00000, data 0x9a7267/0xac4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 197 heartbeat osd_stat(store_statfs(0x4faba9000/0x0/0x4ffc00000, data 0x9a7267/0xac4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1196404 data_alloc: 218103808 data_used: 520192
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 197 heartbeat osd_stat(store_statfs(0x4faba9000/0x0/0x4ffc00000, data 0x9a7267/0xac4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1196404 data_alloc: 218103808 data_used: 520192
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 197 heartbeat osd_stat(store_statfs(0x4faba9000/0x0/0x4ffc00000, data 0x9a7267/0xac4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 197 heartbeat osd_stat(store_statfs(0x4faba9000/0x0/0x4ffc00000, data 0x9a7267/0xac4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1196404 data_alloc: 218103808 data_used: 520192
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 197 heartbeat osd_stat(store_statfs(0x4faba9000/0x0/0x4ffc00000, data 0x9a7267/0xac4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 197 heartbeat osd_stat(store_statfs(0x4faba9000/0x0/0x4ffc00000, data 0x9a7267/0xac4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1196404 data_alloc: 218103808 data_used: 520192
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 197 heartbeat osd_stat(store_statfs(0x4faba9000/0x0/0x4ffc00000, data 0x9a7267/0xac4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 197 heartbeat osd_stat(store_statfs(0x4faba9000/0x0/0x4ffc00000, data 0x9a7267/0xac4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1196404 data_alloc: 218103808 data_used: 520192
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 197 heartbeat osd_stat(store_statfs(0x4faba9000/0x0/0x4ffc00000, data 0x9a7267/0xac4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1196404 data_alloc: 218103808 data_used: 520192
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 197 heartbeat osd_stat(store_statfs(0x4faba9000/0x0/0x4ffc00000, data 0x9a7267/0xac4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 197 heartbeat osd_stat(store_statfs(0x4faba9000/0x0/0x4ffc00000, data 0x9a7267/0xac4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1196404 data_alloc: 218103808 data_used: 520192
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 197 heartbeat osd_stat(store_statfs(0x4faba9000/0x0/0x4ffc00000, data 0x9a7267/0xac4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1196404 data_alloc: 218103808 data_used: 520192
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 197 heartbeat osd_stat(store_statfs(0x4faba9000/0x0/0x4ffc00000, data 0x9a7267/0xac4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 197 heartbeat osd_stat(store_statfs(0x4faba9000/0x0/0x4ffc00000, data 0x9a7267/0xac4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 197 heartbeat osd_stat(store_statfs(0x4faba9000/0x0/0x4ffc00000, data 0x9a7267/0xac4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1196404 data_alloc: 218103808 data_used: 520192
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 197 heartbeat osd_stat(store_statfs(0x4faba9000/0x0/0x4ffc00000, data 0x9a7267/0xac4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1196404 data_alloc: 218103808 data_used: 520192
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 197 heartbeat osd_stat(store_statfs(0x4faba9000/0x0/0x4ffc00000, data 0x9a7267/0xac4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1196404 data_alloc: 218103808 data_used: 520192
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 197 heartbeat osd_stat(store_statfs(0x4faba9000/0x0/0x4ffc00000, data 0x9a7267/0xac4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 197 heartbeat osd_stat(store_statfs(0x4faba9000/0x0/0x4ffc00000, data 0x9a7267/0xac4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 197 heartbeat osd_stat(store_statfs(0x4faba9000/0x0/0x4ffc00000, data 0x9a7267/0xac4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 197 heartbeat osd_stat(store_statfs(0x4faba9000/0x0/0x4ffc00000, data 0x9a7267/0xac4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1196404 data_alloc: 218103808 data_used: 520192
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 197 heartbeat osd_stat(store_statfs(0x4faba9000/0x0/0x4ffc00000, data 0x9a7267/0xac4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1196404 data_alloc: 218103808 data_used: 520192
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 197 heartbeat osd_stat(store_statfs(0x4faba9000/0x0/0x4ffc00000, data 0x9a7267/0xac4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1196404 data_alloc: 218103808 data_used: 520192
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 197 heartbeat osd_stat(store_statfs(0x4faba9000/0x0/0x4ffc00000, data 0x9a7267/0xac4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1196404 data_alloc: 218103808 data_used: 520192
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 197 heartbeat osd_stat(store_statfs(0x4faba9000/0x0/0x4ffc00000, data 0x9a7267/0xac4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 197 heartbeat osd_stat(store_statfs(0x4faba9000/0x0/0x4ffc00000, data 0x9a7267/0xac4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 197 heartbeat osd_stat(store_statfs(0x4faba9000/0x0/0x4ffc00000, data 0x9a7267/0xac4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1196404 data_alloc: 218103808 data_used: 520192
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 197 heartbeat osd_stat(store_statfs(0x4faba9000/0x0/0x4ffc00000, data 0x9a7267/0xac4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1196404 data_alloc: 218103808 data_used: 520192
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 197 heartbeat osd_stat(store_statfs(0x4faba9000/0x0/0x4ffc00000, data 0x9a7267/0xac4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 197 heartbeat osd_stat(store_statfs(0x4faba9000/0x0/0x4ffc00000, data 0x9a7267/0xac4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 197 heartbeat osd_stat(store_statfs(0x4faba9000/0x0/0x4ffc00000, data 0x9a7267/0xac4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1196404 data_alloc: 218103808 data_used: 520192
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 197 heartbeat osd_stat(store_statfs(0x4faba9000/0x0/0x4ffc00000, data 0x9a7267/0xac4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1196404 data_alloc: 218103808 data_used: 520192
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 197 heartbeat osd_stat(store_statfs(0x4faba9000/0x0/0x4ffc00000, data 0x9a7267/0xac4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 197 heartbeat osd_stat(store_statfs(0x4faba9000/0x0/0x4ffc00000, data 0x9a7267/0xac4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1196404 data_alloc: 218103808 data_used: 520192
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 197 heartbeat osd_stat(store_statfs(0x4faba9000/0x0/0x4ffc00000, data 0x9a7267/0xac4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 197 heartbeat osd_stat(store_statfs(0x4faba9000/0x0/0x4ffc00000, data 0x9a7267/0xac4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 197 heartbeat osd_stat(store_statfs(0x4faba9000/0x0/0x4ffc00000, data 0x9a7267/0xac4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1196404 data_alloc: 218103808 data_used: 520192
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 197 heartbeat osd_stat(store_statfs(0x4faba9000/0x0/0x4ffc00000, data 0x9a7267/0xac4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1196404 data_alloc: 218103808 data_used: 520192
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 197 heartbeat osd_stat(store_statfs(0x4faba9000/0x0/0x4ffc00000, data 0x9a7267/0xac4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 197 heartbeat osd_stat(store_statfs(0x4faba9000/0x0/0x4ffc00000, data 0x9a7267/0xac4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1196404 data_alloc: 218103808 data_used: 520192
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 197 heartbeat osd_stat(store_statfs(0x4faba9000/0x0/0x4ffc00000, data 0x9a7267/0xac4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 197 heartbeat osd_stat(store_statfs(0x4faba9000/0x0/0x4ffc00000, data 0x9a7267/0xac4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1196404 data_alloc: 218103808 data_used: 520192
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 197 heartbeat osd_stat(store_statfs(0x4faba9000/0x0/0x4ffc00000, data 0x9a7267/0xac4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1196404 data_alloc: 218103808 data_used: 520192
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 197 heartbeat osd_stat(store_statfs(0x4faba9000/0x0/0x4ffc00000, data 0x9a7267/0xac4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 197 heartbeat osd_stat(store_statfs(0x4faba9000/0x0/0x4ffc00000, data 0x9a7267/0xac4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1196404 data_alloc: 218103808 data_used: 520192
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 197 heartbeat osd_stat(store_statfs(0x4faba9000/0x0/0x4ffc00000, data 0x9a7267/0xac4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1196404 data_alloc: 218103808 data_used: 520192
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 197 heartbeat osd_stat(store_statfs(0x4faba9000/0x0/0x4ffc00000, data 0x9a7267/0xac4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 197 heartbeat osd_stat(store_statfs(0x4faba9000/0x0/0x4ffc00000, data 0x9a7267/0xac4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 197 heartbeat osd_stat(store_statfs(0x4faba9000/0x0/0x4ffc00000, data 0x9a7267/0xac4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1196404 data_alloc: 218103808 data_used: 520192
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 197 heartbeat osd_stat(store_statfs(0x4faba9000/0x0/0x4ffc00000, data 0x9a7267/0xac4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1196404 data_alloc: 218103808 data_used: 520192
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 197 heartbeat osd_stat(store_statfs(0x4faba9000/0x0/0x4ffc00000, data 0x9a7267/0xac4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1196404 data_alloc: 218103808 data_used: 520192
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 197 heartbeat osd_stat(store_statfs(0x4faba9000/0x0/0x4ffc00000, data 0x9a7267/0xac4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1196404 data_alloc: 218103808 data_used: 520192
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 197 heartbeat osd_stat(store_statfs(0x4faba9000/0x0/0x4ffc00000, data 0x9a7267/0xac4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 197 heartbeat osd_stat(store_statfs(0x4faba9000/0x0/0x4ffc00000, data 0x9a7267/0xac4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1196404 data_alloc: 218103808 data_used: 520192
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 197 heartbeat osd_stat(store_statfs(0x4faba9000/0x0/0x4ffc00000, data 0x9a7267/0xac4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1196404 data_alloc: 218103808 data_used: 520192
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 197 heartbeat osd_stat(store_statfs(0x4faba9000/0x0/0x4ffc00000, data 0x9a7267/0xac4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 197 heartbeat osd_stat(store_statfs(0x4faba9000/0x0/0x4ffc00000, data 0x9a7267/0xac4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 197 heartbeat osd_stat(store_statfs(0x4faba9000/0x0/0x4ffc00000, data 0x9a7267/0xac4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1196404 data_alloc: 218103808 data_used: 520192
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 197 heartbeat osd_stat(store_statfs(0x4faba9000/0x0/0x4ffc00000, data 0x9a7267/0xac4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1196404 data_alloc: 218103808 data_used: 520192
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 197 heartbeat osd_stat(store_statfs(0x4faba9000/0x0/0x4ffc00000, data 0x9a7267/0xac4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1196404 data_alloc: 218103808 data_used: 520192
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 197 heartbeat osd_stat(store_statfs(0x4faba9000/0x0/0x4ffc00000, data 0x9a7267/0xac4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 197 heartbeat osd_stat(store_statfs(0x4faba9000/0x0/0x4ffc00000, data 0x9a7267/0xac4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1196404 data_alloc: 218103808 data_used: 520192
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 197 heartbeat osd_stat(store_statfs(0x4faba9000/0x0/0x4ffc00000, data 0x9a7267/0xac4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1196404 data_alloc: 218103808 data_used: 520192
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 197 heartbeat osd_stat(store_statfs(0x4faba9000/0x0/0x4ffc00000, data 0x9a7267/0xac4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1196404 data_alloc: 218103808 data_used: 520192
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 197 heartbeat osd_stat(store_statfs(0x4faba9000/0x0/0x4ffc00000, data 0x9a7267/0xac4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1196404 data_alloc: 218103808 data_used: 520192
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 197 heartbeat osd_stat(store_statfs(0x4faba9000/0x0/0x4ffc00000, data 0x9a7267/0xac4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1196404 data_alloc: 218103808 data_used: 520192
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 197 heartbeat osd_stat(store_statfs(0x4faba9000/0x0/0x4ffc00000, data 0x9a7267/0xac4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 197 heartbeat osd_stat(store_statfs(0x4faba9000/0x0/0x4ffc00000, data 0x9a7267/0xac4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1196404 data_alloc: 218103808 data_used: 520192
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 197 heartbeat osd_stat(store_statfs(0x4faba9000/0x0/0x4ffc00000, data 0x9a7267/0xac4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 197 heartbeat osd_stat(store_statfs(0x4faba9000/0x0/0x4ffc00000, data 0x9a7267/0xac4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 197 heartbeat osd_stat(store_statfs(0x4faba9000/0x0/0x4ffc00000, data 0x9a7267/0xac4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1196404 data_alloc: 218103808 data_used: 520192
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 197 heartbeat osd_stat(store_statfs(0x4faba9000/0x0/0x4ffc00000, data 0x9a7267/0xac4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 197 heartbeat osd_stat(store_statfs(0x4faba9000/0x0/0x4ffc00000, data 0x9a7267/0xac4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 197 heartbeat osd_stat(store_statfs(0x4faba9000/0x0/0x4ffc00000, data 0x9a7267/0xac4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1196404 data_alloc: 218103808 data_used: 520192
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 197 heartbeat osd_stat(store_statfs(0x4faba9000/0x0/0x4ffc00000, data 0x9a7267/0xac4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 197 heartbeat osd_stat(store_statfs(0x4faba9000/0x0/0x4ffc00000, data 0x9a7267/0xac4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 197 heartbeat osd_stat(store_statfs(0x4faba9000/0x0/0x4ffc00000, data 0x9a7267/0xac4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1196404 data_alloc: 218103808 data_used: 520192
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 197 heartbeat osd_stat(store_statfs(0x4faba9000/0x0/0x4ffc00000, data 0x9a7267/0xac4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1196404 data_alloc: 218103808 data_used: 520192
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 197 heartbeat osd_stat(store_statfs(0x4faba9000/0x0/0x4ffc00000, data 0x9a7267/0xac4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 197 heartbeat osd_stat(store_statfs(0x4faba9000/0x0/0x4ffc00000, data 0x9a7267/0xac4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 197 heartbeat osd_stat(store_statfs(0x4faba9000/0x0/0x4ffc00000, data 0x9a7267/0xac4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1196404 data_alloc: 218103808 data_used: 520192
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 197 heartbeat osd_stat(store_statfs(0x4faba9000/0x0/0x4ffc00000, data 0x9a7267/0xac4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1196404 data_alloc: 218103808 data_used: 520192
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 197 heartbeat osd_stat(store_statfs(0x4faba9000/0x0/0x4ffc00000, data 0x9a7267/0xac4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 197 heartbeat osd_stat(store_statfs(0x4faba9000/0x0/0x4ffc00000, data 0x9a7267/0xac4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1196404 data_alloc: 218103808 data_used: 520192
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 197 heartbeat osd_stat(store_statfs(0x4faba9000/0x0/0x4ffc00000, data 0x9a7267/0xac4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 197 heartbeat osd_stat(store_statfs(0x4faba9000/0x0/0x4ffc00000, data 0x9a7267/0xac4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 197 heartbeat osd_stat(store_statfs(0x4faba9000/0x0/0x4ffc00000, data 0x9a7267/0xac4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1196404 data_alloc: 218103808 data_used: 520192
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1196404 data_alloc: 218103808 data_used: 520192
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 197 heartbeat osd_stat(store_statfs(0x4faba9000/0x0/0x4ffc00000, data 0x9a7267/0xac4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1196404 data_alloc: 218103808 data_used: 520192
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct  1 10:22:55 np0005464214 ceph-osd[89484]: osd.1 197 heartbeat osd_stat(store_statfs(0x4faba9000/0x0/0x4ffc00000, data 0x9a7267/0xac4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
